…
Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible,
i.e. keep at least one watcher active per fd at all times. Stopping and
starting a watcher (without re-setting it) also usually doesn't cause
extra overhead.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it was
broken on all BSDs except NetBSD (usually it doesn't work reliably with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "auto-detected" unless
you explicitly specify it in the flags (i.e. using C<EVBACKEND_KQUEUE>) or
libev was compiled on a known-to-be-good (-enough) system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to two
event changes per incident. Support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and, did I mention it,
using it only for sockets.

This backend maps C<EV_READ> into an C<EVFILT_READ> kevent with
C<NOTE_EOF>, and C<EV_WRITE> into an C<EVFILT_WRITE> kevent with
C<NOTE_EOF>.

…
While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

On the positive side, with the exception of the spurious readiness
notifications, this backend actually performed fully to specification in
all tests and is fully embeddable, which is a rare feat among the
OS-specific backends.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_ALL>
…

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as listed here). If none are
specified, all backends in C<ev_recommended_backends ()> will be tried.

Example: This is the most typical usage.

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

Example: Restrict libev to the select and poll backends, and do not allow
environment settings to be taken into account:

   ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);

Example: Use whatever libev has to offer, but make sure that kqueue is
used if available (warning, breaks stuff, best use only with your own
private event loop and only if you know the OS supports your types of
fds):

   ev_default_loop (ev_recommended_backends () | EVBACKEND_KQUEUE);

=item struct ev_loop *ev_loop_new (unsigned int flags)

…

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after fork that you want to re-use in the child, and how you do this is
entirely your own problem.

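As a sketch (assuming libev, and with the surrounding fork logic purely illustrative), the "call it in the child" dance looks like this:

```c
#include <ev.h>      /* libev; link with -lev */
#include <unistd.h>

/* Sketch: re-arm a loop created with ev_loop_new (0) in the child
   after fork (), before the child touches it again. */
static void
fork_and_use_loop (struct ev_loop *loop)
{
  pid_t pid = fork ();

  if (pid == 0)
    {
      /* child: the kernel state behind the loop (e.g. the epoll fd)
         is shared with the parent and must be re-created first */
      ev_loop_fork (loop);
      ev_loop (loop, 0);
    }
  /* parent: can keep using the loop unchanged */
}
```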
=item int ev_is_default_loop (loop)

Returns true when the given loop is, in fact, the default loop, and false
otherwise.

=item unsigned int ev_loop_count (loop)

Returns the count of loop iterations for the loop, which is identical to
the number of times libev did poll for new events. It starts at C<0> and
…
If the flags argument is specified as C<0>, it will not return until
either no event watchers are active anymore or C<ev_unloop> was called.

Please note that an explicit C<ev_unloop> is usually better than
relying on all watchers to be stopped when deciding when a program has
finished (especially in interactive programs), but having a program
that automatically loops as long as it has to and no longer by virtue
of relying on its watchers stopping correctly, that is truly a thing of
beauty.

A flags value of C<EVLOOP_NONBLOCK> will look for new events, will handle
those events and any already outstanding ones, but will not block your
process in case there are no events and will return after one iteration of
the loop.

A flags value of C<EVLOOP_ONESHOT> will look for new events (waiting if
necessary) and will handle those and any already outstanding ones. It
will block your process until at least one new event arrives (which could
be an event internal to libev itself, so there is no guarantee that a
user-registered callback will be called), and will return after one
iteration of the loop.

This is useful if you are waiting for some external event in conjunction
with something not expressible using other libev watchers (i.e. "roll your
own C<ev_loop>"). However, a pair of C<ev_prepare>/C<ev_check> watchers is
usually a better approach for this kind of thing.

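Such a hand-rolled loop might look like this sketch (assuming libev; the application-side names C<done> and C<do_other_stuff> are made up for illustration):

```c
#include <ev.h>   /* libev; link with -lev */

/* Hypothetical application-side pieces, not part of libev. */
extern volatile int done;
extern void do_other_stuff (void);

/* Sketch: "roll your own ev_loop" around EVLOOP_ONESHOT. */
void
my_event_loop (struct ev_loop *loop)
{
  while (!done)
    {
      do_other_stuff ();              /* work libev cannot express */
      ev_loop (loop, EVLOOP_ONESHOT); /* block until >= one event */
    }
}
```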
Here are the gory details of what C<ev_loop> does:

   - Before the first iteration, call any pending watchers.
…
     any active watchers at all will result in not sleeping).
   - Sleep if the I/O and timer collect interval say so.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" (ev_now ()), and do time jump adjustments.
   - Queue all expired timers.
   - Queue all expired periodics.
   - Unless any events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
…

=item ev_unref (loop)

Ref/unref can be used to add or remove a reference count on the event
loop: Every watcher keeps one reference, and as long as the reference
count is nonzero, C<ev_loop> will not return on its own.

If you have a watcher you never unregister that should not keep C<ev_loop>
from returning, call ev_unref() after starting, and ev_ref() before
stopping it.

As an example, libev itself uses this for its internal signal pipe: It is
not visible to the libev user and should not keep C<ev_loop> from exiting
if no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>
(but only if the watcher wasn't active before, or was active before,
respectively).

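The I<unref after start>, I<ref before stop> pattern for a recurring timer can be sketched like this (assuming libev; the watcher and callback names are made up):

```c
#include <ev.h>   /* libev; link with -lev */

static ev_timer housekeeping; /* hypothetical recurring timer */

static void
housekeeping_cb (EV_P_ ev_timer *w, int revents)
{
  /* periodic background work would go here */
}

static void
housekeeping_start (struct ev_loop *loop)
{
  ev_timer_init (&housekeeping, housekeeping_cb, 60., 60.);
  ev_timer_start (loop, &housekeeping);
  ev_unref (loop); /* unref after start: don't keep ev_loop alive */
}

static void
housekeeping_stop (struct ev_loop *loop)
{
  ev_ref (loop);   /* ref before stop: restore the reference count */
  ev_timer_stop (loop, &housekeeping);
}
```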
…
Setting these to a higher value (the C<interval> I<must> be >= C<0>)
allows libev to delay invocation of I/O and timer/periodic callbacks
to increase efficiency of loop iterations (or to increase power-saving
opportunities).

The idea is that sometimes your program runs just fast enough to handle
one (or very few) event(s) per loop iteration. While this makes the
program responsive, it also wastes a lot of CPU time to poll for new
events, especially with backends like C<select ()> which have a high
overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
…
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null
value will not introduce any overhead in libev.

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
…
they fire on, say, one-second boundaries only.

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
through all internal structures and checks them for validity. If anything
is found to be inconsistent, it will print an error message to standard
error and call C<abort ()>.

This can be used to catch bugs inside libev itself: under normal
circumstances, this function will never abort as of course libev keeps its
data structures consistent.

…
happen because the watcher could not be properly started because libev
ran out of memory, a file descriptor was found to be closed or any other
problem. You best act on it by reporting the problem and somehow coping
with the watcher being stopped.

Libev will usually signal a few "dummy" events together with an error, for
example it might indicate that a fd is readable or writable, and if your
callback is well-written it can just attempt the operation and cope with
the error from read() or write(). This will not work in multi-threaded
programs, though, as the fd could already be closed and reused for another
thing, so beware.

=back

=head2 GENERIC WATCHER FUNCTIONS

…
(or never started) and there are no pending events outstanding.

The callback is always of type C<void (*)(ev_loop *loop, ev_TYPE *watcher,
int revents)>.

Example: Initialise an C<ev_io> watcher in two steps.

   ev_io w;
   ev_init (&w, my_cb);
   ev_io_set (&w, STDIN_FILENO, EV_READ);

=item C<ev_TYPE_set> (ev_TYPE *, [args])

This macro initialises the type-specific parts of a watcher. You need to
call C<ev_init> at least once before you call this macro, but you can
call C<ev_TYPE_set> any number of times. You must not, however, call this
…
difference to the C<ev_init> macro).

Although some watcher types do not have type-specific arguments
(e.g. C<ev_prepare>) you still need to call its C<set> macro.

See C<ev_init>, above, for an example.

=item C<ev_TYPE_init> (ev_TYPE *watcher, callback, [args])

This convenience macro rolls both C<ev_init> and C<ev_TYPE_set> macro
calls into a single call. This is the most convenient method to initialise
a watcher. The same limitations apply, of course.

Example: Initialise and set an C<ev_io> watcher in one step.

   ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);

=item C<ev_TYPE_start> (loop *, ev_TYPE *watcher)

Starts (activates) the given watcher. Only active watchers will receive
events. If the watcher is already active nothing will happen.

Example: Start the C<ev_io> watcher that is being abused as example in this
whole section.

   ev_io_start (EV_DEFAULT_UC, &w);

=item C<ev_TYPE_stop> (loop *, ev_TYPE *watcher)

Stops the given watcher again (if active) and clears the pending
status. It is possible that stopped watchers are pending (for example,
…

=item ev_invoke (loop, ev_TYPE *watcher, int revents)

Invoke the C<watcher> with the given C<loop> and C<revents>. Neither
C<loop> nor C<revents> need to be valid as long as the watcher callback
can deal with that fact, as both are simply passed through to the
callback.

=item int ev_clear_pending (loop, ev_TYPE *watcher)

If the watcher is pending, this function clears its pending status and
returns its C<revents> bitset (as if its callback was invoked). If the
watcher isn't pending it does nothing and returns C<0>.

Sometimes it can be useful to "poll" a watcher instead of waiting for its
callback to be invoked, which can be accomplished with this function.

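For example, a small helper that handles a watcher's pending events synchronously, combining C<ev_clear_pending> with C<ev_invoke> (a sketch assuming libev; C<drain_pending> is a hypothetical name, not part of the API):

```c
#include <ev.h>   /* libev; link with -lev */

/* Sketch: handle a watcher's pending events right now instead of
   waiting for the event loop to invoke its callback. */
static void
drain_pending (struct ev_loop *loop, ev_io *w)
{
  int revents = ev_clear_pending (loop, w);

  if (revents)
    ev_invoke (loop, w, revents); /* run the callback ourselves */
}
```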
=back


=head2 ASSOCIATING CUSTOM DATA WITH A WATCHER

Each watcher has, by default, a member C<void *data> that you can change
and read at any time: libev will completely ignore it. This can be used
to associate arbitrary data with your watcher. If you need more data and
don't want to allocate memory and store a pointer to it in that data
member, you can also "subclass" the watcher type and provide your own
data:

…
     ev_timer t2;
   }

In this case getting the pointer to C<my_biggy> is a bit more
complicated: Either you store the address of your C<my_biggy> struct
in the C<data> member of the watcher (for woozies), or you need to use
some pointer arithmetic using C<offsetof> inside your watchers (for real
programmers):

   #include <stddef.h>

   static void
   t1_cb (EV_P_ struct ev_timer *w, int revents)
…
In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

If you cannot use non-blocking mode, then force the use of a
known-to-be-good backend (at the time of this writing, this includes only
C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>).

Another thing you have to watch out for is that it is quite easy to
receive "spurious" readiness notifications, that is your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
because there is no data. Not only are some backends known to create a
lot of those (for example Solaris ports), it is very easy to get into
this situation even with a relatively standard program structure. Thus
it is best to always use non-blocking I/O: An extra C<read>(2) returning
C<EAGAIN> is far preferable to a program hanging until some data arrives.

1122 | If you cannot run the fd in non-blocking mode (for example you should not |
1160 | If you cannot run the fd in non-blocking mode (for example you should |
1123 | play around with an Xlib connection), then you have to separately re-test |
1161 | not play around with an Xlib connection), then you have to separately |
1124 | whether a file descriptor is really ready with a known-to-be good interface |
1162 | re-test whether a file descriptor is really ready with a known-to-be good |
1125 | such as poll (fortunately in our Xlib example, Xlib already does this on |
1163 | interface such as poll (fortunately in our Xlib example, Xlib already |
1126 | its own, so its quite safe to use). |
1164 | does this on its own, so its quite safe to use). Some people additionally |
|
|
1165 | use C<SIGALRM> and an interval timer, just to be sure you won't block |
|
|
1166 | indefinitely. |
|
|
1167 | |
|
|
1168 | But really, best use non-blocking mode. |
1127 | |
1169 | |
=head3 The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll) need to be told about closing a file
descriptor (either due to calling C<close> explicitly or any other means,
such as C<dup2>). The reason is that you register interest in some file
descriptor, but when it goes away, the operating system will silently drop
this interest. If another file descriptor with the same number then is
registered with libev, there is no efficient way to see that this is, in
fact, a different file descriptor.
…

enable C<EVFLAG_FORKCHECK>, or resort to C<EVBACKEND_SELECT> or
C<EVBACKEND_POLL>.
=head3 The special problem of SIGPIPE

While not really specific to libev, it is easy to forget about C<SIGPIPE>:
when writing to a pipe whose other end has been closed, your program gets
sent a SIGPIPE, which, by default, aborts your program. For most programs
this is sensible behaviour; for daemons, it is usually undesirable.

So when you encounter spurious, unexplained daemon exits, make sure you
ignore SIGPIPE (and maybe make sure you log the exit status of your daemon
somewhere, as that would have given you a big clue).
…

=item ev_io_init (ev_io *, callback, int fd, int events)

=item ev_io_set (ev_io *, int fd, int events)

Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
receive events for and C<events> is either C<EV_READ>, C<EV_WRITE> or
C<EV_READ | EV_WRITE>, to express the desire to receive the given events.

=item int fd [read-only]

The file descriptor being watched.

…

   static void
   stdin_readable_cb (struct ev_loop *loop, struct ev_io *w, int revents)
   {
     ev_io_stop (loop, w);
     .. read from stdin here (or from w->fd) and handle any I/O errors
   }

   ...
   struct ev_loop *loop = ev_default_init (0);
   struct ev_io stdin_readable;

…
Timer watchers are simple relative timers that generate an event after a
given time, and optionally repeating in regular intervals after that.

The timers are based on real time, that is, if you register an event that
times out after an hour and you reset your system clock to January last
year, it will still time out after (roughly) one hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only I<after> its timeout has
passed, but if multiple timers become ready during the same loop iteration
then the order of execution is undefined.

=head3 The special problem of time updates

Establishing the current time is a costly operation (it usually takes at
least two system calls): EV therefore updates its idea of the current
time only before and after C<ev_loop> collects new events, which causes a
growing difference between C<ev_now ()> and C<ev_time ()> when handling
lots of events in one iteration.

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the

…
   ev_timer_again (loop, timer);

This is slightly more efficient than stopping/starting the timer each time
you want to modify its timeout value.

Note, however, that it is often even more efficient to remember the
time of the last activity and let the timer time out naturally. In the
callback, you then check whether the time-out is real, or, if there was
some activity, you reschedule the watcher to time out in "last_activity +
timeout - ev_now ()" seconds.
=item ev_tstamp repeat [read-write]

The current C<repeat> value. Will be used each time the watcher times out
or C<ev_timer_again> is called, and determines the next timeout (if any),
which is also when any modifications are taken into account.

=back

=head3 Examples

…
to trigger the event (unlike an C<ev_timer>, which would still trigger
roughly 10 seconds later as it uses a relative timeout).

C<ev_periodic>s can also be used to implement vastly more complex timers,
such as triggering an event on each "midnight, local time", or other
complicated rules.

As with timers, the callback is guaranteed to be invoked only when the
time (C<at>) has passed, but if multiple periodic timers become ready
during the same loop iteration, then the order of execution is undefined.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_periodic_init (ev_periodic *, callback, ev_tstamp at, ev_tstamp interval, reschedule_cb)

=item ev_periodic_set (ev_periodic *, ev_tstamp after, ev_tstamp repeat, reschedule_cb)

Lots of arguments, let's sort it out... There are basically three modes of
operation, and we will explain them from simplest to most complex:

=over 4

=item * absolute timer (at = time, interval = reschedule_cb = 0)

In this configuration the watcher triggers an event after the wall clock
time C<at> has passed. It will not repeat and will not adjust when a time
jump occurs, that is, if it is to be run at January 1st 2011 then it will
only run when the system clock reaches or surpasses this time.

=item * repeating interval timer (at = offset, interval > 0, reschedule_cb = 0)

In this mode the watcher will always be scheduled to time out at the next
C<at + N * interval> time (for some integer N, which can also be negative)
and then repeat, regardless of any time jumps.

This can be used to create timers that do not drift with respect to the
system clock, for example, here is an C<ev_periodic> that triggers each
hour, on the hour:

   ev_periodic_set (&periodic, 0., 3600., 0);

This doesn't mean there will always be 3600 seconds in between triggers,
but only that the callback will be called when the system time shows a

…
=back

=head3 Examples

Example: Call a callback every hour, or, more precisely, whenever the
system time is divisible by 3600. The callback invocation times have
potentially a lot of jitter, but good long-term stability.

   static void
   clock_cb (struct ev_loop *loop, struct ev_periodic *w, int revents)
   {

…
   #include <math.h>

   static ev_tstamp
   my_scheduler_cb (struct ev_periodic *w, ev_tstamp now)
   {
     return now + (3600. - fmod (now, 3600.));
   }

   ev_periodic_init (&hourly_tick, clock_cb, 0., 0., my_scheduler_cb);

Example: Call a callback every hour, starting now:

…
Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
normal event processing, like any other event.

If you want signals asynchronously, just use C<sigaction> as you would
do without libev and forget about sharing the signal. You can even use
C<ev_async> from a signal handler to synchronously wake up an event loop.

You can configure as many watchers as you like per signal. Only when the
first watcher gets started will libev actually register a signal handler
with the kernel (thus it coexists with your own signal handlers as long as
you don't register any with libev for the same signal). Similarly, when
the last signal watcher for a signal is stopped, libev will reset the
signal handler to SIG_DFL (regardless of what it was set to before).

If possible and supported, libev will install its handlers with
C<SA_RESTART> behaviour enabled, so system calls should not be unduly
interrupted. If you have a problem with system calls getting interrupted by
signals you can block all signals in an C<ev_check> watcher and unblock

…

=head2 C<ev_child> - watch out for process status changes

Child watchers trigger when your process receives a SIGCHLD in response to
some child status changes (most typically when a child of yours dies or
exits). It is permissible to install a child watcher I<after> the child
has been forked (which implies it might have already exited), as long
as the event loop isn't entered (or is continued from a watcher), i.e.,
forking and then immediately registering a watcher for the child is fine,
but forking and registering a watcher a few event loop iterations later is
not.

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.
=head3 Process Interaction

…
the stat buffer having unspecified contents.

The path I<should> be absolute and I<must not> end in a slash. If it is
relative and your working directory changes, the behaviour is undefined.

Since there is no standard kernel interface to do this, the portable
implementation simply calls C<stat (2)> regularly on the path to see if
it changed somehow. You can specify a recommended polling interval for
this case. If you specify a polling interval of C<0> (highly recommended!)
then a I<suitable, unspecified default> value will be used (which
you can expect to be around five seconds, although this might change
dynamically). Libev will also impose a minimum interval which is currently
around C<0.1>, but that's usually overkill.

This watcher type is not meant for massive numbers of stat watchers,
as even with OS-supported change notifications, this can be
resource-intensive.

At the time of this writing, the only OS-specific interface implemented
is the Linux inotify interface (implementing kqueue support is left as
an exercise for the reader. Note, however, that the author sees no way
of implementing C<ev_stat> semantics with kqueue).

=head3 ABI Issues (Largefile Support)

Libev by default (unless the user overrides this) uses the default
compilation environment, which means that on systems with large file

…

file interfaces available by default (as e.g. FreeBSD does) and not
optional. Libev cannot simply switch on large file support because it has
to exchange stat structures with application programs compiled using the
default compilation environment.
=head3 Inotify and Kqueue

When C<inotify (7)> support has been compiled into libev (generally only
available with Linux) and present at runtime, it will be used to speed up
change detection where possible. The inotify descriptor will be created
lazily when the first C<ev_stat> watcher is being started.

Inotify presence does not change the semantics of C<ev_stat> watchers
except that changes might be detected earlier, and that in some cases
regular C<stat> calls can be avoided. Even in the presence of inotify
support there are many cases where libev has to resort to regular C<stat>
polling, but as long as the path exists, libev usually gets away without
polling.

There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times, and detecting renames, unlinks
etc. is difficult.
=head3 The special problem of stat time resolution

The C<stat ()> system call only supports full-second resolution portably,
and even on systems where the resolution is higher, most file systems
still only support whole seconds.

That means that, if the time is the only thing that changes, you can
easily miss updates: on the first update, C<ev_stat> detects a change and
calls your callback, which does something. When there is another update
within the same second, C<ev_stat> will be unable to detect it unless the
stat data changes in other ways (e.g. file size).

The solution to this is to delay acting on a change for slightly more
than a second (or till slightly after the next full second boundary), using
a roughly one-second-delay C<ev_timer> (e.g. C<ev_timer_set (w, 0., 1.02);
ev_timer_again (loop, w)>).
…

C<path>. The C<interval> is a hint on how quickly a change is expected to
be detected and should normally be specified as C<0> to let libev choose
a suitable value. The memory pointed to by C<path> must point to the same
path for as long as the watcher is active.

The callback will receive an C<EV_STAT> event when a change was detected,
relative to the attributes at the time the watcher was started (or the
last change was detected).

=item ev_stat_stat (loop, ev_stat *)

Updates the stat buffer immediately with new values. If you change the
watched path in your callback, you could call this function to avoid

…

=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
priority are pending (prepare, check and other idle watchers do not count
as receiving "events").

That is, as long as your process is busy handling sockets or timeouts
(or even signals, imagine) of the same or higher priority it will not be
triggered. But when your process is idle (or only lower-priority watchers
are pending), the idle watchers are being called once per event loop

…
1940 | ev_idle_start (loop, idle_cb); |
1993 | ev_idle_start (loop, idle_cb); |
1941 | |
1994 | |
1942 | |
1995 | |
1943 | =head2 C<ev_prepare> and C<ev_check> - customise your event loop! |
1996 | =head2 C<ev_prepare> and C<ev_check> - customise your event loop! |
1944 | |
1997 | |
1945 | Prepare and check watchers are usually (but not always) used in tandem: |
1998 | Prepare and check watchers are usually (but not always) used in pairs: |
1946 | prepare watchers get invoked before the process blocks and check watchers |
1999 | prepare watchers get invoked before the process blocks and check watchers |
1947 | afterwards. |
2000 | afterwards. |
1948 | |
2001 | |
You I<must not> call C<ev_loop> or similar functions that enter
the current event loop from either C<ev_prepare> or C<ev_check>
…
those watchers, i.e. the sequence will always be C<ev_prepare>, blocking,
C<ev_check>, so if you have one watcher of each kind they will always be
called in pairs bracketing the blocking call.

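To make that bracketing concrete, here is a minimal sketch (the watcher
and callback names are hypothetical, not part of libev) that merely logs
around the blocking call - compile with C<-lev>:

```c
#include <ev.h>
#include <stdio.h>

/* hypothetical callbacks, purely to illustrate the ordering */
static void
before_block_cb (struct ev_loop *loop, ev_prepare *w, int revents)
{
  puts ("prepare: about to block");  /* flush caches, XFlush () etc. here */
}

static void
after_block_cb (struct ev_loop *loop, ev_check *w, int revents)
{
  puts ("check: woke up again");     /* collect other libraries' events here */
}

int
main (void)
{
  struct ev_loop *loop = ev_default_loop (0);
  static ev_prepare prep;
  static ev_check chk;

  ev_prepare_init (&prep, before_block_cb);
  ev_prepare_start (loop, &prep);

  ev_check_init (&chk, after_block_cb);
  ev_check_start (loop, &chk);

  /* each iteration runs prepare, then blocks, then runs check */
  ev_loop (loop, 0);
  return 0;
}
```
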
Their main purpose is to integrate other event mechanisms into libev and
their use is somewhat advanced. They could be used, for example, to track
variable changes, implement your own watchers, integrate net-snmp or a
coroutine library and lots more. They are also occasionally useful if
you cache some data and want to flush it before blocking (for example,
in X programs you might want to do an C<XFlush ()> in an C<ev_prepare>
watcher).

This is done by examining in each prepare call which file descriptors
need to be watched by the other library, registering C<ev_io> watchers
for them and starting an C<ev_timer> watcher for any timeouts (many
libraries provide exactly this functionality). Then, in the check watcher,
you check for any events that occurred (by checking the pending status
of all watchers and stopping them) and call back into the library. The
I/O and timer callbacks will never actually be called (but must be valid
nevertheless, because you never know, you know?).

As another example, the Perl Coro module uses these hooks to integrate
coroutines into libev programs, by yielding to other active coroutines
during each prepare and only letting the process block if no coroutines
are ready to run (it's actually more complicated: it only runs coroutines
…
loop from blocking if lower-priority coroutines are active, thus mapping
low-priority coroutines to idle/background tasks).

It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
priority, to ensure that they are being run before any other watchers
after the poll (this doesn't matter for C<ev_prepare> watchers).

Also, C<ev_check> watchers (and C<ev_prepare> watchers, too) should not
activate ("feed") events into libev. While libev fully supports this, they
might get executed before other C<ev_check> watchers did their job. As
C<ev_check> watchers are often used to embed other (non-libev) event
loops, those other event loops might be in an unusable state until their
C<ev_check> watcher ran (always remind yourself to coexist peacefully with
others).

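Following that recommendation might look like this (a fragment, assuming
C<loop> and a callback C<checker_cb> already exist - both names are
placeholders):

```c
ev_check checker;

ev_check_init (&checker, checker_cb);
ev_set_priority (&checker, EV_MAXPRI); /* run before all other watchers after the poll */
ev_check_start (loop, &checker);
```
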
=head3 Watcher-Specific Functions and Data Members

=over 4

…

=item ev_check_init (ev_check *, callback)

Initialises and configures the prepare or check watcher - they have no
parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
macros, but using them is utterly, utterly, utterly and completely
pointless.

=back

=head3 Examples

…
   }

   // do not ever call adns_afterpoll

Method 4: Do not use a prepare or check watcher because the module you
want to embed is not flexible enough to support it. Instead, you can
override their poll function. The drawback with this solution is that the
main loop is now no longer controllable by EV. The C<Glib::EV> module uses
this approach, effectively embedding EV as a client into the horrible
libglib event loop.

   static gint
   event_poll_func (GPollFD *fds, guint nfds, gint timeout)
   {
     int got_events = 0;
…
prioritise I/O.

As an example for a bug workaround, the kqueue backend might only support
sockets on some platform, so it is unusable as a generic backend, but you
still want to make use of it because you have many sockets and it scales
so nicely. In this case, you would create a kqueue-based loop and embed
it into your default loop (which might use e.g. poll). Overall operation
will be a bit slower because first libev has to call C<poll> and then
C<kevent>, but at least you can use both mechanisms for what they are
best at: C<kqueue> for scalable sockets and C<poll> if you want it to
work :)

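A sketch of that setup (whether you actually get an embeddable loop
depends on the platform, hence the fallback; C<loop_socket> is a name
chosen for this example):

```c
struct ev_loop *loop = ev_default_loop (0); /* e.g. poll-based */
struct ev_loop *loop_socket = 0;
ev_embed embed;

/* only create a second loop if an embeddable backend is also recommended */
if (ev_embeddable_backends () & ev_recommended_backends ())
  loop_socket = ev_loop_new (ev_embeddable_backends ()
                             & ev_recommended_backends ());

if (loop_socket)
  {
    ev_embed_init (&embed, 0, loop_socket); /* 0 = sweep automatically */
    ev_embed_start (loop, &embed);
  }
else
  loop_socket = loop; /* fall back: put the sockets into the main loop */

/* now register all sockets with loop_socket, everything else with loop */
```
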
As for prioritising I/O: under rare circumstances you have the case where
some fds have to be watched and handled very quickly (with low latency),
and even priorities and idle watchers might have too much overhead. In
this case you would put all the high priority stuff in one loop and all
the rest in a second one, and embed the second one in the first.

As long as the watcher is active, the callback will be invoked every time
there might be events pending in the embedded loop. The callback must then
call C<ev_embed_sweep (mainloop, watcher)> to make a single sweep and invoke
their callbacks (you could also start an idle watcher to give the embedded
…
interested in that.

Also, no special provisions have currently been made for forking: when
you fork, you not only have to call C<ev_loop_fork> on both loops, but
you will also have to stop and restart any C<ev_embed> watchers yourself
- but you can use a fork watcher to handle this automatically, and future
versions of libev might do just that.

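One possible way to automate this with a fork watcher (a sketch only;
C<embed> and C<loop_socket> are assumed to be the C<ev_embed> watcher and
the embedded loop from the previous example):

```c
static struct ev_loop *loop_socket; /* the embedded loop */
static ev_embed embed;              /* runs in the main loop */

static void
fork_cb (struct ev_loop *loop, ev_fork *w, int revents)
{
  /* tell the embedded loop about the fork ... */
  ev_loop_fork (loop_socket);

  /* ... and restart the embed watcher so it picks up the new state */
  ev_embed_stop (loop, &embed);
  ev_embed_start (loop, &embed);
}

/* in your set-up code:
     ev_fork fork_watcher;
     ev_fork_init (&fork_watcher, fork_cb);
     ev_fork_start (loop, &fork_watcher);     */
```
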
Unfortunately, not all backends are embeddable, only the ones returned by
C<ev_embeddable_backends> are, which, unfortunately, does not include any
portable one.
