…
happily wraps around with enough iterations.

This value can sometimes be useful as a generation counter of sorts (it
"ticks" the number of loop iterations), as it roughly corresponds with
C<ev_prepare> and C<ev_check> calls.
|
|

=item unsigned int ev_loop_depth (loop)

Returns the number of times C<ev_loop> was entered minus the number of
times C<ev_loop> was exited, in other words, the recursion depth.

Outside C<ev_loop>, this number is zero. In a callback, this number is
C<1>, unless C<ev_loop> was invoked recursively (or from another thread),
in which case it is higher.

Leaving C<ev_loop> abnormally (setjmp/longjmp, cancelling the thread,
etc.) does not count as an exit.

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.
…
more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

|
|
=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.
|
|

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback instead. This is useful, for example, when you want to
invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as the new
callback.
|
|

=item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_loop> can run for an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the loop via
C<ev_unloop> and C<ev_async_send>, another way is to set these I<release>
and I<acquire> callbacks on the loop.

When set, then C<release> will be called just before the thread is
suspended waiting for new events, and C<acquire> is called just
afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of
C<release> and C<acquire> (that's their only purpose after all), no
modifications done will affect the event loop, i.e. adding watchers will
have no effect on the set of file descriptors being watched, or the time
waited. Use an C<ev_async> watcher to wake up C<ev_loop> when you want it
to take note of any changes you made.

In theory, threads executing C<ev_loop> will be async-cancel safe between
invocations of C<release> and C<acquire>.

See also the locking example in the C<THREADS> section later in this
document.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

Set and retrieve a single C<void *> associated with a loop. When
C<ev_set_userdata> has never been called, then C<ev_userdata> returns
C<0>.

These two functions can be used to associate arbitrary data with a loop,
and are intended solely for the C<invoke_pending_cb>, C<release> and
C<acquire> callbacks described above, but of course can be (ab-)used for
any other purpose as well.
|
|

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
through all internal structures and checks them for validity. If anything
…

The callback is guaranteed to be invoked only I<after> its timeout has
passed (not I<at>, so on systems with very low-resolution clocks this
might introduce a small delay). If multiple timers become ready during the
same loop iteration then the ones with earlier time-out values are invoked
before ones of the same priority with later time-out values (but this is
no longer true when a callback calls C<ev_loop> recursively).

=head3 Be smart about timeouts

Many real-world problems involve some kind of timeout, usually for error
recovery. A typical example is an HTTP request - if the other side hangs,
…
in the next callback invocation is not.

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.

Due to some design glitches inside libev, child watchers will always be
handled at maximum priority (their priority is set to C<EV_MAXPRI> by
libev).

=head3 Process Interaction

Libev grabs C<SIGCHLD> as soon as the default event loop is
initialised. This is necessary to guarantee proper behaviour even if
the first child watcher is started after the child exits. The occurrence
…
defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed (but with the full API), define this symbol to C<1>. Currently this
is used to override some inlining decisions and saves roughly 30% code
size on amd64. It also selects a much smaller 2-heap for timer management
over the default 4-heap.

You can save even more by disabling watcher types you do not need
and setting C<EV_MAXPRI> == C<EV_MINPRI>. Also, disabling C<assert>
(C<-DNDEBUG>) will usually reduce code size a lot.

Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.
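
As an illustration, a single-file embedded build might combine these knobs
as follows. This is only a sketch - it assumes you embed libev by
compiling F<ev.c> directly, and which watcher types you can disable, and
the exact savings, depend on your libev version and platform:

```c
/* build configuration for a size-optimised embedded libev */
#define EV_MINIMAL 1        /* smaller code, but the full API */
#define EV_STAT_ENABLE 0    /* drop watcher types you do not need */
#define EV_EMBED_ENABLE 0
#define EV_MINPRI 0         /* a single priority saves further space */
#define EV_MAXPRI 0
#include "ev.c"
```

Compile the resulting file with C<-DNDEBUG> for the additional C<assert>
savings mentioned above.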

=item EV_PID_HASHSIZE

C<ev_child> watchers use a small hash table to distribute workload by
pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more
…
default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal.

=back

|
|
=head4 THREAD LOCKING EXAMPLE

Here is a fictitious example of how to run an event loop in a different
thread than where callbacks are being invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (which is suited for many high-level
languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
     mutex_t lock; /* global loop lock */
     ev_async async_w;
     thread_t tid;
     cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
     // for simplicity, we use a static userdata struct.
     static userdata u;

     ev_async_init (&u.async_w, async_cb);
     ev_async_start (EV_A_ &u.async_w);

     pthread_mutex_init (&u.lock, 0);
     pthread_cond_init (&u.invoke_cv, 0);

     // now associate this with the loop
     ev_set_userdata (EV_A_ &u);
     ev_set_invoke_pending_cb (EV_A_ l_invoke);
     ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

     // then create the thread running ev_loop
     pthread_create (&u.tid, 0, l_run, EV_A);
   }
|
|

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
     // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
protecting the loop data, respectively.

   static void
   l_release (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_lock (&u->lock);
   }
|
|

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_loop>:

   void *
   l_run (void *thr_arg)
   {
     struct ev_loop *loop = (struct ev_loop *)thr_arg;

     l_acquire (EV_A);
     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
     ev_loop (EV_A_ 0);
     l_release (EV_A);

     return 0;
   }
|
|

Instead of invoking all pending watchers, the C<l_invoke> callback will
signal the main thread via some unspecified mechanism (signals? pipe
writes? C<Async::Interrupt>?) and then wait until all pending watchers
have been called (in a while loop because a) spurious wakeups are possible
and b) skipping inter-thread-communication when there are no pending
watchers is very beneficial):

   static void
   l_invoke (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     while (ev_pending_count (EV_A))
       {
         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
         pthread_cond_wait (&u->invoke_cv, &u->lock);
       }
   }
|
|

Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     pthread_mutex_lock (&u->lock);
     ev_invoke_pending (EV_A);
     pthread_cond_signal (&u->invoke_cv);
     pthread_mutex_unlock (&u->lock);
   }
|
|

Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel will have no knowledge
about the newly added timer. By waking up the loop it will pick up any new
watchers in the next event loop iteration.

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines, and switch freely between both coroutines running
the loop, as long as you don't confuse yourself). The only exception is
that you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow for coroutine switches as
they do not call any callbacks.

…
=item C<double> must hold a time value in seconds with enough accuracy

The type C<double> is used to represent timestamps. It is required to
have at least 51 bits of mantissa (and 9 bits of exponent), which is
good enough to represent times until at least the year 4000. This
requirement is fulfilled by implementations implementing IEEE 754, which
is basically all existing ones. With IEEE 754 doubles, you get microsecond
accuracy until at least 2200.

=back

If you know of additional requirements, drop me a note.
