…

flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

|
|
=item C<EVFLAG_NOINOTIFY>

When this flag is specified, then libev will not attempt to use the
I<inotify> API for its C<ev_stat> watchers. Apart from debugging and
testing, this flag can be useful to conserve inotify file descriptors, as
otherwise each loop using C<ev_stat> watchers consumes one inotify handle.

=item C<EVFLAG_NOSIGNALFD>

When this flag is specified, then libev will not attempt to use the
I<signalfd> API for its C<ev_signal> (and C<ev_child>) watchers. This is
probably only useful to work around any bugs in libev. Consequently, this
flag might go away once the signalfd functionality is considered stable,
so it's useful mostly in environment variables and not in program code.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when

…

It is definitely not recommended to use this flag.

=back

If one or more of the backend flags are or'ed into the flags value,
then only these backends will be tried (in the reverse order as listed
here). If none are specified, all backends in C<ev_recommended_backends
()> will be tried.

Example: This is the most typical usage.

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
…

more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

|
|
=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback instead. This is useful, for example, when you want to
invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as new
callback.

=item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_loop> can run for an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the loop via
C<ev_unloop> and C<ev_async_send>, another way is to set these I<release>
and I<acquire> callbacks on the loop.

When set, then C<release> will be called just before the thread is
suspended waiting for new events, and C<acquire> is called just
afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of
C<release> and C<acquire> (that's their only purpose after all), no
modifications done will affect the event loop, i.e. adding watchers will
have no effect on the set of file descriptors being watched, or the time
waited. Use an C<ev_async> watcher to wake up C<ev_loop> when you want it
to take note of any changes you made.

In theory, threads executing C<ev_loop> will be async-cancel safe between
invocations of C<release> and C<acquire>.

See also the locking example in the C<THREADS> section later in this
document.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

Set and retrieve a single C<void *> associated with a loop. When
C<ev_set_userdata> has never been called, then C<ev_userdata> returns
C<0>.

These two functions can be used to associate arbitrary data with a loop,
and are intended solely for the C<invoke_pending_cb>, C<release> and
C<acquire> callbacks described above, but of course can be (ab-)used for
any other purpose as well.

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
through all internal structures and checks them for validity. If anything

…

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>.

=head3 The special problems of suspended animation

When you leave the server world it is quite customary to hit machines that
can suspend/hibernate - what happens to the clocks during such a suspend?

Some quick tests made with a Linux 2.6.28 indicate that a suspend freezes
all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>) continue
to run until the system is suspended, but they will not advance while the
system is suspended. That means, on resume, it will be as if the program
was frozen for a few seconds, but the suspend time will not be counted
towards C<ev_timer> when a monotonic clock source is used. The real time
clock advanced as expected, but if it is used as sole clocksource, then a
long suspend would be detected as a time jump by libev, and timers would
be adjusted accordingly.

I would not be surprised to see different behaviour between different
operating systems, OS versions or even different hardware.

The other form of suspend (job control, or sending a SIGSTOP) will see a
time jump in the monotonic clocks and the realtime clock. If the program
is suspended for a very long time, and monotonic clock sources are in use,
then you can expect C<ev_timer>s to expire as the full suspension time
will be counted towards the timers. When no monotonic clock source is in
use, then libev will again assume a time jump and adjust accordingly.

It might be beneficial for this latter case to call C<ev_suspend>
and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
deterministic behaviour in this case (you can do nothing against
C<SIGSTOP>).

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

…

If the timer is repeating, either start it if necessary (with the
C<repeat> value), or reset the running timer to the C<repeat> value.

This sounds a bit complicated, see L<Be smart about timeouts>, above, for a
usage example.

|
|
=item ev_timer_remaining (loop, ev_timer *)

Returns the remaining time until a timer fires. If the timer is active,
then this time is relative to the current event loop time, otherwise it's
the timeout value currently configured.

That is, after an C<ev_timer_set (w, 5, 7)>, C<ev_timer_remaining>
returns C<5>. When the timer is started and one second passes,
C<ev_timer_remaining> will return C<4>. When the timer expires and is
restarted, it will return roughly C<7> (likely slightly less as callback
invocation takes some time, too), and so on.

=item ev_tstamp repeat [read-write]

The current C<repeat> value. Will be used each time the watcher times out
or C<ev_timer_again> is called, and determines the next timeout (if any),

…
Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
normal event processing, like any other event.

If you want signals to be delivered truly asynchronously, just use
C<sigaction> as you would do without libev and forget about sharing
the signal. You can even use C<ev_async> from a signal handler to
synchronously wake up an event loop.

You can configure as many watchers as you like for the same signal, but
only within the same loop, i.e. you can watch for C<SIGINT> in your
default loop and for C<SIGIO> in another loop, but you cannot watch for
C<SIGINT> in both the default loop and another loop at the same time. At
the moment, C<SIGCHLD> is permanently tied to the default loop.

Only when the first watcher gets started will libev actually register
something with the kernel (thus it coexists with your own signal handlers
as long as you don't register any with libev for the same signal).

Both the signal mask state (C<sigprocmask>) and the signal handler state
(C<sigaction>) are unspecified after starting a signal watcher (and after
stopping it again), that is, libev might or might not block the signal,
and might or might not set or restore the installed signal handler.

If possible and supported, libev will install its handlers with
C<SA_RESTART> (or equivalent) behaviour enabled, so system calls should
not be unduly interrupted. If you have a problem with system calls getting
interrupted by signals you can block all signals in an C<ev_check> watcher
and unblock them in an C<ev_prepare> watcher.

=head3 Watcher-Specific Functions and Data Members

=over 4

…
libev)

=head3 Process Interaction

Libev grabs C<SIGCHLD> as soon as the default event loop is
initialised. This is necessary to guarantee proper behaviour even if the
first child watcher is started after the child exits. The occurrence
of C<SIGCHLD> is recorded asynchronously, but child reaping is done
synchronously as part of the event loop processing. Libev always reaps all
children, even ones not watched.

=head3 Overriding the Built-In Processing

…
=head3 Stopping the Child Watcher

Currently, the child watcher never gets stopped, even when the
child terminates, so normally one needs to stop the watcher in the
callback. Future versions of libev might stop the watcher automatically
when a child exit is detected (calling C<ev_child_stop> twice is not a
problem).

=head3 Watcher-Specific Functions and Data Members

=over 4

…
defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed (but with the full API), define this symbol to C<1>. Currently this
is used to override some inlining decisions, which saves roughly 30% code
size on amd64. It also selects a much smaller 2-heap for timer management
over the default 4-heap.

You can save even more by disabling watcher types you do not need
and setting C<EV_MAXPRI> == C<EV_MINPRI>. Also, disabling C<assert>
(C<-DNDEBUG>) will usually reduce code size a lot.

Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.

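A sketch of a size-optimised embed combining the knobs above (the file
name and the exact set of C<EV_*_ENABLE> switches are illustrative
assumptions, check C<ev.h> for what your libev version supports):

```c
/* myev.c - hypothetical size-optimised libev embed */
#define EV_MINIMAL 1     /* favour size over speed, keep the full API */
#define EV_MAXPRI 0
#define EV_MINPRI 0      /* a single priority saves additional code */
#define EV_STAT_ENABLE 0 /* disable watcher types you do not need */
#define EV_IDLE_ENABLE 0
#define NDEBUG           /* disabling assert() reduces code size a lot */
#include "ev.c"
```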
|
|
=item EV_NSIG

The highest supported signal number, +1 (or, the number of
signals): Normally, libev tries to deduce the maximum number of signals
automatically, but sometimes this fails, in which case it can be
specified. Also, using a lower number than detected (C<32> should be
good for about any system in existence) can save some memory, as libev
statically allocates some 12-24 bytes per signal number.

=item EV_PID_HASHSIZE

C<ev_child> watchers use a small hash table to distribute workload by
pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more

…
default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal.

=back

|
|
=head4 THREAD LOCKING EXAMPLE

Here is a fictitious example of how to run an event loop in a different
thread than where callbacks are being invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (which is suited for many high-level
languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
      mutex_t lock; /* global loop lock */
      ev_async async_w;
      thread_t tid;
      cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
      // for simplicity, we use a static userdata struct.
      static userdata u;

      ev_async_init (&u.async_w, async_cb);
      ev_async_start (EV_A_ &u.async_w);

      pthread_mutex_init (&u.lock, 0);
      pthread_cond_init (&u.invoke_cv, 0);

      // now associate this with the loop
      ev_set_userdata (EV_A_ &u);
      ev_set_invoke_pending_cb (EV_A_ l_invoke);
      ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

      // then create the thread running ev_loop
      pthread_create (&u.tid, 0, l_run, EV_A);
   }

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
      // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
protecting the loop data, respectively.

   static void
   l_release (EV_P)
   {
      userdata *u = ev_userdata (EV_A);
      pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
      userdata *u = ev_userdata (EV_A);
      pthread_mutex_lock (&u->lock);
   }

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_loop>:

   void *
   l_run (void *thr_arg)
   {
      struct ev_loop *loop = (struct ev_loop *)thr_arg;

      l_acquire (EV_A);
      pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
      ev_loop (EV_A_ 0);
      l_release (EV_A);

      return 0;
   }

Instead of invoking all pending watchers, the C<l_invoke> callback will
signal the main thread via some unspecified mechanism (signals? pipe
writes? C<Async::Interrupt>?) and then wait until all pending watchers
have been called (in a while loop because a) spurious wakeups are possible
and b) skipping inter-thread-communication when there are no pending
watchers is very beneficial):

   static void
   l_invoke (EV_P)
   {
      userdata *u = ev_userdata (EV_A);

      while (ev_pending_count (EV_A))
        {
          wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
          pthread_cond_wait (&u->invoke_cv, &u->lock);
        }
   }

Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
      userdata *u = ev_userdata (EV_A);

      pthread_mutex_lock (&u->lock);
      ev_invoke_pending (EV_A);
      pthread_cond_signal (&u->invoke_cv);
      pthread_mutex_unlock (&u->lock);
   }

Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel will have no knowledge
about the newly added timer. By waking up the loop it will pick up any new
watchers in the next event loop iteration.

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines, and switch freely between both coroutines running
the loop, as long as you don't confuse yourself). The only exception is
that you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow for coroutine switches as
they do not call any callbacks.
