… | |
… | |
=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
for file descriptor events (C<ev_io>), the Linux C<inotify> interface
(for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)),
relative timers (C<ev_timer>), absolute timers with customised
rescheduling (C<ev_periodic>), synchronous signals (C<ev_signal>),
process status change events (C<ev_child>), and event watchers dealing
with the event loop mechanism itself (C<ev_idle>, C<ev_embed>,
C<ev_prepare> and C<ev_check> watchers) as well as file watchers
(C<ev_stat>) and even limited support for fork events (C<ev_fork>).

It also is quite fast (see this
L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent
for example).

… | |
… | |
flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

=item C<EVFLAG_NOINOTIFY>

When this flag is specified, then libev will not attempt to use the
I<inotify> API for its C<ev_stat> watchers. Apart from debugging and
testing, this flag can be useful to conserve inotify file descriptors, as
otherwise each loop using C<ev_stat> watchers consumes one inotify handle.

=item C<EVFLAG_NOSIGFD>

When this flag is specified, then libev will not attempt to use the
I<signalfd> API for its C<ev_signal> (and C<ev_child>) watchers. This is
probably only useful to work around any bugs in libev. Consequently, this
flag might go away once the signalfd functionality is considered stable,
so it's useful mostly in environment variables and not in program code.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
… | |
… | |

This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).

For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds).
… | |
… | |

It is definitely not recommended to use this flag.

=back

If one or more of the backend flags are or'ed into the flags value,
then only these backends will be tried (in the reverse order as listed
here). If none are specified, all backends in C<ev_recommended_backends
()> will be tried.

Example: This is the most typical usage.

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
… | |
… | |
as signal and child watchers) would need to be stopped manually.

In general it is not advisable to call this function except in the
rare occasion where you really need to free e.g. the signal handling
pipe fds. If you need dynamically allocated loops it is better to use
C<ev_loop_new> and C<ev_loop_destroy>.

=item ev_loop_destroy (loop)

Like C<ev_default_destroy>, but destroys an event loop created by an
earlier call to C<ev_loop_new>.
… | |
… | |
event loop time (see C<ev_now_update>).

=item ev_loop (loop, int flags)

Finally, this is it, the event handler. This function usually is called
after you have initialised all your watchers and you want to start
handling events.

If the flags argument is specified as C<0>, it will not return until
either no event watchers are active anymore or C<ev_unloop> was called.

Please note that an explicit C<ev_unloop> is usually better than
… | |
… | |

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback instead. This is useful, for example, when you want to
… | |
… | |
suspended waiting for new events, and C<acquire> is called just
afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of
C<release> and C<acquire> (that's their only purpose after all), no
modifications done will affect the event loop, i.e. adding watchers will
have no effect on the set of file descriptors being watched, or the time
waited. Use an C<ev_async> watcher to wake up C<ev_loop> when you want it
to take note of any changes you made.

In theory, threads executing C<ev_loop> will be async-cancel safe between
invocations of C<release> and C<acquire>.

See also the locking example in the C<THREADS> section later in this
document.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

… | |
… | |
returns its C<revents> bitset (as if its callback was invoked). If the
watcher isn't pending it does nothing and returns C<0>.

Sometimes it can be useful to "poll" a watcher instead of waiting for its
callback to be invoked, which can be accomplished with this function.

=item ev_feed_event (struct ev_loop *, watcher *, int revents)

Feeds the given event set into the event loop, as if the specified event
had happened for the specified watcher (which must be a pointer to an
initialised but not necessarily started event watcher). Obviously you must
not free the watcher as long as it has pending events.

Stopping the watcher, letting libev invoke it, or calling
C<ev_clear_pending> will clear the pending event, even if the watcher was
not started in the first place.

See also C<ev_feed_fd_event> and C<ev_feed_signal_event> for related
functions that do not need a watcher.

=back


=head2 ASSOCIATING CUSTOM DATA WITH A WATCHER
… | |
… | |

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>.

=head3 The special problems of suspended animation

When you leave the server world it is quite customary to hit machines that
can suspend/hibernate - what happens to the clocks during such a suspend?

Some quick tests made with a Linux 2.6.28 indicate that a suspend freezes
all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>) continue
to run until the system is suspended, but they will not advance while the
system is suspended. That means, on resume, it will be as if the program
was frozen for a few seconds, but the suspend time will not be counted
towards C<ev_timer> when a monotonic clock source is used. The real time
clock advanced as expected, but if it is used as sole clocksource, then a
long suspend would be detected as a time jump by libev, and timers would
be adjusted accordingly.
|
|
I would not be surprised to see different behaviour between different
operating systems, OS versions or even different hardware.

The other form of suspend (job control, or sending a SIGSTOP) will see a
time jump in the monotonic clocks and the realtime clock. If the program
is suspended for a very long time, and monotonic clock sources are in use,
then you can expect C<ev_timer>s to expire as the full suspension time
will be counted towards the timers. When no monotonic clock source is in
use, then libev will again assume a time jump and adjust accordingly.

It might be beneficial for this latter case to call C<ev_suspend>
and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
deterministic behaviour in this case (you can do nothing against
C<SIGSTOP>).

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)
… | |
… | |
If the timer is repeating, either start it if necessary (with the
C<repeat> value), or reset the running timer to the C<repeat> value.

This sounds a bit complicated, see L<Be smart about timeouts>, above, for a
usage example.

=item ev_timer_remaining (loop, ev_timer *)

Returns the remaining time until a timer fires. If the timer is active,
then this time is relative to the current event loop time, otherwise it's
the timeout value currently configured.

That is, after an C<ev_timer_set (w, 5, 7)>, C<ev_timer_remaining> returns
C<5>. When the timer is started and one second passes,
C<ev_timer_remaining> will return C<4>. When the timer expires and is
restarted, it will return roughly C<7> (likely slightly less as callback
invocation takes some time, too), and so on.

=item ev_tstamp repeat [read-write]

The current C<repeat> value. Will be used each time the watcher times out
or C<ev_timer_again> is called, and determines the next timeout (if any),
… | |
… | |
Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
normal event processing, like any other event.

If you want signals to be delivered truly asynchronously, just use
C<sigaction> as you would do without libev and forget about sharing
the signal. You can even use C<ev_async> from a signal handler to
synchronously wake up an event loop.

You can configure as many watchers as you like for the same signal, but
only within the same loop, i.e. you can watch for C<SIGINT> in your
default loop and for C<SIGIO> in another loop, but you cannot watch for
C<SIGINT> in both the default loop and another loop at the same time. At
the moment, C<SIGCHLD> is permanently tied to the default loop.

Only when the first watcher gets started will libev actually register
something with the kernel (thus it coexists with your own signal handlers
as long as you don't register any with libev for the same signal).

If possible and supported, libev will install its handlers with
C<SA_RESTART> (or equivalent) behaviour enabled, so system calls should
not be unduly interrupted. If you have a problem with system calls getting
interrupted by signals you can block all signals in an C<ev_check> watcher
and unblock them in an C<ev_prepare> watcher.

=head3 The special problem of inheritance over execve

Both the signal mask (C<sigprocmask>) and the signal disposition
(C<sigaction>) are unspecified after starting a signal watcher (and after
stopping it again), that is, libev might or might not block the signal,
and might or might not set or restore the installed signal handler.

While this does not matter for the signal disposition (libev never
sets signals to C<SIG_IGN>, so handlers will be reset to C<SIG_DFL> on
C<execve>), this matters for the signal mask: many programs do not expect
certain signals to be blocked.

This means that before calling C<exec> (from the child) you should reset
the signal mask to whatever "default" you expect (all clear is a good
choice usually).

The simplest way to ensure that the signal mask is reset in the child is
to install a fork handler with C<pthread_atfork> that resets it. That will
catch fork calls done by libraries (such as the libc) as well.
|
|

In current versions of libev, you can also ensure that the signal mask is
not blocking any signals (except temporarily, so thread users watch out)
by specifying the C<EVFLAG_NOSIGFD> flag when creating the event loop.
This is not guaranteed for future versions, however.

=head3 Watcher-Specific Functions and Data Members

=over 4

… | |
… | |
libev)

=head3 Process Interaction

Libev grabs C<SIGCHLD> as soon as the default event loop is
initialised. This is necessary to guarantee proper behaviour even if the
first child watcher is started after the child exits. The occurrence
of C<SIGCHLD> is recorded asynchronously, but child reaping is done
synchronously as part of the event loop processing. Libev always reaps all
children, even ones not watched.

=head3 Overriding the Built-In Processing
… | |
… | |
=head3 Stopping the Child Watcher

Currently, the child watcher never gets stopped, even when the
child terminates, so normally one needs to stop the watcher in the
callback. Future versions of libev might stop the watcher automatically
when a child exit is detected (calling C<ev_child_stop> twice is not a
problem).

=head3 Watcher-Specific Functions and Data Members

=over 4

… | |
… | |
       /* doh, nothing entered */;
   }

   ev_once (STDIN_FILENO, EV_READ, 10., stdin_ready, 0);

=item ev_feed_fd_event (struct ev_loop *, int fd, int revents)

Feed an event on the given fd, as if a file descriptor backend detected
the given events on it.

… | |
… | |
=item Ocaml

Erkki Seppala has written Ocaml bindings for libev, to be found at
L<http://modeemi.cs.tut.fi/~flux/software/ocaml-ev/>.

=item Lua

Brian Maher has written a partial interface to libev
for lua (only C<ev_io> and C<ev_timer>), to be found at
L<http://github.com/brimworks/lua-ev>.

=back


=head1 MACRO MAGIC

… | |
… | |
keeps libev from including F<config.h>, and it also defines dummy
implementations for some libevent functions (such as logging, which is not
supported). It will also not define any of the structs usually found in
F<event.h> that are not directly supported by the libev core alone.

In standalone mode, libev will still try to automatically deduce the
configuration, but has to be more conservative.

=item EV_USE_MONOTONIC

If defined to be C<1>, libev will try to detect the availability of the
… | |
… | |
be used is the winsock select). This means that it will call
C<_get_osfhandle> on the fd to convert it to an OS handle. Otherwise,
it is assumed that all these functions actually work on fds, even
on win32. Should not be defined on non-win32 platforms.

=item EV_FD_TO_WIN32_HANDLE(fd)

If C<EV_SELECT_IS_WINSOCKET> is enabled, then libev needs a way to map
file descriptors to socket handles. When not defining this symbol (the
default), then libev will call C<_get_osfhandle>, which is usually
correct. In some cases, programs use their own file descriptor management,
in which case they can provide this function to map fds to socket handles.

=item EV_WIN32_HANDLE_TO_FD(handle)

If C<EV_SELECT_IS_WINSOCKET> is enabled, then libev maps handles to file
descriptors using the standard C<_open_osfhandle> function. For programs
implementing their own fd to handle mapping, overwriting this function
makes it easier to do so. This can be done by defining this macro to an
appropriate value.

=item EV_WIN32_CLOSE_FD(fd)

If programs implement their own fd to handle mapping on win32, then this
macro can be used to override the C<close> function, useful to unregister
file descriptors again. Note that the replacement function has to close
the underlying OS handle.
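
Put together, a program with its own fd-to-handle management could
configure all three hooks before including F<ev.h> (a hypothetical
sketch; C<my_fd_to_handle> and friends are placeholders for the
program's own bookkeeping, not libev functions):

```c
#define EV_SELECT_IS_WINSOCKET 1

/* placeholders for the program's own fd <-> handle bookkeeping */
#define EV_FD_TO_WIN32_HANDLE(fd)     my_fd_to_handle (fd)
#define EV_WIN32_HANDLE_TO_FD(handle) my_handle_to_fd (handle)
#define EV_WIN32_CLOSE_FD(fd)         my_close_fd (fd)

#include "ev.h"
```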

=item EV_USE_POLL

If defined to be C<1>, libev will compile in support for the C<poll>(2)
backend. Otherwise it will be enabled on non-win32 platforms. It
… | |
… | |
Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.

=item EV_NSIG

The highest supported signal number, +1 (or, the number of
signals): Normally, libev tries to deduce the maximum number of signals
automatically, but sometimes this fails, in which case it can be
specified. Also, using a lower number than detected (C<32> should be
good for about any system in existence) can save some memory, as libev
statically allocates some 12-24 bytes per signal number.

=item EV_PID_HASHSIZE

C<ev_child> watchers use a small hash table to distribute workload by
pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more
than enough. If you need to manage thousands of children you might want to
… | |
… | |
3928 | |
4075 | |
3929 | =back |
4076 | =back |
3930 | |
4077 | |
3931 | =head4 THREAD LOCKING EXAMPLE |
4078 | =head4 THREAD LOCKING EXAMPLE |
3932 | |
4079 | |
|
|
Here is a fictitious example of how to run an event loop in a different
thread from the one where callbacks are invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (one that is well suited to many
high-level languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
     pthread_mutex_t lock; /* global loop lock */
     ev_async async_w;
     pthread_t tid;
     pthread_cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
      // for simplicity, we use a static userdata struct.
      static userdata u;

      ev_async_init (&u.async_w, async_cb);
      ev_async_start (EV_A_ &u.async_w);

      pthread_mutex_init (&u.lock, 0);
      pthread_cond_init (&u.invoke_cv, 0);

      // now associate this with the loop
      ev_set_userdata (EV_A_ &u);
      ev_set_invoke_pending_cb (EV_A_ l_invoke);
      ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

      // then create the thread running ev_loop
      pthread_create (&u.tid, 0, l_run, EV_A);
   }

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
     // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
protecting the loop data, respectively.

   static void
   l_release (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_lock (&u->lock);
   }

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_loop>:

   void *
   l_run (void *thr_arg)
   {
     struct ev_loop *loop = (struct ev_loop *)thr_arg;

     l_acquire (EV_A);
     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
     ev_loop (EV_A_ 0);
     l_release (EV_A);

     return 0;
   }

Instead of invoking all pending watchers, the C<l_invoke> callback will
signal the main thread via some unspecified mechanism (signals? pipe
writes? C<Async::Interrupt>?) and then wait until all pending watchers
have been called (in a while loop because a) spurious wakeups are possible
and b) skipping inter-thread-communication when there are no pending
watchers is very beneficial):

   static void
   l_invoke (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     while (ev_pending_count (EV_A))
       {
         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
         pthread_cond_wait (&u->invoke_cv, &u->lock);
       }
   }

Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     pthread_mutex_lock (&u->lock);
     ev_invoke_pending (EV_A);
     pthread_cond_signal (&u->invoke_cv);
     pthread_mutex_unlock (&u->lock);
   }

|
|
Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel would have no knowledge
of the newly added timer. Waking up the loop makes it pick up any new
watchers in the next event loop iteration.

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines, and switch freely between both coroutines running
the loop, as long as you don't confuse yourself). The only exception is
that you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow for coroutine switches as
they do not call any callbacks.
