…

      // unloop was called, so exit
      return 0;
   }

=head1 ABOUT THIS DOCUMENT

This document documents the libev software package.

The newest version of this document is also available as an html-formatted
web page you might find easier to navigate when reading it for the first
time: L<http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod>.

While this document tries to be as complete as possible in documenting
libev, its usage and the rationale behind its design, it is not a tutorial
on event-based programming, nor will it introduce event-based programming
with libev.

Familiarity with event-based programming techniques in general is assumed
throughout this document.

=head1 ABOUT LIBEV

Libev is an event loop: you register interest in certain events (such as a
file descriptor being readable or a timeout occurring), and it will manage
these event sources and provide your program with events.

…

name C<loop> (which is always of type C<ev_loop *>) will not have
this argument.

=head2 TIME REPRESENTATION

Libev represents time as a single floating point number, representing
the (fractional) number of seconds since the (POSIX) epoch (somewhere
near the beginning of 1970, details are complicated, don't ask). This
type is called C<ev_tstamp>, which is what you should use too. It usually
aliases to the C<double> type in C. When you need to do any calculations
on it, you should treat it as some floating point value. Unlike the name
component C<stamp> might indicate, it is also used for time differences
throughout libev.
123 | |
135 | |
124 | =head1 ERROR HANDLING |
136 | =head1 ERROR HANDLING |
125 | |
137 | |
… | |
… | |
609 | |
621 | |
610 | This value can sometimes be useful as a generation counter of sorts (it |
622 | This value can sometimes be useful as a generation counter of sorts (it |
611 | "ticks" the number of loop iterations), as it roughly corresponds with |
623 | "ticks" the number of loop iterations), as it roughly corresponds with |
612 | C<ev_prepare> and C<ev_check> calls. |
624 | C<ev_prepare> and C<ev_check> calls. |
613 | |
625 | |
|
|
626 | =item unsigned int ev_loop_depth (loop) |
|
|
627 | |
|
|
628 | Returns the number of times C<ev_loop> was entered minus the number of |
|
|
629 | times C<ev_loop> was exited, in other words, the recursion depth. |
|
|
630 | |
|
|
631 | Outside C<ev_loop>, this number is zero. In a callback, this number is |
|
|
632 | C<1>, unless C<ev_loop> was invoked recursively (or from another thread), |
|
|
633 | in which case it is higher. |
|
|
634 | |
|
|
635 | Leaving C<ev_loop> abnormally (setjmp/longjmp, cancelling the thread |
|
|
636 | etc.), doesn't count as exit. |

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.

…

This function is rarely useful, but when some event callback runs for a
very long time without entering the event loop, updating libev's idea of
the current time is a good idea.

See also "The special problem of time updates" in the C<ev_timer> section.

=item ev_suspend (loop)

=item ev_resume (loop)

…

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations. The
sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null

…

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems. Note that if
you do transactions with the outside world and you can't increase the
parallelism, then this setting will limit your transaction rate (if you
need to poll once per transaction and the I/O collect interval is 0.01,
then you can't do more than 100 transactions per second).

Setting the I<timeout collect interval> can improve the opportunity for
saving power, as the program will "bundle" timer callback invocations that
are "near" in time together, by delaying some, thus reducing the number of
times the process sleeps and wakes up again. Another useful technique to
reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
they fire on, say, one-second boundaries only.

Example: we only need 0.1s timeout granularity, and we wish not to poll
more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback instead. This is useful, for example, when you want to
invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as the new
callback.

=item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_loop> can run for an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the loop via
C<ev_unloop> and C<ev_async_send>, another way is to set these I<release>
and I<acquire> callbacks on the loop.

When set, C<release> will be called just before the thread is
suspended waiting for new events, and C<acquire> is called just
afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of
C<release> and C<acquire> (that's their only purpose after all), no
modifications done will affect the event loop, i.e. adding watchers will
have no effect on the set of file descriptors being watched, or the time
waited. Use an C<ev_async> watcher to wake up C<ev_loop> when you want it
to take note of any changes you made.

In theory, threads executing C<ev_loop> will be async-cancel safe between
invocations of C<release> and C<acquire>.

See also the locking example in the C<THREADS> section later in this
document.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

Set and retrieve a single C<void *> associated with a loop. When
C<ev_set_userdata> has never been called, then C<ev_userdata> returns
C<0>.

These two functions can be used to associate arbitrary data with a loop,
and are intended solely for the C<invoke_pending_cb>, C<release> and
C<acquire> callbacks described above, but of course can be (ab-)used for
any other purpose as well.

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
…

   #include <stddef.h>

   static void
   t1_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t1));
   }

   static void
   t2_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t2));
   }

=head2 WATCHER PRIORITY MODELS

…

     // with the default priority are receiving events.
     ev_idle_start (EV_A_ &idle);
   }

   static void
   idle_cb (EV_P_ ev_idle *w, int revents)
   {
     // actual processing
     read (STDIN_FILENO, ...);

     // have to start the I/O watcher again, as

…

descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

If you cannot use non-blocking mode, then force the use of a
known-to-be-good backend (at the time of this writing, this includes only
C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
descriptors for which non-blocking operation makes no sense (such as
files) - libev doesn't guarantee any specific behaviour in that case.

Another thing you have to watch out for is that it is quite easy to
receive "spurious" readiness notifications, that is your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
because there is no data. Not only are some backends known to create a

…

year, it will still time out after (roughly) one hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only I<after> its timeout has
passed (not I<at>, so on systems with very low-resolution clocks this
might introduce a small delay). If multiple timers become ready during the
same loop iteration then the ones with earlier time-out values are invoked
before ones of the same priority with later time-out values (but this is
no longer true when a callback calls C<ev_loop> recursively).

=head3 Be smart about timeouts

Many real-world problems involve some kind of timeout, usually for error
recovery. A typical example is an HTTP request - if the other side hangs,

…

C<after> argument to C<ev_timer_set>, and only ever use the C<repeat>
member and C<ev_timer_again>.

At start:

   ev_init (timer, callback);
   timer->repeat = 60.;
   ev_timer_again (loop, timer);

Each time there is some activity:

…

To start the timer, simply initialise the watcher and set C<last_activity>
to the current time (meaning we just have some activity :), then call the
callback, which will "do the right thing" and start the timer:

   ev_init (timer, callback);
   last_activity = ev_now (loop);
   callback (loop, timer, EV_TIMEOUT);

And when there is some activity, simply store the current time in
C<last_activity>, no libev calls at all:

…

   ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>.

=head3 The special problems of suspended animation

When you leave the server world it is quite customary to hit machines that
can suspend/hibernate - what happens to the clocks during such a suspend?

Some quick tests made with a Linux 2.6.28 indicate that a suspend freezes
all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>) continue
to run until the system is suspended, but they will not advance while the
system is suspended. That means, on resume, it will be as if the program
was frozen for a few seconds, but the suspend time will not be counted
towards C<ev_timer> when a monotonic clock source is used. The real time
clock advanced as expected, but if it is used as sole clocksource, then a
long suspend would be detected as a time jump by libev, and timers would
be adjusted accordingly.

I would not be surprised to see different behaviour between different
operating systems, OS versions or even different hardware.

The other form of suspend (job control, or sending a SIGSTOP) will see a
time jump in the monotonic clocks and the realtime clock. If the program
is suspended for a very long time, and monotonic clock sources are in use,
then you can expect C<ev_timer>s to expire as the full suspension time
will be counted towards the timers. When no monotonic clock source is in
use, then libev will again assume a timejump and adjust accordingly.

It might be beneficial for this latter case to call C<ev_suspend>
and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
deterministic behaviour in this case (you can do nothing against
C<SIGSTOP>).

=head3 Watcher-Specific Functions and Data Members

=over 4

…

some child status changes (most typically when a child of yours dies or
exits). It is permissible to install a child watcher I<after> the child
has been forked (which implies it might have already exited), as long
as the event loop isn't entered (or is continued from a watcher), i.e.,
forking and then immediately registering a watcher for the child is fine,
but forking and registering a watcher a few event loop iterations later or
in the next callback invocation is not.

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.

Due to some design glitches inside libev, child watchers will always be
handled at maximum priority (their priority is set to C<EV_MAXPRI> by
libev).

=head3 Process Interaction

Libev grabs C<SIGCHLD> as soon as the default event loop is
initialised. This is necessary to guarantee proper behaviour even if

…

     // no longer anything immediate to do.
   }

   ev_idle *idle_watcher = malloc (sizeof (ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop!

Prepare and check watchers are usually (but not always) used in pairs:

…

     struct pollfd fds [nfd];
     // actual code will need to loop here and realloc etc.
     adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ()));

     /* the callback is illegal, but won't be called as we stop during check */
     ev_timer_init (&tw, 0, timeout * 1e-3, 0.);
     ev_timer_start (loop, &tw);

     // create one ev_io per pollfd
     for (int i = 0; i < nfd; ++i)
       {

…

event loop blocks next and before C<ev_check> watchers are being called,
and only in the child after the fork. If the good citizen calling
C<ev_default_fork> cheats and calls it in the wrong process, the fork
handlers will of course be invoked, too.

=head3 The special problem of life after fork - how is it possible?

Most uses of C<fork()> consist of forking, then some simple calls to set
up/change the process environment, followed by a call to C<exec()>. This
sequence should be handled by libev without any problems.

This changes when the application actually wants to do event handling
in the child, or both parent and child, in effect "continuing" after the
fork.

The default mode of operation (for libev, with application help to detect
forks) is to duplicate all the state in the child, as would be expected
when I<either> the parent I<or> the child process continues.

When both processes want to continue using libev, then this is usually the
wrong result. In that case, usually one process (typically the parent) is
supposed to continue with all watchers in place as before, while the other
process typically wants to start fresh, i.e. without any active watchers.

The cleanest and most efficient way to achieve that with libev is to
simply create a new event loop, which of course will be "empty", and
use that for new watchers. This has the advantage of not touching more
memory than necessary, and thus avoiding the copy-on-write, and the
disadvantage of having to use multiple event loops (which do not support
signal watchers).

When this is not possible, or you want to use the default loop for
other reasons, then in the process that wants to start "fresh", call
C<ev_default_destroy ()> followed by C<ev_default_loop (...)>. Destroying
the default loop will "orphan" (not stop) all registered watchers, so you
have to be careful not to execute code that modifies those watchers. Note
also that in that case, you have to re-register any signal watchers.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_fork_init (ev_fork *, callback)
…

defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed (but with the full API), define this symbol to C<1>. Currently this
is used to override some inlining decisions, which saves roughly 30% code
size on amd64. It also selects a much smaller 2-heap for timer management
over the default 4-heap.

You can save even more by disabling watcher types you do not need
and setting C<EV_MAXPRI> == C<EV_MINPRI>. Also, disabling C<assert>
(C<-DNDEBUG>) will usually reduce code size a lot.

Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.
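Putting these together, a size-trimmed build might be compiled like this
(a hypothetical command line - which watcher types you can disable depends
on what your program uses; C<EV_STAT_ENABLE> and C<EV_EMBED_ENABLE> are
examples only):

```shell
# shave code size: EV_MINIMAL, a single priority, no assertions,
# and two (example) watcher types disabled
cc -c ev.c -DEV_MINIMAL=1 -DEV_MAXPRI=0 -DEV_MINPRI=0 \
   -DEV_STAT_ENABLE=0 -DEV_EMBED_ENABLE=0 -DNDEBUG
```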
3603 | |
3784 | |
3604 | =item EV_PID_HASHSIZE |
3785 | =item EV_PID_HASHSIZE |
3605 | |
3786 | |
3606 | C<ev_child> watchers use a small hash table to distribute workload by |
3787 | C<ev_child> watchers use a small hash table to distribute workload by |
3607 | pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more |
3788 | pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more |
… | |
… | |
3793 | default loop and triggering an C<ev_async> watcher from the default loop |
3974 | default loop and triggering an C<ev_async> watcher from the default loop |
3794 | watcher callback into the event loop interested in the signal. |
3975 | watcher callback into the event loop interested in the signal. |
3795 | |
3976 | |
3796 | =back |
3977 | =back |
3797 | |
3978 | |
|
|
=head4 THREAD LOCKING EXAMPLE

Here is a fictitious example of how to run an event loop in a different
thread than where callbacks are being invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (which is suited for many high-level
languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
     mutex_t lock; /* global loop lock */
     ev_async async_w;
     thread_t tid;
     cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
     // for simplicity, we use a static userdata struct.
     static userdata u;

     ev_async_init (&u.async_w, async_cb);
     ev_async_start (EV_A_ &u.async_w);

     pthread_mutex_init (&u.lock, 0);
     pthread_cond_init (&u.invoke_cv, 0);

     // now associate this with the loop
     ev_set_userdata (EV_A_ &u);
     ev_set_invoke_pending_cb (EV_A_ l_invoke);
     ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

     // then create the thread running ev_loop
     pthread_create (&u.tid, 0, l_run, EV_A);
   }
|
|
The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
     // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
protecting the loop data, respectively.

   static void
   l_release (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_lock (&u->lock);
   }

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_loop>:

   void *
   l_run (void *thr_arg)
   {
     struct ev_loop *loop = (struct ev_loop *)thr_arg;

     l_acquire (EV_A);
     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
     ev_loop (EV_A_ 0);
     l_release (EV_A);

     return 0;
   }
|
|
4064 | |
|
|
Instead of invoking all pending watchers, the C<l_invoke> callback will
signal the main thread via some unspecified mechanism (signals? pipe
writes? C<Async::Interrupt>?) and then wait until all pending watchers
have been called (in a while loop because a) spurious wakeups are possible
and b) skipping inter-thread-communication when there are no pending
watchers is very beneficial):

   static void
   l_invoke (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     while (ev_pending_count (EV_A))
       {
         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
         pthread_cond_wait (&u->invoke_cv, &u->lock);
       }
   }
|
|
Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     pthread_mutex_lock (&u->lock);
     ev_invoke_pending (EV_A);
     pthread_cond_signal (&u->invoke_cv);
     pthread_mutex_unlock (&u->lock);
   }
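
The wait/signal handshake between C<l_invoke> and C<real_invoke_pending>
can be exercised in isolation with plain pthreads (a self-contained
sketch; C<run_handshake>, C<worker> and the C<pending> flag are
illustrative stand-ins for libev's side of the protocol):

```c
#include <pthread.h>

static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cv = PTHREAD_COND_INITIALIZER;
static int pending = 1; /* stands in for ev_pending_count () */
static int invoked = 0;

/* plays the role of the thread that calls ev_invoke_pending */
static void *
worker (void *arg)
{
  (void)arg;
  pthread_mutex_lock (&lock);
  invoked = 1; /* "invoke the pending watchers" */
  pending = 0;
  pthread_cond_signal (&done_cv);
  pthread_mutex_unlock (&lock);
  return 0;
}

/* plays the role of l_invoke: wait, in a loop, until the work is done */
static int
run_handshake (void)
{
  pthread_t tid;

  pthread_mutex_lock (&lock);
  pthread_create (&tid, 0, worker, 0);

  while (pending) /* the loop guards against spurious wakeups */
    pthread_cond_wait (&done_cv, &lock);

  pthread_mutex_unlock (&lock);
  pthread_join (tid, 0);
  return invoked;
}
```

Note that C<pthread_cond_wait> atomically releases the mutex while
waiting, which is what lets the worker acquire it and make progress.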
|
|
Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel will have no knowledge
about the newly added timer. By waking up the loop it will pick up any new
watchers in the next event loop iteration.
|
|

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines, and switch freely between both coroutines running
the loop, as long as you don't confuse yourself). The only exception is
that you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow for coroutine switches as
they do not call any callbacks.

… | |
… | |
way (note also that glib is the slowest event library known to man).

There is no supported compilation method available on windows except
embedding it into other applications.

Sensible signal handling is officially unsupported by Microsoft - libev
tries its best, but under most conditions, signals will simply not work.

Not a libev limitation but worth mentioning: windows apparently doesn't
accept large writes: instead of resulting in a partial write, windows will
either accept everything or return C<ENOBUFS> if the buffer is too large,
so make sure you only write small amounts into your sockets (less than a
megabyte seems safe, but this apparently depends on the amount of memory
… | |
… | |
the abysmal performance of winsockets, using a large number of sockets
is not recommended (and not reasonable). If your program needs to use
more than a hundred or so sockets, then likely it needs to use a totally
different implementation for windows, as libev offers the POSIX readiness
notification model, which cannot be implemented efficiently on windows
(due to Microsoft monopoly games).

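
The "write small amounts" advice from above can be encoded in a tiny
helper that clamps each write before it reaches the socket (a sketch; the
name C<capped_write_len> and the 256 KiB cap are illustrative assumptions,
not libev API):

```c
#include <stddef.h>

/* hypothetical per-write cap, well below the ~1 megabyte danger zone */
#define EXAMPLE_WRITE_MAX ((size_t)262144)

/* clamp a requested write length before handing it to send () */
static size_t
capped_write_len (size_t len)
{
  return len > EXAMPLE_WRITE_MAX ? EXAMPLE_WRITE_MAX : len;
}
```

The caller then loops, advancing its buffer by however much actually got
written, just as with any partial write.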
A typical way to use libev under windows is to embed it (see the embedding
section for details) and use the following F<evwrap.h> header file instead
of F<ev.h>:

… | |
… | |

Early versions of winsocket's select only supported waiting for a maximum
of C<64> handles (probably owing to the fact that all windows kernels
can only wait for C<64> things at the same time internally; Microsoft
recommends spawning a chain of threads, each waiting for 63 handles and
the previous thread in the chain. Sounds great!).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere, for example, perl and many
other interpreters do their own select emulation on windows).

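
When you control the build, the usual trick is to define it in a wrapper
header before anything that drags in winsock (a sketch along the lines of
the F<evwrap.h> approach above; whether C<2048> is enough is your call):

```c
/* must be seen before any winsock header - e.g. include this instead of ev.h */
#define FD_SETSIZE 2048
#include "ev.h"
```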
Another limit is the number of file descriptors in the Microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64>
fetish or something like this inside Microsoft). You can increase this
limit to C<2048> (another arbitrary limit) by calling C<_setmaxstdio>,
but that call is broken in many versions of the Microsoft runtime
libraries. This might get you to about C<512> or C<2048> sockets
(depending on windows version and/or the phase of the moon). To get more,
you need to wrap all I/O functions and provide your own fd management, but
the cost of calling select (O(n²)) will likely make this unworkable.

=back

=head2 PORTABILITY REQUIREMENTS

… | |
… | |
=item C<double> must hold a time value in seconds with enough accuracy

The type C<double> is used to represent timestamps. It is required to
have at least 51 bits of mantissa (and 9 bits of exponent), which is
good enough to represent times until at least the year 4000. This
requirement is fulfilled by implementations implementing IEEE 754, which
is basically all existing ones. With IEEE 754 doubles, you get microsecond
accuracy until at least 2200.

=back

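
The accuracy figures above are easy to sanity-check (a sketch;
C<tstamp_resolution> is an illustrative helper, not libev API):

```c
#include <float.h>

/* approximate smallest representable increment of a double near timestamp t */
static double
tstamp_resolution (double t)
{
  /* halve eps until adding it no longer changes t; the result is within
     a factor of two of the true spacing of doubles around t */
  double eps = 1.0;

  while (t + eps / 2. != t)
    eps /= 2.;

  return eps;
}
```

For example, C<tstamp_resolution (4e9)> (a timestamp late in the 21st
century) is still below a microsecond, and the resolution stays well below
a second out to the year 4000.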
If you know of other additional requirements, drop me a note.
