…

This function is rarely useful, but when some event callback runs for a
very long time without entering the event loop, updating libev's idea of
the current time is a good idea.

See also L<The special problem of time updates> in the C<ev_timer> section.

=item ev_suspend (loop)

=item ev_resume (loop)

…

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations. The
sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null
…

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems. Note that if
you do transactions with the outside world and you can't increase the
parallelism, then this setting will limit your transaction rate (if you
need to poll once per transaction and the I/O collect interval is 0.01,
then you can't do more than 100 transactions per second).

Setting the I<timeout collect interval> can improve the opportunity for
saving power, as the program will "bundle" timer callback invocations that
are "near" in time together, by delaying some, thus reducing the number of
times the process sleeps and wakes up again. Another useful technique to
reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
they fire on, say, one-second boundaries only.
|
|
Example: we only need 0.1s timeout granularity, and we wish not to poll
more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
…
   #include <stddef.h>

   static void
   t1_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t1));
   }

   static void
   t2_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t2));
   }

=head2 WATCHER PRIORITY MODELS

…
     // with the default priority are receiving events.
     ev_idle_start (EV_A_ &idle);
   }

   static void
   idle_cb (EV_P_ ev_idle *w, int revents)
   {
     // actual processing
     read (STDIN_FILENO, ...);

     // have to start the I/O watcher again, as
…
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

If you cannot use non-blocking mode, then force the use of a
known-to-be-good backend (at the time of this writing, this includes only
C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
descriptors for which non-blocking operation makes no sense (such as
files) - libev doesn't guarantee any specific behaviour in that case.

Another thing you have to watch out for is that it is quite easy to
receive "spurious" readiness notifications, that is your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
because there is no data. Not only are some backends known to create a
…
year, it will still time out after (roughly) one hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only I<after> its timeout has
passed (not I<at>, so on systems with very low-resolution clocks this
might introduce a small delay). If multiple timers become ready during the
same loop iteration then the ones with earlier time-out values are invoked
before ones with later time-out values (but this is no longer true when a
callback calls C<ev_loop> recursively).

=head3 Be smart about timeouts

Many real-world problems involve some kind of timeout, usually for error
recovery. A typical example is an HTTP request - if the other side hangs,
…
C<after> argument to C<ev_timer_set>, and only ever use the C<repeat>
member and C<ev_timer_again>.

At start:

   ev_init (timer, callback);
   timer->repeat = 60.;
   ev_timer_again (loop, timer);

Each time there is some activity:

…

To start the timer, simply initialise the watcher and set C<last_activity>
to the current time (meaning we just have some activity :), then call the
callback, which will "do the right thing" and start the timer:

   ev_init (timer, callback);
   last_activity = ev_now (loop);
   callback (loop, timer, EV_TIMEOUT);

And when there is some activity, simply store the current time in
C<last_activity>, no libev calls at all:
…
some child status changes (most typically when a child of yours dies or
exits). It is permissible to install a child watcher I<after> the child
has been forked (which implies it might have already exited), as long
as the event loop isn't entered (or is continued from a watcher), i.e.,
forking and then immediately registering a watcher for the child is fine,
but forking and registering a watcher a few event loop iterations later or
in the next callback invocation is not.

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.

=head3 Process Interaction

…
     // no longer anything immediate to do.
   }

   ev_idle *idle_watcher = malloc (sizeof (ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop!

Prepare and check watchers are usually (but not always) used in pairs:
…
     struct pollfd fds [nfd];
     // actual code will need to loop here and realloc etc.
     adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ()));

     /* the callback is illegal, but won't be called as we stop during check */
     ev_timer_init (&tw, 0, timeout * 1e-3, 0.);
     ev_timer_start (loop, &tw);

     // create one ev_io per pollfd
     for (int i = 0; i < nfd; ++i)
       {
…
event loop blocks next and before C<ev_check> watchers are being called,
and only in the child after the fork. If the good citizen calling
C<ev_default_fork> cheats and calls it in the wrong process, the fork
handlers will be invoked, too, of course.

=head3 The special problem of life after fork - how is it possible?

Most uses of C<fork()> consist of forking, then some simple calls to set
up/change the process environment, followed by a call to C<exec()>. This
sequence should be handled by libev without any problems.

This changes when the application actually wants to do event handling
in the child, or in both parent and child, in effect "continuing" after
the fork.

The default mode of operation (for libev, with application help to detect
forks) is to duplicate all the state in the child, as would be expected
when I<either> the parent I<or> the child process continues.
|
|
When both processes want to continue using libev, then this is usually the
wrong result. In that case, usually one process (typically the parent) is
supposed to continue with all watchers in place as before, while the other
process typically wants to start fresh, i.e. without any active watchers.

The cleanest and most efficient way to achieve that with libev is to
simply create a new event loop, which of course will be "empty", and
use that for new watchers. This has the advantage of not touching more
memory than necessary, and thus avoiding the copy-on-write, and the
disadvantage of having to use multiple event loops (which do not support
signal watchers).

When this is not possible, or you want to use the default loop for
other reasons, then in the process that wants to start "fresh", call
C<ev_default_destroy ()> followed by C<ev_default_loop (...)>. Destroying
the default loop will "orphan" (not stop) all registered watchers, so you
have to be careful not to execute code that modifies those watchers. Note
also that in that case, you have to re-register any signal watchers.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_fork_init (ev_fork *, callback)
…
way (note also that glib is the slowest event library known to man).

There is no supported compilation method available on windows except
embedding it into other applications.

Sensible signal handling is officially unsupported by Microsoft - libev
tries its best, but under most conditions, signals will simply not work.

Not a libev limitation but worth mentioning: windows apparently doesn't
accept large writes: instead of resulting in a partial write, windows will
either accept everything or return C<ENOBUFS> if the buffer is too large,
so make sure you only write small amounts into your sockets (less than a
megabyte seems safe, but this apparently depends on the amount of memory
…
the abysmal performance of winsockets, using a large number of sockets
is not recommended (and not reasonable). If your program needs to use
more than a hundred or so sockets, then likely it needs to use a totally
different implementation for windows, as libev offers the POSIX readiness
notification model, which cannot be implemented efficiently on windows
(due to Microsoft monopoly games).

A typical way to use libev under windows is to embed it (see the embedding
section for details) and use the following F<evwrap.h> header file instead
of F<ev.h>:

…

Early versions of winsocket's select only supported waiting for a maximum
of C<64> handles (probably owing to the fact that all windows kernels
can only wait for C<64> things at the same time internally; Microsoft
recommends spawning a chain of threads, each waiting for 63 handles plus
the previous thread in the chain. Sounds great!).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere, for example, perl and many
other interpreters do their own select emulation on windows).
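
If you compile the select backend as part of your own project, the define
has to precede the first winsock header; a sketch (C<2048> being the
arbitrary example value from above):

```c
/* must come before the first inclusion of the winsock headers */
#define FD_SETSIZE 2048
#include <winsock2.h>
#include "ev.h"
```
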

Another limit is the number of file descriptors in the Microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64>
fetish or something like this inside Microsoft). You can increase this
by calling C<_setmaxstdio>, which can raise the limit to C<2048>
(another arbitrary limit), but is broken in many versions of the Microsoft
runtime libraries. This might get you to about C<512> or C<2048> sockets
(depending on windows version and/or the phase of the moon). To get more,
you need to wrap all I/O functions and provide your own fd management, but
the cost of calling select (O(n²)) will likely make this unworkable.

=back

=head2 PORTABILITY REQUIREMENTS
