…
happily wraps around with enough iterations.

This value can sometimes be useful as a generation counter of sorts (it
"ticks" the number of loop iterations), as it roughly corresponds with
C<ev_prepare> and C<ev_check> calls.

=item unsigned int ev_loop_depth (loop)

Returns the number of times C<ev_loop> was entered minus the number of
times C<ev_loop> was exited, in other words, the recursion depth.

Outside C<ev_loop>, this number is zero. In a callback, this number is
C<1>, unless C<ev_loop> was invoked recursively (or from another thread),
in which case it is higher.

Leaving C<ev_loop> abnormally (setjmp/longjmp, cancelling the thread
etc.) doesn't count as an exit.

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.
…

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations. The
sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null
…

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems. Note that if
you do transactions with the outside world and you can't increase the
parallelism, then this setting will limit your transaction rate (if you
need to poll once per transaction and the I/O collect interval is 0.01,
then you can't do more than 100 transactions per second).

Setting the I<timeout collect interval> can improve the opportunity for
saving power, as the program will "bundle" timer callback invocations that
are "near" in time together, by delaying some, thus reducing the number of
times the process sleeps and wakes up again. Another useful technique to
reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
they fire on, say, one-second boundaries only.

Example: we only need 0.1s timeout granularity, and we wish not to poll
more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback instead. This is useful, for example, when you want to
invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as the new
callback.

=item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_loop> can run for an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the loop via
C<ev_unloop> and C<ev_async_send>, another way is to set these I<release>
and I<acquire> callbacks on the loop.

When set, C<release> will be called just before the thread is suspended
waiting for new events, and C<acquire> is called just afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

Set and retrieve a single C<void *> associated with a loop. When
C<ev_set_userdata> has never been called, then C<ev_userdata> returns
C<0>.

These two functions can be used to associate arbitrary data with a loop,
and are intended solely for the C<invoke_pending_cb>, C<release> and
C<acquire> callbacks described above, but of course can be (ab-)used for
any other purpose as well.

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
…
   #include <stddef.h>

   static void
   t1_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t1));
   }

   static void
   t2_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t2));
   }

=head2 WATCHER PRIORITY MODELS

…
     // with the default priority are receiving events.
     ev_idle_start (EV_A_ &idle);
   }

   static void
   idle_cb (EV_P_ ev_idle *w, int revents)
   {
     // actual processing
     read (STDIN_FILENO, ...);

     // have to start the I/O watcher again, as
…
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

If you cannot use non-blocking mode, then force the use of a
known-to-be-good backend (at the time of this writing, this includes only
C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
descriptors for which non-blocking operation makes no sense (such as
files) - libev doesn't guarantee any specific behaviour in that case.

Another thing you have to watch out for is that it is quite easy to
receive "spurious" readiness notifications, that is your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
because there is no data. Not only are some backends known to create a
…
year, it will still time out after (roughly) one hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only I<after> its timeout has
passed (not I<at>, so on systems with very low-resolution clocks this
might introduce a small delay). If multiple timers become ready during the
same loop iteration then the ones with earlier time-out values are invoked
before ones of the same priority with later time-out values (but this is
no longer true when a callback calls C<ev_loop> recursively).

=head3 Be smart about timeouts

Many real-world problems involve some kind of timeout, usually for error
recovery. A typical example is an HTTP request - if the other side hangs,
…
C<after> argument to C<ev_timer_set>, and only ever use the C<repeat>
member and C<ev_timer_again>.

At start:

   ev_init (timer, callback);
   timer->repeat = 60.;
   ev_timer_again (loop, timer);

Each time there is some activity:

…

To start the timer, simply initialise the watcher and set C<last_activity>
to the current time (meaning we just have some activity :), then call the
callback, which will "do the right thing" and start the timer:

   ev_init (timer, callback);
   last_activity = ev_now (loop);
   callback (loop, timer, EV_TIMEOUT);

And when there is some activity, simply store the current time in
C<last_activity>, no libev calls at all:
…
some child status changes (most typically when a child of yours dies or
exits). It is permissible to install a child watcher I<after> the child
has been forked (which implies it might have already exited), as long
as the event loop isn't entered (or is continued from a watcher), i.e.,
forking and then immediately registering a watcher for the child is fine,
but forking and registering a watcher a few event loop iterations later or
in the next callback invocation is not.

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.

Due to some design glitches inside libev, child watchers will always be
handled at maximum priority (their priority is set to C<EV_MAXPRI> by
libev).

=head3 Process Interaction

Libev grabs C<SIGCHLD> as soon as the default event loop is
initialised. This is necessary to guarantee proper behaviour even if
…
     // no longer anything immediate to do.
   }

   ev_idle *idle_watcher = malloc (sizeof (ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop!

Prepare and check watchers are usually (but not always) used in pairs:
…
   struct pollfd fds [nfd];
   // actual code will need to loop here and realloc etc.
   adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ()));

   /* the callback is illegal, but won't be called as we stop during check */
   ev_timer_init (&tw, 0, timeout * 1e-3, 0.);
   ev_timer_start (loop, &tw);

   // create one ev_io per pollfd
   for (int i = 0; i < nfd; ++i)
     {
…
defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed (but with the full API), define this symbol to C<1>. Currently this
is used to override some inlining decisions and saves roughly 30% code
size on amd64. It also selects a much smaller 2-heap for timer management
over the default 4-heap.

You can save even more by disabling watcher types you do not need
and setting C<EV_MAXPRI> == C<EV_MINPRI>. Also, disabling C<assert>
(C<-DNDEBUG>) will usually reduce code size a lot.

Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.

=item EV_PID_HASHSIZE

C<ev_child> watchers use a small hash table to distribute workload by
pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more
…
default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal.

=back

=head4 THREAD LOCKING EXAMPLE

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
…
way (note also that glib is the slowest event library known to man).

There is no supported compilation method available on windows except
embedding it into other applications.

Sensible signal handling is officially unsupported by Microsoft - libev
tries its best, but under most conditions, signals will simply not work.

Not a libev limitation but worth mentioning: windows apparently doesn't
accept large writes: instead of resulting in a partial write, windows will
either accept everything or return C<ENOBUFS> if the buffer is too large,
so make sure you only write small amounts into your sockets (less than a
megabyte seems safe, but this apparently depends on the amount of memory
…
the abysmal performance of winsockets, using a large number of sockets
is not recommended (and not reasonable). If your program needs to use
more than a hundred or so sockets, then likely it needs to use a totally
different implementation for windows, as libev offers the POSIX readiness
notification model, which cannot be implemented efficiently on windows
(due to Microsoft monopoly games).

A typical way to use libev under windows is to embed it (see the embedding
section for details) and use the following F<evwrap.h> header file instead
of F<ev.h>:

…

Early versions of winsocket's select only supported waiting for a maximum
of C<64> handles (probably owing to the fact that all windows kernels
can only wait for C<64> things at the same time internally; Microsoft
recommends spawning a chain of threads and waiting for 63 handles and the
previous thread in each. Sounds great!).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere, for example, perl and many
other interpreters do their own select emulation on windows).

Another limit is the number of file descriptors in the Microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64>
fetish or something like this inside Microsoft). You can increase this
by calling C<_setmaxstdio>, which can increase this limit to C<2048>
(another arbitrary limit), but is broken in many versions of the Microsoft
runtime libraries. This might get you to about C<512> or C<2048> sockets
(depending on windows version and/or the phase of the moon). To get more,
you need to wrap all I/O functions and provide your own fd management, but
the cost of calling select (O(n²)) will likely make this unworkable.

=back

=head2 PORTABILITY REQUIREMENTS

…
=item C<double> must hold a time value in seconds with enough accuracy

The type C<double> is used to represent timestamps. It is required to
have at least 51 bits of mantissa (and 9 bits of exponent), which is good
enough until at least the year 4000. This requirement is fulfilled by
implementations implementing IEEE 754, which is basically all existing
ones. With IEEE 754 doubles, you get microsecond accuracy until at least
2200.

=back

If you know of other additional requirements, drop me a note.
