…

=head1 SYNOPSIS

  #include <ev.h>

=head2 EXAMPLE PROGRAM

  #include <ev.h>

  ev_io stdin_watcher;
  ev_timer timeout_watcher;
…

You register interest in certain events by registering so-called I<event
watchers>, which are relatively small C structures you initialise with the
details of the event, and then hand over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
for file descriptor events (C<ev_io>), the Linux C<inotify> interface
(for C<ev_stat>), relative timers (C<ev_timer>), absolute timers
…

It also is quite fast (see this
L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent
for example).

=head2 CONVENTIONS

Libev is very configurable. In this manual the default configuration will
be described, which supports multiple event loops. For more info about
various configuration options please have a look at the B<EMBED> section
in this manual. If libev was configured without support for multiple event
loops, then all functions taking an initial argument of name C<loop>
(which is always of type C<struct ev_loop *>) will not have this argument.

=head2 TIME REPRESENTATION

Libev represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970, details are complicated, don't ask). This type is
called C<ev_tstamp>, which is what you should use too. It usually aliases
…

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).
See the entry for C<EVBACKEND_SELECT>, above, for performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale
like O(total_fds) (where total_fds is the total number of fds or the
highest fd), epoll scales either O(1) or O(active_fds). The epoll design
has a number of shortcomings, such as silently dropping events in some
hard-to-detect cases, requiring a syscall per fd change, no fork support
and bad support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, C<dup ()>'ed file descriptors might not work
…

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible, i.e.
keep at least one watcher active per fd at all times.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
with anything but sockets and pipes, except on Darwin, where of course
it's completely useless). For this reason it's not being "autodetected"
unless you explicitly specify it in the flags (i.e. using
C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)
system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra syscall as with C<EVBACKEND_EPOLL>, it still adds up to two
event changes per incident, support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back

If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
…

overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency (the watcher callback will be called later). C<ev_io> watchers
will not be affected. Setting this to a non-null value will not introduce
any overhead in libev.

Many (busy) programs can usually benefit by setting the io collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems.

=back


=head1 ANATOMY OF A WATCHER
…

In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

You have to be careful with dup'ed file descriptors, though. Some backends
(the Linux epoll backend is a notable example) cannot handle dup'ed file
descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket/etc. description (that is, they share
the same underlying "file open").

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).

Another thing you have to watch out for is that it is quite easy to

…
optimisations to libev.

=head3 The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors,
but only events for the underlying file descriptions. That means when you
have C<dup ()>'ed file descriptors or weirder constellations, and register
events for them, only one file descriptor might actually receive events.

There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or to resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
…

=item int events [read-only]

The events being watched.

=back

=head3 Examples

Example: Call C<stdin_readable_cb> when STDIN_FILENO has become, well,
readable, but only once. Since it is likely line-buffered, you could
attempt to read a whole line in the callback.

…

or C<ev_timer_again> is called and determines the next timeout (if any),
which is also when any modifications are taken into account.

=back

=head3 Examples

Example: Create a timer that fires after 60 seconds.

  static void
  one_minute_cb (struct ev_loop *loop, struct ev_timer *w, int revents)
  {
…

When active, contains the absolute time that the watcher is supposed to
trigger next.

=back

=head3 Examples

Example: Call a callback every hour, or, more precisely, whenever the
system clock is divisible by 3600. The callback invocation times have
potentially a lot of jittering, but good long-term stability.

  static void
…

The process exit/trace status caused by C<rpid> (see your system's
C<waitpid> and C<sys/wait.h> documentation for details).

=back

=head3 Examples

Example: Try to exit cleanly on SIGINT and SIGTERM.

  static void
  sigint_cb (struct ev_loop *loop, struct ev_signal *w, int revents)
…

semantics of C<ev_stat> watchers, which means that libev sometimes needs
to fall back to regular polling again even with inotify, but changes are
usually detected immediately, and if the file exists there will be no
polling.

=head3 Inotify

When C<inotify (7)> support has been compiled into libev (generally only
available on Linux) and present at runtime, it will be used to speed up
change detection where possible. The inotify descriptor will be created
lazily when the first C<ev_stat> watcher is being started.

Inotify presence does not change the semantics of C<ev_stat> watchers
except that changes might be detected earlier, and in some cases, to avoid
making regular C<stat> calls. Even in the presence of inotify support
there are many cases where libev has to resort to regular C<stat> polling.

(There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times).

=head3 The special problem of stat time resolution

The C<stat ()> syscall only supports full-second resolution portably, and
even on systems where the resolution is higher, many filesystems still
only support whole seconds.

That means that, if the time is the only thing that changes, you might
miss updates: on the first update, C<ev_stat> detects a change and calls
your callback, which does something. When there is another update within
the same second, C<ev_stat> will be unable to detect it.

The solution to this is to delay acting on a change for a second (or till
the next second boundary), using a roughly one-second delay C<ev_timer>
(C<ev_timer_set (w, 0., 1.01); ev_timer_again (loop, w)>). The C<.01>
is added to work around small timing inconsistencies of some operating
systems.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)
…

=item const char *path [read-only]

The filesystem path that is being watched.

=back

=head3 Examples

Example: Watch C</etc/passwd> for attribute changes.

  static void
  passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)

…

  }

  ...
  ev_stat passwd;

  ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
  ev_stat_start (loop, &passwd);

Example: Like above, but additionally use a one-second delay so we do not
miss updates (however, frequent updates will delay processing, too, so
one might do the work both on C<ev_stat> callback invocation I<and> on
C<ev_timer> callback invocation).

  static ev_stat passwd;
  static ev_timer timer;

  static void
  timer_cb (EV_P_ ev_timer *w, int revents)
  {
    ev_timer_stop (EV_A_ w);

    /* now it's one second after the most recent passwd change */
  }

  static void
  stat_cb (EV_P_ ev_stat *w, int revents)
  {
    /* reset the one-second timer */
    ev_timer_again (EV_A_ &timer);
  }

  ...
  ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
  ev_stat_start (loop, &passwd);
  ev_timer_init (&timer, timer_cb, 0., 1.01);


=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
…

Initialises and configures the idle watcher - it has no parameters of any
kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
believe me.

=back

=head3 Examples

Example: Dynamically allocate an C<ev_idle> watcher, start it, and in the
callback, free it. Also, use no error checking, as usual.

  static void
…

It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
priority, to ensure that they are being run before any other watchers
after the poll. Also, C<ev_check> watchers (and C<ev_prepare> watchers,
too) should not activate ("feed") events into libev. While libev fully
supports this, they will be called before other C<ev_check> watchers
did their job. As C<ev_check> watchers are often used to embed other
(non-libev) event loops those other event loops might be in an unusable
state until their C<ev_check> watcher ran (always remind yourself to
coexist peacefully with others).

=head3 Watcher-Specific Functions and Data Members

=over 4

…

Initialises and configures the prepare or check watcher - they have no
parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
macros, but using them is utterly, utterly and completely pointless.

=back

=head3 Examples

There are a number of principal ways to embed other event loops or modules
into libev. Here are some ideas on how to include libadns into libev
(there is a Perl module named C<EV::ADNS> that does this, which you could
use for an actually working example. Another Perl module named C<EV::Glib>
… | |
=head2 C<ev_embed> - when one backend isn't enough...

This is a rather advanced watcher type that lets you embed one event loop
into another (currently only C<ev_io> events are supported in the embedded
loop, other types of watchers might be handled in a delayed or incorrect
fashion and must not be used).

There are primarily two reasons you would want that: work around bugs and
prioritise I/O.

As an example for a bug workaround, the kqueue backend might only support
… | |
… | |
portable one.

So when you want to use this feature you will always have to be prepared
that you cannot get an embeddable loop. The recommended way to get around
this is to have a separate variable for your embeddable loop, try to
create it, and if that fails, use the normal loop for everything.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)

=item ev_embed_set (ev_embed *, callback, struct ev_loop *embedded_loop)

Configures the watcher to embed the given loop, which must be
embeddable. If the callback is C<0>, then C<ev_embed_sweep> will be
invoked automatically, otherwise it is the responsibility of the callback
to invoke it (it will continue to be called until the sweep has been done,
if you do not want that, you need to temporarily stop the embed watcher).

=item ev_embed_sweep (loop, ev_embed *)

Make a single, non-blocking sweep over the embedded loop. This works
similarly to C<ev_loop (embedded_loop, EVLOOP_NONBLOCK)>, but in the most
appropriate way for embedded loops.

=item struct ev_loop *other [read-only]

The embedded event loop.

=back

=head3 Examples

Example: Try to get an embeddable event loop and embed it into the default
event loop. If that is not possible, use the default loop. The default
loop is stored in C<loop_hi>, while the embeddable loop is stored in
C<loop_lo> (which is C<loop_hi> in the case no embeddable loop can be
used).

  struct ev_loop *loop_hi = ev_default_init (0);
  struct ev_loop *loop_lo = 0;
  struct ev_embed embed;

… | |
… | |
      ev_embed_start (loop_hi, &embed);
    }
  else
    loop_lo = loop_hi;

Example: Check if kqueue is available but not recommended and create
a kqueue backend for use with sockets (which usually work with any
kqueue implementation). Store the kqueue/socket-only event loop in
C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).

  struct ev_loop *loop = ev_default_init (0);
  struct ev_loop *loop_socket = 0;
  struct ev_embed embed;

  if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
    if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
      {
        ev_embed_init (&embed, 0, loop_socket);
        ev_embed_start (loop, &embed);
      }

  if (!loop_socket)
    loop_socket = loop;

  // now use loop_socket for all sockets, and loop for everything else
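If a callback other than C<0> is given to C<ev_embed_init>, the callback
itself has to do the sweep. A minimal sketch of such a callback (the name
C<embed_cb> is made up for illustration, it is not part of the libev API):

  static void
  embed_cb (struct ev_loop *loop, ev_embed *w, int revents)
  {
    // there might be events pending in the embedded loop;
    // forward them by making one non-blocking sweep over it
    ev_embed_sweep (loop, w);
  }

Passing C<embed_cb> instead of C<0> gives the program a hook that runs
around every sweep, e.g. to collect statistics.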
|
|

=head2 C<ev_fork> - the audacity to resume the event loop after a fork

Fork watchers are called when a C<fork ()> was detected (usually because
… | |
… | |
wants osf handles on win32 (this is the case when the select to
be used is the winsock select). This means that it will call
C<_get_osfhandle> on the fd to convert it to an OS handle. Otherwise,
it is assumed that all these functions actually work on fds, even
on win32. Should not be defined on non-win32 platforms.

=item EV_FD_TO_WIN32_HANDLE

If C<EV_SELECT_IS_WINSOCKET> is enabled, then libev needs a way to map
file descriptors to socket handles. When not defining this symbol (the
default), then libev will call C<_get_osfhandle>, which is usually
correct. In some cases, programs use their own file descriptor management,
in which case they can provide this function to map fds to socket handles.
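As an illustration only (C<my_fd_table> is a hypothetical
application-provided table, not part of libev):

  /* hypothetical: resolve fds through the program's own descriptor table */
  #define EV_FD_TO_WIN32_HANDLE(fd) (my_fd_table [(fd)].handle)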

=item EV_USE_POLL

If defined to be C<1>, libev will compile in support for the C<poll>(2)
backend. Otherwise it will be enabled on non-win32 platforms. It
… | |
… | |
be detected at runtime.

=item EV_H

The name of the F<ev.h> header file used to include it. The default if
undefined is C<"ev.h"> in F<event.h> and F<ev.c>. This can be used to
virtually rename the F<ev.h> header file in case of conflicts.

=item EV_CONFIG_H

If C<EV_STANDALONE> isn't C<1>, this variable can be used to override
F<ev.c>'s idea of where to find the F<config.h> file, similarly to
C<EV_H>, above.

=item EV_EVENT_H

Similarly to C<EV_H>, this macro can be used to override F<event.c>'s idea
of how the F<event.h> header can be found; the default is C<"event.h">.
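For example, if the headers were installed under a project-specific
subdirectory (the C<libev/> path here is purely hypothetical), one could
write:

  /* hypothetical install location - adjust to your project layout */
  #define EV_H "libev/ev.h"
  #define EV_EVENT_H "libev/event.h"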
2432 | |
2562 | |
2433 | =item EV_PROTOTYPES |
2563 | =item EV_PROTOTYPES |
2434 | |
2564 | |
2435 | If defined to be C<0>, then F<ev.h> will not define any function |
2565 | If defined to be C<0>, then F<ev.h> will not define any function |
2436 | prototypes, but still define all the structs and other symbols. This is |
2566 | prototypes, but still define all the structs and other symbols. This is |
… | |
… | |
than enough. If you need to manage thousands of children you might want to
increase this value (I<must> be a power of two).

=item EV_INOTIFY_HASHSIZE

C<ev_stat> watchers use a small hash table to distribute workload by
inotify watch id. The default size is C<16> (or C<1> with C<EV_MINIMAL>),
usually more than enough. If you need to manage thousands of C<ev_stat>
watchers you might want to increase this value (I<must> be a power of
two).

… | |
… | |

=item Starting and stopping timer/periodic watchers: O(log skipped_other_timers)

This means that, when you have a watcher that triggers in one hour and
there are 100 watchers that would trigger before that, then inserting will
have to skip roughly seven (C<ld 100>) of these watchers.

=item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)

That means that changing a timer costs less than removing/adding one,
as only the relative motion in the event queue has to be paid for.

=item Starting io/check/prepare/idle/signal/child watchers: O(1)

These just add the watcher into an array or at the head of a list.

=item Stopping check/prepare/idle watchers: O(1)

=item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))

These watchers are stored in lists, which need to be walked to find the
correct watcher to remove. The lists are usually short (you don't usually
have many watchers waiting for the same fd or signal).

=item Finding the next timer in each loop iteration: O(1)

By virtue of using a binary heap, the next timer is always found at the
beginning of the storage array.

=item Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)

A change means an I/O watcher gets started or stopped, which requires
libev to recalculate its status (and possibly tell the kernel, depending
on backend and whether C<ev_io_set> was used).

=item Activating one watcher (putting it into the pending state): O(1)

=item Priority handling: O(number_of_priorities)

Priorities are implemented by allocating some space for each
priority. When doing priority-based operations, libev usually has to
linearly search all the priorities, but starting/stopping and activating
watchers become O(1) with respect to priority handling.

=back

|
|
=head1 Win32 platform limitations and workarounds

Win32 doesn't support any of the standards (e.g. POSIX) that libev
requires, and its I/O model is fundamentally incompatible with the POSIX
model. Libev still offers limited functionality on this platform in
the form of the C<EVBACKEND_SELECT> backend, and only supports socket
descriptors. This only applies when using Win32 natively, not when using
e.g. cygwin.

There is no supported compilation method available on windows except
embedding it into other applications.

Due to the many, low, and arbitrary limits on the win32 platform and the
abysmal performance of winsockets, using a large number of sockets is not
recommended (and not reasonable). If your program needs to use more than
a hundred or so sockets, then likely it needs to use a totally different
implementation for windows, as libev offers the POSIX model, which cannot
be implemented efficiently on windows (microsoft monopoly games).

=over 4

=item The winsocket select function

The winsocket C<select> function doesn't follow POSIX in that it requires
socket I<handles> and not socket I<file descriptors>. This makes select
very inefficient, and also requires a mapping from file descriptors
to socket handles. See the discussion of the C<EV_SELECT_USE_FD_SET>,
C<EV_SELECT_IS_WINSOCKET> and C<EV_FD_TO_WIN32_HANDLE> preprocessor
symbols for more info.

The configuration for a "naked" win32 using the microsoft runtime
libraries and raw winsocket select is:

  #define EV_USE_SELECT 1
  #define EV_SELECT_IS_WINSOCKET 1 /* forces EV_SELECT_USE_FD_SET, too */

Note that winsocket's handling of fd sets is O(n), so you can easily get a
complexity in the O(n²) range when using win32.

=item Limited number of file descriptors

Windows has numerous arbitrary (and low) limits on things. Early versions
of winsocket's select only supported waiting for a max. of C<64> handles
(probably owing to the fact that all windows kernels can only wait for
C<64> things at the same time internally; microsoft recommends spawning a
chain of threads and wait for 63 handles and the previous thread in each).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere, for example, perl does its own
select emulation on windows).
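Such a configuration could look like this sketch (the value C<2048> is
only the example from above):

  /* FD_SETSIZE must be defined before any winsock header is included */
  #define FD_SETSIZE 2048
  #include <winsock2.h>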
|
|

Another limit is the number of file descriptors in the microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64> fetish
or something like this inside microsoft). You can increase this by calling
C<_setmaxstdio>, which can increase this limit to C<2048> (another
arbitrary limit), but is broken in many versions of the microsoft runtime
libraries.

This might get you to about C<512> or C<2048> sockets (depending on
windows version and/or the phase of the moon). To get more, you need to
wrap all I/O functions and provide your own fd management, but the cost of
calling select (O(n²)) will likely make this unworkable.

=back

=head1 AUTHOR

Marc Lehmann <libev@schmorp.de>.
