…

Returns the current time as libev would use it. Please note that the
C<ev_now> function is usually faster and also often returns the timestamp
you actually want to know.

=item ev_sleep (ev_tstamp interval)

Sleep for the given interval: The current thread will be blocked until
either it is interrupted or the given time interval has passed. Basically
this is a subsecond-resolution C<sleep ()>.
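As a sketch of how C<ev_time ()> and C<ev_sleep ()> fit together (this
assumes only that libev is linked in; the 250ms interval is an arbitrary
example value, not anything libev prescribes):

```c
#include <ev.h>

/* A minimal sketch: measure a subsecond sleep with libev's own clock.
   The 0.25s value is an arbitrary example. */
int
main (void)
{
  ev_tstamp start = ev_time ();  /* wall-clock time, as libev sees it */

  ev_sleep (0.25);               /* block this thread for about 250ms */

  ev_tstamp elapsed = ev_time () - start; /* >= 0.25, plus some overhead */
  return elapsed >= 0.25 ? 0 : 1;
}
```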
|
|

=item int ev_version_major ()

=item int ev_version_minor ()

You can find out the major and minor ABI version numbers of the library
…

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.
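The accept-in-a-loop advice can be sketched as an C<ev_io> callback like
the following (a hedged sketch: the listening fd is assumed to have been
made non-blocking elsewhere, and per-connection setup is left out):

```c
#include <ev.h>
#include <sys/socket.h>

/* Sketch only: drain the whole listen queue in one callback invocation,
   so a single loop iteration accepts as many connections as possible. */
static void
listen_cb (struct ev_loop *loop, ev_io *w, int revents)
{
  for (;;)
    {
      int fd = accept (w->fd, 0, 0);

      if (fd < 0)
        break; /* EAGAIN et al.: queue drained (listen fd is non-blocking) */

      /* ... set fd non-blocking and start an ev_io watcher on it ... */
    }
}
```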

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a little slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds). The epoll design has a
number of shortcomings, such as silently dropping events in some
hard-to-detect cases and requiring a syscall per fd change, no fork
support and bad support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, C<dup ()>'ed file descriptors might not work
…

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.
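Making an fd non-blocking is a small amount of portable code. This hedged
sketch (the helper names are ours, not libev's) shows the usual C<fcntl>
idiom, and why a spurious wakeup then costs only a failed syscall instead
of a stuck event loop:

```c
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Put an fd into non-blocking mode (helper name is ours, not libev's). */
static int
set_nonblocking (int fd)
{
  int flags = fcntl (fd, F_GETFL, 0);

  return flags < 0 ? -1 : fcntl (fd, F_SETFL, flags | O_NONBLOCK);
}

/* With O_NONBLOCK set, reading a not-ready fd fails with EAGAIN instead
   of blocking - exactly what a watcher callback wants on a spurious
   notification. */
static int
read_would_block (int fd)
{
  char buf [64];
  ssize_t n = read (fd, buf, sizeof buf);

  return n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK);
}
```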

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible, i.e.
keep at least one watcher active per fd at all times.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
with anything but sockets and pipes, except on Darwin, where of course
it's completely useless). For this reason it's not being "autodetected"
unless you explicitly specify it in the flags (i.e. using
C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)
system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra syscall as it does with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident, support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.
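The embed-for-sockets-only strategy can be sketched like this (a hedged
sketch patterned after the C<ev_embed> documentation; error handling is
omitted and the fallback simply reuses the main loop):

```c
#include <ev.h>

/* Sketch: run kqueue in an embedded loop for sockets only, while the
   main loop uses poll/select for everything else. */
static struct ev_loop *loop;      /* main loop */
static struct ev_loop *sock_loop; /* register socket watchers here */
static ev_embed embed;

static void
setup_loops (void)
{
  loop = ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT);

  if (ev_supported_backends () & ev_embeddable_backends () & EVBACKEND_KQUEUE)
    sock_loop = ev_loop_new (EVBACKEND_KQUEUE);

  if (sock_loop)
    ev_embed_start (loop, (ev_embed_init (&embed, 0, sock_loop), &embed));
  else
    sock_loop = loop; /* fall back: sockets go into the main loop */
}
```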

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.
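For illustration, masking out a single backend looks like this (a sketch
only; as said above, relying on the default of C<0>, i.e. autodetection,
is usually the better choice):

```c
#include <ev.h>

/* Sketch: try every backend except kqueue, e.g. on a platform where
   kqueue is known to misbehave with the fd types you use. */
static struct ev_loop *
make_loop_without_kqueue (void)
{
  return ev_default_loop (EVBACKEND_ALL & ~EVBACKEND_KQUEUE);
}
```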

=back

If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are

…
Example: For some weird reason, unregister the above signal handler again.

   ev_ref (loop);
   ev_signal_stop (loop, &exitsig);

|
|
=item ev_set_io_collect_interval (loop, ev_tstamp interval)

=item ev_set_timeout_collect_interval (loop, ev_tstamp interval)

These advanced functions influence the time that libev will spend waiting
for events. Both are by default C<0>, meaning that libev will try to
invoke timer/periodic callbacks and I/O callbacks with minimum latency.

Setting these to a higher value (the C<interval> I<must> be >= C<0>)
allows libev to delay invocation of I/O and timer/periodic callbacks to
increase efficiency of loop iterations.

The background is that sometimes your program runs just fast enough to
handle one (or very few) event(s) per loop iteration. While this makes
the program responsive, it also wastes a lot of CPU time to poll for new
events, especially with backends like C<select ()> which have a high
overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency (the watcher callback will be called later). C<ev_io> watchers
will not be affected. Setting this to a non-null value will not introduce
any overhead in libev.

Many (busy) programs can usually benefit by setting the io collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems.

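Put together, tuning both intervals is a one-liner each (a sketch with
assumed values; C<0.05>/C<0.01> are examples to tune against your own
latency budget, not recommendations from libev):

```c
#include <ev.h>

/* Sketch: trade a little latency for fewer loop iterations in a busy
   server. Both interval values are assumptions; tune for your workload. */
static void
setup_collect_intervals (struct ev_loop *loop)
{
  ev_set_io_collect_interval (loop, 0.05);      /* batch I/O for up to 50ms */
  ev_set_timeout_collect_interval (loop, 0.01); /* timers may slip up to 10ms */
}
```
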
=back


=head1 ANATOMY OF A WATCHER

…
optimisations to libev.

=head3 The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors,
but only events for the underlying file descriptions. That means when you
have C<dup ()>'ed file descriptors and register events for them, only one
file descriptor might actually receive events.

There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or to resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
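Resorting to those backends amounts to restricting the flags at loop
creation (a sketch; whether this is acceptable depends on how many fds
your program handles, as discussed for these backends above):

```c
#include <ev.h>

/* Sketch of the workaround: if dup ()'ed fds must be watched, restrict
   libev to backends that register events per file descriptor. */
static struct ev_loop *
make_dup_safe_loop (void)
{
  return ev_default_loop (EVBACKEND_SELECT | EVBACKEND_POLL);
}
```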

=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
…

It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
priority, to ensure that they are being run before any other watchers
after the poll. Also, C<ev_check> watchers (and C<ev_prepare> watchers,
too) should not activate ("feed") events into libev. While libev fully
supports this, they will be called before other C<ev_check> watchers
did their job. As C<ev_check> watchers are often used to embed other
(non-libev) event loops, those other event loops might be in an unusable
state until their C<ev_check> watcher ran (always remind yourself to
coexist peacefully with others).
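The priority recommendation above is a single extra call at start time (a
sketch; C<my_check_cb> is a placeholder name for your own callback):

```c
#include <ev.h>

/* Placeholder callback: run embedded non-libev event loops, pick up
   pending work, etc. */
static void
my_check_cb (struct ev_loop *loop, ev_check *w, int revents)
{
}

static ev_check check_watcher;

/* Sketch: start the check watcher at the highest priority so it runs
   before any other watcher after the poll. */
static void
start_check (struct ev_loop *loop)
{
  ev_check_init (&check_watcher, my_check_cb);
  ev_set_priority (&check_watcher, EV_MAXPRI);
  ev_check_start (loop, &check_watcher);
}
```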

=head3 Watcher-Specific Functions and Data Members

=over 4

…

=head2 C<ev_embed> - when one backend isn't enough...

This is a rather advanced watcher type that lets you embed one event loop
into another (currently only C<ev_io> events are supported in the embedded
loop, other types of watchers might be handled in a delayed or incorrect
fashion and must not be used).

There are primarily two reasons you would want that: work around bugs and
prioritise I/O.

As an example for a bug workaround, the kqueue backend might only support

…
       ev_embed_start (loop_hi, &embed);
     }
   else
     loop_lo = loop_hi;

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)
…

realtime clock option at compiletime (and assume its availability at
runtime if successful). Otherwise no use of the realtime clock option will
be attempted. This effectively replaces C<gettimeofday> by C<clock_get
(CLOCK_REALTIME, ...)> and will not normally affect correctness. See the
note about libraries in the description of C<EV_USE_MONOTONIC>, though.

=item EV_USE_NANOSLEEP

If defined to be C<1>, libev will assume that C<nanosleep ()> is available
and will use it for delays. Otherwise it will use C<select ()>.

=item EV_USE_SELECT

If undefined or defined to be C<1>, libev will compile in support for the
C<select>(2) backend. No attempt at autodetection will be done: if no