…
details of the event, and then hand it over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific aio and C<epoll>
interfaces, the BSD-specific C<kqueue> and the Solaris-specific event port
mechanisms for file descriptor events (C<ev_io>), the Linux C<inotify>
interface (for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)),
relative timers (C<ev_timer>), absolute timers with customised rescheduling
(C<ev_periodic>), synchronous signals (C<ev_signal>), process status
change events (C<ev_child>), and event watchers dealing with the event
loop mechanism itself (C<ev_idle>, C<ev_embed>, C<ev_prepare> and
…
When libev detects a usage error such as a negative timer interval, then
it will print a diagnostic message and abort (via the C<assert> mechanism,
so C<NDEBUG> will disable this checking): these are programming errors in
the libev caller and need to be fixed there.

Via the C<EV_FREQUENT> macro you can compile in and/or enable extensive
consistency checking code inside libev that can be used to check for
internal inconsistencies, usually caused by application bugs.

Libev also has a few internal error-checking C<assert>ions. These do not
trigger under normal circumstances, as they indicate either a bug in libev
or worse.


=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the
…

You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.

Example: The following is the C<realloc> function that libev itself uses,
which should work with C<realloc> and C<free> functions of all kinds and
is probably a good basis for your own implementation.

   static void *
   ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
   {
     if (size)
       return realloc (ptr, size);

     free (ptr);
     return 0;
   }

Example: Replace the libev allocator with one that waits a bit and then
retries.

   static void *
   persistent_realloc (void *ptr, size_t size)
   {
     if (!size)
       {
         free (ptr);
         return 0;
       }

     for (;;)
       {
         void *newptr = realloc (ptr, size);

         if (newptr)
…
make libev check for a fork in each iteration by enabling this flag.

This works by calling C<getpid ()> on every iteration of the loop,
and thus this might slow down your event loop if you do a lot of loop
iterations and little real work, but is usually not noticeable (on my
GNU/Linux system for example, C<getpid> is actually a simple 5-insn
sequence without a system call and thus I<very> fast, but my GNU/Linux
system also has C<pthread_atfork> which is even faster). (Update: glibc
versions 2.25 and later apparently removed the C<getpid> optimisation
again.)

The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking, although you still
have to ignore C<SIGPIPE>) when you use this flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

=item C<EVFLAG_NOINOTIFY>
…
This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).

For few fds, this backend is a little slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds) where total_fds is the total number of fds (or the highest
…
All this means that, in practice, C<EVBACKEND_SELECT> can be as fast or
faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.

While nominally embeddable in other event loops, this feature is broken in
a lot of kernel revisions, but probably(!) works in current versions.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_LINUXAIO> (value 64, Linux)

Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
io_submit(2) >>) event interface available in post-4.18 kernels (but libev
only tries to use it in 4.19+).

This is another Linux train wreck of an event interface.

If this backend works for you (as of this writing, it was very
experimental), it is the best event interface available on Linux and might
well be worth enabling - if it isn't available in your kernel this will
be detected and this backend will be skipped.

This backend can batch oneshot requests and supports a user-space ring
buffer to receive events. It also doesn't suffer from most of the design
problems of epoll (such as not being able to remove event sources from
the epoll set), and generally sounds too good to be true. Because, this
being the Linux kernel, of course it suffers from a whole new set of
limitations, forcing you to fall back to epoll, inheriting all its design
issues.

For one, it is not easily embeddable (but probably could be done using
an event fd at some extra overhead). It also is subject to a system-wide
limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
requests are left, this backend will be skipped during initialisation, and
will switch to epoll when the loop is active.

Most problematic in practice, however, is that not all file descriptors
work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
files, F</dev/null> and many others are supported, but ttys do not work
properly (a known bug that the kernel developers don't care about, see
L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
(yet?) a generic event polling interface.

Overall, it seems the Linux developers just don't want Linux to have a
generic event handling mechanism other than C<select> or C<poll>.

To work around all these problems, the current version of libev uses its
epoll backend as a fallback for file descriptor types that do not work. Or
falls back completely to epoll if the kernel acts up.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time this backend was
implemented, it was broken on all BSDs except NetBSD (usually it doesn't
work reliably with anything but sockets and pipes, except on Darwin,
where of course it's completely useless). Unlike epoll, however, whose
brokenness is by design, these kqueue bugs can be (and mostly have been)
fixed without API changes to existing programs. For this reason it's not
being "auto-detected" on all platforms unless you explicitly specify it
in the flags (i.e. using C<EVBACKEND_KQUEUE>) or libev was compiled on a
known-to-be-good (-enough) system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as it does with C<EVBACKEND_EPOLL>, it still adds up
to two event changes per incident. Support for C<fork ()> is very bad (you
might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
…
Example: Use whatever libev has to offer, but make sure that kqueue is
used if available.

   struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);

Example: Similarly, on Linux, you might want to take advantage of the
Linux aio backend if possible, but fall back to something else if that
isn't available.

   struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);

=item ev_loop_destroy (loop)

Destroys an event loop object (frees all memory and kernel state
etc.). None of the active event watchers will be stopped in the normal
sense, so e.g. C<ev_is_active> might still return true. It is your
…
If you need dynamically allocated loops it is better to use C<ev_loop_new>
and C<ev_loop_destroy>.

=item ev_loop_fork (loop)

This function sets a flag that causes subsequent C<ev_run> iterations
to reinitialise the kernel state for backends that have one. Despite
the name, you can call it anytime you are allowed to start or stop
watchers (except inside an C<ev_prepare> callback), but it makes most
sense after forking, in the child process. You I<must> call it (or use
C<EVFLAG_FORKCHECK>) in the child before resuming or calling C<ev_run>.

In addition, if you want to reuse a loop (via this function or
C<EVFLAG_FORKCHECK>), you I<also> have to ignore C<SIGPIPE>.

Again, you I<have> to call it on I<any> loop that you want to re-use after
a fork, I<even if you do not plan to use the loop in the parent>. This is
because some kernel interfaces *cough* I<kqueue> *cough* do funny things
during fork.
…

But really, best use non-blocking mode.

=head3 The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing
a file descriptor (either due to calling C<close> explicitly or any other
means, such as C<dup2>). The reason is that you register interest in some
file descriptor, but when it goes away, the operating system will silently
drop this interest. If another file descriptor with the same number then
is registered with libev, there is no efficient way to see that this is,
in fact, a different file descriptor.

To avoid having to explicitly tell libev about such cases, libev follows
the following policy: Each time C<ev_io_set> is being called, libev
will assume that this is potentially a new file descriptor, otherwise
it is assumed that the file descriptor stays the same. That means that
…
when you rarely read from a file instead of from a socket, and want to
reuse the same code path.

=head3 The special problem of fork

Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()>
at all or exhibit useless behaviour. Libev fully supports fork, but needs
to be told about it in the child if you want to continue to use it in the
child.

To support fork in your child processes, you have to call C<ev_loop_fork
()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

…

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the
timeout on the current time, use something like the following to adjust
for it:

   ev_timer_set (&timer, after + (ev_time () - ev_now ()), 0.);

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>, although that will push the event time of all outstanding events
further into the future.

=head3 The special problem of unsynchronised clocks

Modern systems have a variety of clocks - libev itself uses the normal
"wall clock" clock and, if available, the monotonic clock (to avoid time
…

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

=item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)

Configure the timer to trigger after C<after> seconds (fractional and
negative values are supported). If C<repeat> is C<0.>, then it will
automatically be stopped once the timeout is reached. If it is positive,
then the timer will automatically be configured to trigger again C<repeat>
seconds later, again, and again, until stopped manually.

The timer itself will do a best-effort at avoiding drift, that is, if
you configure a timer to trigger every 10 seconds, then it will normally
trigger at exactly 10 second intervals. If, however, your program cannot
keep up with the timer (because it takes longer than those 10 seconds to
…
Periodic watchers are also timers of a kind, but they are very versatile
(and unfortunately a bit complex).

Unlike C<ev_timer>, periodic watchers are not based on real time (or
relative time, the physical time that passes) but on wall clock time
(absolute time, the thing you can read on your calendar or clock). The
difference is that wall clock time can run faster or slower than real
time, and time jumps are not uncommon (e.g. when you adjust your
wrist-watch).

You can tell a periodic watcher to trigger after some specific point
…
C<ev_timer>, which would still trigger roughly 10 seconds after starting
it, as it uses a relative timeout).

C<ev_periodic> watchers can also be used to implement vastly more complex
timers, such as triggering an event on each "midnight, local time", or
other complicated rules. This cannot easily be done with C<ev_timer>
watchers, as those cannot react to time jumps.

As with timers, the callback is guaranteed to be invoked only when the
point in time where it is supposed to trigger has passed. If multiple
timers become ready during the same loop iteration then the ones with
earlier time-out values are invoked before ones with later time-out values
…

NOTE: I<< This callback must always return a time that is higher than or
equal to the passed C<now> value >>.

This can be used to create very complex timers, such as a timer that
triggers on "next midnight, local time". To do this, you would calculate
the next midnight after C<now> and return the timestamp value for
this. Here is a (completely untested, no error checking) example of how to
do this:

   #include <time.h>

   static ev_tstamp
   my_rescheduler (ev_periodic *w, ev_tstamp now)
   {
     time_t tnow = (time_t)now;
     struct tm tm;
     localtime_r (&tnow, &tm);

     tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day
     ++tm.tm_mday;                           // midnight next day

     return mktime (&tm);
   }

Note: this code might run into trouble on days that have more than two
midnights (beginning and end).

=back

=item ev_periodic_again (loop, ev_periodic *)

…

   ev_periodic hourly_tick;
   ev_periodic_init (&hourly_tick, clock_cb,
                     fmod (ev_now (loop), 3600.), 3600., 0);
   ev_periodic_start (loop, &hourly_tick);


=head2 C<ev_signal> - signal me when a signal gets signalled!

Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
…

Prepare and check watchers are often (but not always) used in pairs:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

You I<must not> call C<ev_run> (or similar functions that enter the
current event loop) or C<ev_loop_fork> from either C<ev_prepare> or
C<ev_check> watchers. Other loops than the current one are fine,
however. The rationale behind this is that you do not need to check
for recursion in those watchers, i.e. the sequence will always be
C<ev_prepare>, blocking, C<ev_check>, so if you have one watcher of each
kind they will always be called in pairs bracketing the blocking call.

Their main purpose is to integrate other event mechanisms into libev and
their use is somewhat advanced. They could be used, for example, to track
variable changes, implement your own watchers, integrate net-snmp or a
coroutine library and lots more. They are also occasionally useful if
…
used).

   struct ev_loop *loop_hi = ev_default_init (0);
   struct ev_loop *loop_lo = 0;
   ev_embed embed;

   // see if there is a chance of getting one that works
   // (remember that a flags value of 0 means autodetection)
   loop_lo = ev_embeddable_backends () & ev_recommended_backends ()
             ? ev_loop_new (ev_embeddable_backends () & ev_recommended_backends ())
             : 0;
…
C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).

   struct ev_loop *loop = ev_default_init (0);
   struct ev_loop *loop_socket = 0;
   ev_embed embed;

   if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
       {
         ev_embed_init (&embed, 0, loop_socket);
         ev_embed_start (loop, &embed);
…
and calls it in the wrong process, the fork handlers will be invoked, too,
of course.

=head3 The special problem of life after fork - how is it possible?

Most uses of C<fork ()> consist of forking, then some simple calls to set
up/change the process environment, followed by a call to C<exec ()>. This
sequence should be handled by libev without any problems.

This changes when the application actually wants to do event handling
in the child, or both parent and child, in effect "continuing" after the
…

There are some other functions of possible interest. Described. Here. Now.

=over 4

=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)

This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
watchers. This is useful if you want to wait for a single event on an fd
or timeout without having to allocate/configure/start/stop/free one or
…
To embed libev, see L</EMBEDDING>, but in short, it's easiest to create two
files, F<my_ev.h> and F<my_ev.c> that include the respective libev files:

   // my_ev.h
   #define EV_CB_DECLARE(type) struct my_coro *cb;
   #define EV_CB_INVOKE(watcher) switch_to ((watcher)->cb)
   #include "../libev/ev.h"

   // my_ev.c
   #define EV_H "my_ev.h"
   #include "../libev/ev.c"
…
The normal C API should work fine when used from C++: both ev.h and the
libev sources can be compiled as C++. Therefore, code that uses the C API
will work fine.

Proper exception specifications might have to be added to callbacks passed
to libev: exceptions may be thrown only from watcher callbacks, all other
callbacks (allocator, syserr, loop acquire/release and periodic reschedule
callbacks) must not throw exceptions, and might need a C<noexcept>
specification. If you have code that needs to be compiled as both C and
C++ you can use the C<EV_NOEXCEPT> macro for this:

   static void
   fatal_error (const char *msg) EV_NOEXCEPT
   {
     perror (msg);
     abort ();
   }

…
     void operator() (ev::io &w, int revents)
     {
       ...
     }
   };

   myfunctor f;

   ev::io w;
   w.set (&f);

…
   ev_vars.h
   ev_wrap.h

   ev_win32.c      required on win32 platforms only

   ev_select.c     only when select backend is enabled
   ev_poll.c       only when poll backend is enabled
   ev_epoll.c      only when the epoll backend is enabled
   ev_linuxaio.c   only when the linux aio backend is enabled
   ev_iouring.c    only when the linux io_uring backend is enabled
   ev_kqueue.c     only when the kqueue backend is enabled
   ev_port.c       only when the solaris port backend is enabled

F<ev.c> includes the backend files directly when enabled, so you only need
to compile this single file.

=head3 LIBEVENT COMPATIBILITY API

…
If defined to be C<1>, libev will compile in support for the Linux
C<epoll>(7) backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the preferred
backend for GNU/Linux systems. If undefined, it will be enabled if the
headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.

=item EV_USE_LINUXAIO

If defined to be C<1>, libev will compile in support for the Linux aio
backend (C<EV_USE_EPOLL> must also be enabled). If undefined, it will be
enabled on linux, otherwise disabled.

=item EV_USE_IOURING

If defined to be C<1>, libev will compile in support for the Linux
io_uring backend (C<EV_USE_EPOLL> must also be enabled). Due to its
current limitations it has to be requested explicitly. If undefined, it
will be enabled on linux, otherwise disabled.
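Taken together, an embedder who wants all three Linux backends compiled in
might configure them like this (a sketch; the file name is hypothetical,
only the C<EV_USE_*> macro names come from this section):

```c
/* my_ev_config.h - hypothetical compile-time backend selection */
#define EV_USE_EPOLL    1  /* prerequisite for the two backends below */
#define EV_USE_LINUXAIO 1  /* Linux aio, requires EV_USE_EPOLL */
#define EV_USE_IOURING  1  /* io_uring; note it still has to be
                              requested explicitly at runtime */
```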

=item EV_USE_KQUEUE

If defined to be C<1>, libev will compile in support for the BSD style
C<kqueue>(2) backend. Its actual availability will be detected at runtime,
…
called. If set to C<2>, then the internal verification code will be
called once per loop, which can slow down libev. If set to C<3>, then the
verification code will be called very frequently, which will slow down
libev considerably.

Verification errors are reported via C's C<assert> mechanism, so if you
disable that (e.g. by defining C<NDEBUG>) then no errors will be reported.

The default is C<1>, unless C<EV_FEATURES> overrides it, in which case it
will be C<0>.

=item EV_COMMON

…
structure (guaranteed by POSIX but not by ISO C for example), but it also
assumes that the same (machine) code can be used to call any watcher
callback: The watcher callbacks have different type signatures, but libev
calls them using an C<ev_watcher *> internally.

=item null pointers and integer zero are represented by 0 bytes

Libev uses C<memset> to initialise structs and arrays to C<0> bytes, and
relies on this setting pointers and integers to null.
=item pointer accesses must be thread-atomic

Accessing a pointer value must be atomic, it must both be readable and
writable in one piece - this is the case on all current architectures.
