…

details of the event, and then hand it over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific aio and C<epoll>
interfaces, the BSD-specific C<kqueue> and the Solaris-specific event port
mechanisms for file descriptor events (C<ev_io>), the Linux C<inotify>
interface (for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)),
relative timers (C<ev_timer>), absolute timers with customised
rescheduling (C<ev_periodic>), synchronous signals (C<ev_signal>), process
status change events (C<ev_child>), and event watchers dealing with the
event loop mechanism itself (C<ev_idle>, C<ev_embed>, C<ev_prepare> and
…

When libev detects a usage error such as a negative timer interval, it
will print a diagnostic message and abort (via the C<assert> mechanism,
so C<NDEBUG> will disable this checking): these are programming errors in
the libev caller and need to be fixed there.

Via the C<EV_FREQUENT> macro you can compile in and/or enable extensive
consistency checking code inside libev that can be used to check for
internal inconsistencies, usually caused by application bugs.

Libev also has a few internal error-checking C<assert>ions. These do not
trigger under normal circumstances, as they indicate either a bug in libev
or worse.

=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the
…

You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.

Example: The following is the C<realloc> function that libev itself uses
which should work with C<realloc> and C<free> functions of all kinds and
is probably a good basis for your own implementation.

   static void *
   ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
   {
     if (size)
       return realloc (ptr, size);

     free (ptr);
     return 0;
   }

Example: Replace the libev allocator with one that waits a bit and then
retries.

   static void *
   persistent_realloc (void *ptr, size_t size)
   {
     if (!size)
       {
         free (ptr);
         return 0;
       }

     for (;;)
       {
         void *newptr = realloc (ptr, size);

         if (newptr)
…

make libev check for a fork in each iteration by enabling this flag.

This works by calling C<getpid ()> on every iteration of the loop,
and thus this might slow down your event loop if you do a lot of loop
iterations and little real work, but is usually not noticeable (on my
GNU/Linux system for example, C<getpid> is actually a simple 5-insn
sequence without a system call and thus I<very> fast, but my GNU/Linux
system also has C<pthread_atfork> which is even faster). (Update: glibc
version 2.25 apparently removed the C<getpid> optimisation again.)

The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking, although you still
have to ignore C<SIGPIPE>) when you use this flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

=item C<EVFLAG_NOINOTIFY>
…

unblocking the signals.

It's also required by POSIX in a threaded program, as libev calls
C<sigprocmask>, whose behaviour is officially unspecified.

=item C<EVFLAG_NOTIMERFD>

When this flag is specified, libev will avoid using a C<timerfd> to
detect time jumps. It will still be able to detect time jumps, but it
takes longer and is less accurate in doing so; in exchange, it saves a
file descriptor per loop.

The current implementation only tries to use a C<timerfd> when the first
C<ev_periodic> watcher is started and falls back on other methods if it
cannot be created, but this behaviour might change in the future.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
…

This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).

For few fds, this backend is a little slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds) where total_fds is the total number of fds (or the highest
…

All this means that, in practice, C<EVBACKEND_SELECT> can be as fast or
faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.

While nominally embeddable in other event loops, this feature is broken in
a lot of kernel revisions, but probably(!) works in current versions.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_LINUXAIO> (value 64, Linux)

Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
io_submit(2) >>) event interface available in post-4.18 kernels (but libev
only tries to use it in 4.19+).

This is another Linux train wreck of an event interface.

If this backend works for you (as of this writing, it was very
experimental), it is the best event interface available on Linux and might
be well worth enabling - if it isn't available in your kernel this will
be detected and this backend will be skipped.

This backend can batch oneshot requests and supports a user-space ring
buffer to receive events. It also doesn't suffer from most of the design
problems of epoll (such as not being able to remove event sources from
the epoll set), and generally sounds too good to be true. Because, this
being the Linux kernel, of course it suffers from a whole new set of
limitations, forcing you to fall back to epoll, inheriting all its design
issues.

For one, it is not easily embeddable (but probably could be done using
an event fd at some extra overhead). It also is subject to a system-wide
limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
requests are left, this backend will be skipped during initialisation, and
will switch to epoll when the loop is active.

Most problematic in practice, however, is that not all file descriptors
work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
files, F</dev/null> and many others are supported, but ttys do not work
properly (a known bug that the kernel developers don't care about, see
L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
(yet?) a generic event polling interface.

Overall, it seems the Linux developers just don't want Linux to have a
generic event handling mechanism other than C<select> or C<poll>.

To work around all these problems, the current version of libev uses its
epoll backend as a fallback for file descriptor types that do not work. Or
falls back completely to epoll if the kernel acts up.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time this backend was
implemented, it was broken on all BSDs except NetBSD (usually it doesn't
work reliably with anything but sockets and pipes, except on Darwin,
where of course it's completely useless). Unlike epoll, however, whose
brokenness is by design, these kqueue bugs can be (and mostly have been)
fixed without API changes to existing programs. For this reason it's not
being "auto-detected" on all platforms unless you explicitly specify it
in the flags (i.e. using C<EVBACKEND_KQUEUE>) or libev was compiled on a
known-to-be-good (-enough) system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to
the kernel is more efficient (which says nothing about its actual speed,
of course). While stopping, setting and starting an I/O watcher never
causes an extra system call as with C<EVBACKEND_EPOLL>, it still adds up
to two event changes per incident. Support for C<fork ()> is very bad (you
might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
…

Example: Use whatever libev has to offer, but make sure that kqueue is
used if available.

   struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);

Example: Similarly, on Linux, you might want to take advantage of the
Linux aio backend if possible, but fall back to something else if that
isn't available.

   struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);

=item ev_loop_destroy (loop)

Destroys an event loop object (frees all memory and kernel state
etc.). None of the active event watchers will be stopped in the normal
sense, so e.g. C<ev_is_active> might still return true. It is your
…

If you need dynamically allocated loops it is better to use C<ev_loop_new>
and C<ev_loop_destroy>.

=item ev_loop_fork (loop)

This function sets a flag that causes subsequent C<ev_run> iterations
to reinitialise the kernel state for backends that have one. Despite
the name, you can call it anytime you are allowed to start or stop
watchers (except inside an C<ev_prepare> callback), but it makes most
sense after forking, in the child process. You I<must> call it (or use
C<EVFLAG_FORKCHECK>) in the child before resuming or calling C<ev_run>.

In addition, if you want to reuse a loop (via this function or
C<EVFLAG_FORKCHECK>), you I<also> have to ignore C<SIGPIPE>.

Again, you I<have> to call it on I<any> loop that you want to re-use after
a fork, I<even if you do not plan to use the loop in the parent>. This is
because some kernel interfaces *cough* I<kqueue> *cough* do funny things
during fork.
…

   - Queue all expired timers.
   - Queue all expired periodics.
   - Queue all idle watchers with priority higher than that of pending events.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals, async and child watchers are implemented as I/O watchers, and
     will be handled here by queueing them when their watcher gets executed.
   - If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT
     were used, or there are no active watchers, goto FINISH, otherwise
     continue with step LOOP.
   FINISH:
   - Reset the ev_break status iff it was EVBREAK_ONE.
…

with a watcher-specific start function (C<< ev_TYPE_start (loop, watcher
*) >>), and you can stop watching for events at any time by calling the
corresponding stop function (C<< ev_TYPE_stop (loop, watcher *) >>).

As long as your watcher is active (has been started but not stopped) you
must not touch the values stored in it except when explicitly documented
otherwise. Most specifically you must never reinitialise it or call its
C<ev_TYPE_set> macro.

Each and every callback receives the event loop pointer as first, the
registered watcher structure as second, and a bitset of received events as
third argument.

…

=item bool ev_is_active (ev_TYPE *watcher)

Returns a true value iff the watcher is active (i.e. it has been started
and not yet been stopped). As long as a watcher is active you must not
modify it unless documented otherwise.

=item bool ev_is_pending (ev_TYPE *watcher)

Returns a true value iff the watcher is pending (i.e. it has outstanding
events but its callback has not yet been invoked). As long as a watcher
…

Many event loops support I<watcher priorities>, which are usually small
integers that influence the ordering of event callback invocation
between watchers in some way, all else being equal.

In libev, watcher priorities can be set using C<ev_set_priority>. See its
description for the more technical details such as the actual priority
range.

There are two common ways these priorities are interpreted by event
loops:
…

This section describes each watcher in detail, but will not repeat
information given in the last section. Any initialisation/set macros,
functions and members specific to the watcher type are explained.

Most members are additionally marked with either I<[read-only]>, meaning
that, while the watcher is active, you can look at the member and expect
some sensible content, but you must not modify it (you can modify it while
the watcher is stopped to your heart's content), or I<[read-write]>, which
means you can expect it to have some sensible content while the watcher is
active, but you can also modify it (within the same thread as the event
loop, i.e. without creating data races). Modifying it may not do something
sensible or take immediate effect (or do anything at all), but libev will
not crash or malfunction in any way.

In any case, the documentation for each member will explain what the
effects are, and if there are any additional access restrictions.

=head2 C<ev_io> - is this file descriptor readable or writable?

I/O watchers check whether a file descriptor is readable or writable
in each iteration of the event loop, or, more precisely, when reading
…

But really, best use non-blocking mode.

=head3 The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing
a file descriptor (either due to calling C<close> explicitly or any other
means, such as C<dup2>). The reason is that you register interest in some
file descriptor, but when it goes away, the operating system will silently
drop this interest. If another file descriptor with the same number then
is registered with libev, there is no efficient way to see that this is,
in fact, a different file descriptor.

To avoid having to explicitly tell libev about such cases, libev follows
the following policy: Each time C<ev_io_set> is being called, libev
will assume that this is potentially a new file descriptor, otherwise
it is assumed that the file descriptor stays the same. That means that
…

when you rarely read from a file instead of from a socket, and want to
reuse the same code path.

=head3 The special problem of fork

Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()>
at all or exhibit useless behaviour. Libev fully supports fork, but needs
to be told about it in the child if you want to continue to use it in the
child.

To support fork in your child processes, you have to call C<ev_loop_fork
()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

…

=item ev_io_init (ev_io *, callback, int fd, int events)

=item ev_io_set (ev_io *, int fd, int events)

Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
receive events for and C<events> is either C<EV_READ>, C<EV_WRITE>, both
C<EV_READ | EV_WRITE> or C<0>, to express the desire to receive the given
events.

Note that setting the C<events> to C<0> and starting the watcher is
supported, but not specially optimised - if your program sometimes happens
to generate this combination this is fine, but if it is easy to avoid
starting an io watcher watching for no events you should do so.

=item ev_io_modify (ev_io *, int events)

Similar to C<ev_io_set>, but only changes the requested events. Using this
might be faster with some backends, as libev can assume that the C<fd>
still refers to the same underlying file description, something it cannot
do when using C<ev_io_set>.

=item int fd [no-modify]

The file descriptor being watched. While it can be read at any time, you
must not modify this member even when the watcher is stopped - always use
C<ev_io_set> for that.

=item int events [no-modify]

The set of events the fd is being watched for, among other flags. Remember
that this is a bit set - to test for C<EV_READ>, use C<< w->events &
EV_READ >>, and similarly for C<EV_WRITE>.

As with C<fd>, you must not modify this member even when the watcher is
stopped, always use C<ev_io_set> or C<ev_io_modify> for that.
1752 | |
1865 | |
1753 | =back |
1866 | =back |
1754 | |
1867 | |
1755 | =head3 Examples |
1868 | =head3 Examples |
1756 | |
1869 | |
…

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the
timeout on the current time, use something like the following to adjust
for it:

    ev_timer_set (&timer, after + (ev_time () - ev_now ()), 0.);

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>, although that will push the event time of all outstanding events
further into the future.

=head3 The special problem of unsynchronised clocks

Modern systems have a variety of clocks - libev itself uses the normal
"wall clock" clock and, if available, the monotonic clock (to avoid time
…

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

=item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)

Configure the timer to trigger after C<after> seconds (fractional and
negative values are supported). If C<repeat> is C<0.>, then it will
automatically be stopped once the timeout is reached. If it is positive,
then the timer will automatically be configured to trigger again C<repeat>
seconds later, again, and again, until stopped manually.

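As an (untested) sketch of these semantics - assuming a system-installed
libev; the callback name and the 5.5/10 second values are made up - a timer
that first triggers after 5.5 seconds and then every 10 seconds until
stopped could be set up like this:

```c
#include <ev.h>

/* invoked after 5.5s, then every 10s, until the watcher is stopped */
static void
timeout_cb (EV_P_ ev_timer *w, int revents)
{
  /* calling ev_timer_stop (EV_A_ w) here would cancel the repeats */
}

int
main (void)
{
  struct ev_loop *loop = EV_DEFAULT;
  ev_timer timeout_watcher;

  ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 10.);
  ev_timer_start (loop, &timeout_watcher);

  ev_run (loop, 0);
  return 0;
}
```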
The timer itself will make a best effort to avoid drift, that is, if
you configure a timer to trigger every 10 seconds, then it will normally
trigger at exactly 10 second intervals. If, however, your program cannot
keep up with the timer (because it takes longer than those 10 seconds to
…
Periodic watchers are also timers of a kind, but they are very versatile
(and unfortunately a bit complex).

Unlike C<ev_timer>, periodic watchers are not based on real time (or
relative time, the physical time that passes) but on wall clock time
(absolute time, the thing you can read on your calendar or clock). The
difference is that wall clock time can run faster or slower than real
time, and time jumps are not uncommon (e.g. when you adjust your
wrist-watch).

You can tell a periodic watcher to trigger after some specific point
…
C<ev_timer>, which would still trigger roughly 10 seconds after starting
it, as it uses a relative timeout).

C<ev_periodic> watchers can also be used to implement vastly more complex
timers, such as triggering an event on each "midnight, local time", or
other complicated rules. This cannot easily be done with C<ev_timer>
watchers, as those cannot react to time jumps.

As with timers, the callback is guaranteed to be invoked only when the
point in time where it is supposed to trigger has passed. If multiple
timers become ready during the same loop iteration then the ones with
earlier time-out values are invoked before ones with later time-out values
…

NOTE: I<< This callback must always return a time that is higher than or
equal to the passed C<now> value >>.

This can be used to create very complex timers, such as a timer that
triggers on "next midnight, local time". To do this, you would calculate
the next midnight after C<now> and return the timestamp value for
this. Here is a (completely untested, no error checking) example of how to
do this:

    #include <time.h>

    static ev_tstamp
    my_rescheduler (ev_periodic *w, ev_tstamp now)
    {
      time_t tnow = (time_t)now;
      struct tm tm;
      localtime_r (&tnow, &tm);

      tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day
      ++tm.tm_mday;                           // midnight next day
      tm.tm_isdst = -1;                       // let mktime deduce DST

      return mktime (&tm);
    }

Note: this code might run into trouble on days that have more than two
midnights (beginning and end).

=back

=item ev_periodic_again (loop, ev_periodic *)

…

    ev_periodic hourly_tick;
    ev_periodic_init (&hourly_tick, clock_cb,
                      fmod (ev_now (loop), 3600.), 3600., 0);
    ev_periodic_start (loop, &hourly_tick);


=head2 C<ev_signal> - signal me when a signal gets signalled!

Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
…

Prepare and check watchers are often (but not always) used in pairs:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

You I<must not> call C<ev_run> (or similar functions that enter the
current event loop) or C<ev_loop_fork> from either C<ev_prepare> or
C<ev_check> watchers. Other loops than the current one are fine,
however. The rationale behind this is that you do not need to check
for recursion in those watchers, i.e. the sequence will always be
C<ev_prepare>, blocking, C<ev_check>, so if you have one watcher of each
kind they will always be called in pairs bracketing the blocking call.

Their main purpose is to integrate other event mechanisms into libev and
their use is somewhat advanced. They could be used, for example, to track
variable changes, implement your own watchers, integrate net-snmp or a
coroutine library and lots more. They are also occasionally useful if
…
used).

    struct ev_loop *loop_hi = ev_default_init (0);
    struct ev_loop *loop_lo = 0;
    ev_embed embed;

    // see if there is a chance of getting one that works
    // (remember that a flags value of 0 means autodetection)
    loop_lo = ev_embeddable_backends () & ev_recommended_backends ()
              ? ev_loop_new (ev_embeddable_backends () & ev_recommended_backends ())
              : 0;
…
C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).

    struct ev_loop *loop = ev_default_init (0);
    struct ev_loop *loop_socket = 0;
    ev_embed embed;

    if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
      if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
        {
          ev_embed_init (&embed, 0, loop_socket);
          ev_embed_start (loop, &embed);
…
and calls it in the wrong process, the fork handlers will be invoked, too,
of course.

=head3 The special problem of life after fork - how is it possible?

Most uses of C<fork ()> consist of forking, then some simple calls to set
up/change the process environment, followed by a call to C<exec ()>. This
sequence should be handled by libev without any problems.

This changes when the application actually wants to do event handling
in the child, or both parent and child, in effect "continuing" after the
…

There are some other functions of possible interest. Described. Here. Now.

=over 4

=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)

This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
watchers. This is useful if you want to wait for a single event on an fd
or timeout without having to allocate/configure/start/stop/free one or
…
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

    typedef struct {
      pthread_mutex_t lock; /* global loop lock */
      pthread_t tid;
      pthread_cond_t invoke_cv;
      ev_async async_w;
    } userdata;

    void prepare_loop (EV_P)
    {
      // for simplicity, we use a static userdata struct.
      static userdata u;

      ev_async_init (&u.async_w, async_cb);
      ev_async_start (EV_A_ &u.async_w);

      pthread_mutex_init (&u.lock, 0);
      pthread_cond_init (&u.invoke_cv, 0);

      // now associate this with the loop
      ev_set_userdata (EV_A_ &u);
      ev_set_invoke_pending_cb (EV_A_ l_invoke);
      ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

      // then create the thread running ev_run
      pthread_create (&u.tid, 0, l_run, EV_A);
    }

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:
…
To embed libev, see L</EMBEDDING>, but in short, it's easiest to create two
files, F<my_ev.h> and F<my_ev.c> that include the respective libev files:

    // my_ev.h
    #define EV_CB_DECLARE(type) struct my_coro *cb;
    #define EV_CB_INVOKE(watcher) switch_to ((watcher)->cb)
    #include "../libev/ev.h"

    // my_ev.c
    #define EV_H "my_ev.h"
    #include "../libev/ev.c"
…
The normal C API should work fine when used from C++: both ev.h and the
libev sources can be compiled as C++. Therefore, code that uses the C API
will work fine.

Proper exception specifications might have to be added to callbacks passed
to libev: exceptions may be thrown only from watcher callbacks, all other
callbacks (allocator, syserr, loop acquire/release and periodic reschedule
callbacks) must not throw exceptions, and might need a C<noexcept>
specification. If you have code that needs to be compiled as both C and
C++ you can use the C<EV_NOEXCEPT> macro for this:

    static void
    fatal_error (const char *msg) EV_NOEXCEPT
    {
      perror (msg);
      abort ();
    }

…
      void operator() (ev::io &w, int revents)
      {
        ...
      }
    };

    myfunctor f;

    ev::io w;
    w.set (&f);

…
gets automatically stopped and restarted when reconfiguring it with this
method.

For C<ev::embed> watchers this method is called C<set_embed>, to avoid
clashing with the C<set (loop)> method.

For C<ev::io> watchers there is an additional C<set> method that accepts a
new event mask only, and internally calls C<ev_io_modify>.

=item w->start ()

Starts the watcher. Note that there is no C<loop> argument, as the
constructor already stores the event loop.
…
    ev_vars.h
    ev_wrap.h

    ev_win32.c      required on win32 platforms only

    ev_select.c     only when select backend is enabled
    ev_poll.c       only when poll backend is enabled
    ev_epoll.c      only when the epoll backend is enabled
    ev_linuxaio.c   only when the linux aio backend is enabled
    ev_iouring.c    only when the linux io_uring backend is enabled
    ev_kqueue.c     only when the kqueue backend is enabled
    ev_port.c       only when the solaris port backend is enabled

F<ev.c> includes the backend files directly when enabled, so you only need
to compile this single file.

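As a sketch of what such a wrapper might look like - C<EV_STANDALONE> and
the C<EV_USE_*> symbols are the documented configuration macros, while the
file name and the particular backend selection here are assumptions:

```c
/* ev_embed.c - hypothetical wrapper: compile only this one file */
#define EV_STANDALONE 1   /* do not rely on an autoconf config.h */
#define EV_USE_SELECT 1   /* always-available fallback backend */
#define EV_USE_EPOLL  1   /* force-enable the epoll backend */
#include "ev.c"
```
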
=head3 LIBEVENT COMPATIBILITY API
…
available and will probe for kernel support at runtime. This will improve
C<ev_signal> and C<ev_async> performance and reduce resource consumption.
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.

=item EV_USE_SIGNALFD

If defined to be C<1>, then libev will assume that C<signalfd ()> is
available and will probe for kernel support at runtime. This enables
the use of C<EVFLAG_SIGNALFD> for faster and simpler signal handling. If
undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.

=item EV_USE_TIMERFD

If defined to be C<1>, then libev will assume that C<timerfd ()> is
available and will probe for kernel support at runtime. This allows
libev to detect time jumps accurately. If undefined, it will be enabled
if the headers indicate GNU/Linux + Glibc 2.8 or newer and define
C<TFD_TIMER_CANCEL_ON_SET>, otherwise disabled.

=item EV_USE_EVENTFD

If defined to be C<1>, then libev will assume that C<eventfd ()> is
available and will probe for kernel support at runtime. This will improve
C<ev_signal> and C<ev_async> performance and reduce resource consumption.
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.

=item EV_USE_SELECT

If undefined or defined to be C<1>, libev will compile in support for the
C<select>(2) backend. No attempt at auto-detection will be done: if no
other method takes over, select will be it. Otherwise the select backend
…
If defined to be C<1>, libev will compile in support for the Linux
C<epoll>(7) backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the preferred
backend for GNU/Linux systems. If undefined, it will be enabled if the
headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.

=item EV_USE_LINUXAIO

If defined to be C<1>, libev will compile in support for the Linux aio
backend (C<EV_USE_EPOLL> must also be enabled). If undefined, it will be
enabled on Linux, otherwise disabled.

=item EV_USE_IOURING

If defined to be C<1>, libev will compile in support for the Linux
io_uring backend (C<EV_USE_EPOLL> must also be enabled). Due to its
current limitations it has to be requested explicitly. If undefined, it
will be enabled on Linux, otherwise disabled.

=item EV_USE_KQUEUE

If defined to be C<1>, libev will compile in support for the BSD style
C<kqueue>(2) backend. Its actual availability will be detected at runtime,
…
called. If set to C<2>, then the internal verification code will be
called once per loop, which can slow down libev. If set to C<3>, then the
verification code will be called very frequently, which will slow down
libev considerably.

Verification errors are reported via C's C<assert> mechanism, so if you
disable that (e.g. by defining C<NDEBUG>) then no errors will be reported.

The default is C<1>, unless C<EV_FEATURES> overrides it, in which case it
will be C<0>.

=item EV_COMMON

…
structure (guaranteed by POSIX but not by ISO C for example), but it also
assumes that the same (machine) code can be used to call any watcher
callback: The watcher callbacks have different type signatures, but libev
calls them using an C<ev_watcher *> internally.

=item null pointers and integer zero are represented by 0 bytes

Libev uses C<memset> to initialise structs and arrays to C<0> bytes, and
relies on this setting pointers and integers to null.

=item pointer accesses must be thread-atomic

Accessing a pointer value must be atomic; it must both be readable and
writable in one piece - this is the case on all current architectures.
