…

details of the event, and then hand it over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific aio and C<epoll>
interfaces, the BSD-specific C<kqueue> and the Solaris-specific event port
mechanisms for file descriptor events (C<ev_io>), the Linux C<inotify>
interface (for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)),
relative timers (C<ev_timer>), absolute timers with customised
rescheduling (C<ev_periodic>), synchronous signals (C<ev_signal>), process
status change events (C<ev_child>), and event watchers dealing with the
event loop mechanism itself (C<ev_idle>, C<ev_embed>, C<ev_prepare> and

…

When libev detects a usage error such as a negative timer interval, then
it will print a diagnostic message and abort (via the C<assert> mechanism,
so C<NDEBUG> will disable this checking): these are programming errors in
the libev caller and need to be fixed there.

Via the C<EV_FREQUENT> macro you can compile in and/or enable extensive
consistency checking code inside libev that can be used to check for
internal inconsistencies, usually caused by application bugs.

Libev also has a few internal error-checking C<assert>ions. These do not
trigger under normal circumstances, as they indicate either a bug in libev
or worse.

=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the

…

You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.

Example: The following is the C<realloc> function that libev itself uses
which should work with C<realloc> and C<free> functions of all kinds and
is probably a good basis for your own implementation.

   static void *
   ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
   {
     if (size)
       return realloc (ptr, size);

     free (ptr);
     return 0;
   }

Example: Replace the libev allocator with one that waits a bit and then
retries.

   static void *
   persistent_realloc (void *ptr, size_t size)
   {
     if (!size)
       {
         free (ptr);
         return 0;
       }

     for (;;)
       {
         void *newptr = realloc (ptr, size);

         if (newptr)

…

make libev check for a fork in each iteration by enabling this flag.

This works by calling C<getpid ()> on every iteration of the loop,
and thus this might slow down your event loop if you do a lot of loop
iterations and little real work, but is usually not noticeable (on my
GNU/Linux system for example, C<getpid> is actually a simple 5-insn
sequence without a system call and thus I<very> fast, but my GNU/Linux
system also has C<pthread_atfork> which is even faster). (Update: glibc
version 2.25 apparently removed the C<getpid> optimisation again.)

The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking, although you still
have to ignore C<SIGPIPE>) when you use this flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

=item C<EVFLAG_NOINOTIFY>

…

unblocking the signals.

It's also required by POSIX in a threaded program, as libev calls
C<sigprocmask>, whose behaviour is officially unspecified.

=item C<EVFLAG_NOTIMERFD>

When this flag is specified, libev will avoid using a C<timerfd> to
detect time jumps. It will still be able to detect time jumps, although
more slowly and with lower accuracy, but it saves a file descriptor per
loop.

The current implementation only tries to use a C<timerfd> when the first
C<ev_periodic> watcher is started and falls back on other methods if it
cannot be created, but this behaviour might change in the future.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,

…

This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).

For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds) where total_fds is the total number of fds (or the highest

…

All this means that, in practice, C<EVBACKEND_SELECT> can be as fast or
faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.

While nominally embeddable in other event loops, this feature is broken in
a lot of kernel revisions, but probably(!) works in current versions.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_LINUXAIO> (value 64, Linux)

Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
io_submit(2) >>) event interface available in post-4.18 kernels (but libev
only tries to use it in 4.19+).

This is another Linux train wreck of an event interface.

If this backend works for you (as of this writing, it was very
experimental), it is the best event interface available on Linux and might
be well worth enabling - if it isn't available in your kernel this will
be detected and this backend will be skipped.

This backend can batch oneshot requests and supports a user-space ring
buffer to receive events. It also doesn't suffer from most of the design
problems of epoll (such as not being able to remove event sources from
the epoll set), and generally sounds too good to be true. Because, this
being the Linux kernel, of course it suffers from a whole new set of
limitations, forcing you to fall back to epoll, inheriting all its design
issues.

For one, it is not easily embeddable (but probably could be done using
an event fd at some extra overhead). It is also subject to a system-wide
limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
requests are left, this backend will be skipped during initialisation, and
will switch to epoll when the loop is active.

Most problematic in practice, however, is that not all file descriptors
work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
files, F</dev/null> and many others are supported, but ttys do not work
properly (a known bug that the kernel developers don't care about, see
L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
(yet?) a generic event polling interface.

Overall, it seems the Linux developers just don't want Linux to have a
generic event handling mechanism other than C<select> or C<poll>.

To work around all these problems, the current version of libev uses its
epoll backend as a fallback for file descriptor types that do not work. Or
falls back completely to epoll if the kernel acts up.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time this backend was
implemented, it was broken on all BSDs except NetBSD (usually it doesn't
work reliably with anything but sockets and pipes, except on Darwin,
where of course it's completely useless). Unlike epoll, however, whose
brokenness is by design, these kqueue bugs can be (and mostly have been)
fixed without API changes to existing programs. For this reason it's not
being "auto-detected" on all platforms unless you explicitly specify it
in the flags (i.e. using C<EVBACKEND_KQUEUE>) or libev was compiled on a
known-to-be-good (-enough) system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident. Support for C<fork ()> is very bad (you
might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work

…

Example: Use whatever libev has to offer, but make sure that kqueue is
used if available.

   struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);

Example: Similarly, on Linux, you might want to take advantage of the
Linux aio backend if possible, but fall back to something else if that
isn't available.

   struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);

=item ev_loop_destroy (loop)

Destroys an event loop object (frees all memory and kernel state
etc.). None of the active event watchers will be stopped in the normal
sense, so e.g. C<ev_is_active> might still return true. It is your

…

If you need dynamically allocated loops it is better to use C<ev_loop_new>
and C<ev_loop_destroy>.

=item ev_loop_fork (loop)

This function sets a flag that causes subsequent C<ev_run> iterations
to reinitialise the kernel state for backends that have one. Despite
the name, you can call it anytime you are allowed to start or stop
watchers (except inside an C<ev_prepare> callback), but it makes most
sense after forking, in the child process. You I<must> call it (or use
C<EVFLAG_FORKCHECK>) in the child before resuming or calling C<ev_run>.

In addition, if you want to reuse a loop (via this function or
C<EVFLAG_FORKCHECK>), you I<also> have to ignore C<SIGPIPE>.

Again, you I<have> to call it on I<any> loop that you want to re-use after
a fork, I<even if you do not plan to use the loop in the parent>. This is
because some kernel interfaces *cough* I<kqueue> *cough* do funny things
during fork.

…

Many event loops support I<watcher priorities>, which are usually small
integers that influence the ordering of event callback invocation
between watchers in some way, all else being equal.

In libev, watcher priorities can be set using C<ev_set_priority>. See its
description for the more technical details such as the actual priority
range.

There are two common ways in which these priorities are interpreted by
event loops:

…

But really, best use non-blocking mode.

=head3 The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing
a file descriptor (either due to calling C<close> explicitly or any other
means, such as C<dup2>). The reason is that you register interest in some
file descriptor, but when it goes away, the operating system will silently
drop this interest. If another file descriptor with the same number then
is registered with libev, there is no efficient way to see that this is,
in fact, a different file descriptor.

To avoid having to explicitly tell libev about such cases, libev follows
the following policy: Each time C<ev_io_set> is being called, libev
will assume that this is potentially a new file descriptor, otherwise
it is assumed that the file descriptor stays the same. That means that

…

when you rarely read from a file instead of from a socket, and want to
reuse the same code path.

=head3 The special problem of fork

Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()>
at all or exhibit useless behaviour. Libev fully supports fork, but needs
to be told about it in the child if you want to continue to use it in the
child.

To support fork in your child processes, you have to call C<ev_loop_fork
()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

…

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the
timeout on the current time, use something like the following to adjust
for it:

   ev_timer_set (&timer, after + (ev_time () - ev_now ()), 0.);

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>, although that will push the event time of all outstanding events
further into the future.

=head3 The special problem of unsynchronised clocks

Modern systems have a variety of clocks - libev itself uses the normal
"wall clock" clock and, if available, the monotonic clock (to avoid time

…

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

=item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)

Configure the timer to trigger after C<after> seconds (fractional and
negative values are supported). If C<repeat> is C<0.>, then it will
automatically be stopped once the timeout is reached. If it is positive,
then the timer will automatically be configured to trigger again C<repeat>
seconds later, again, and again, until stopped manually.

The timer itself will do a best-effort at avoiding drift, that is, if
you configure a timer to trigger every 10 seconds, then it will normally
trigger at exactly 10 second intervals. If, however, your program cannot
keep up with the timer (because it takes longer than those 10 seconds to

…

Periodic watchers are also timers of a kind, but they are very versatile
(and unfortunately a bit complex).

Unlike C<ev_timer>, periodic watchers are not based on real time (or
relative time, the physical time that passes) but on wall clock time
(absolute time, the thing you can read on your calendar or clock). The
difference is that wall clock time can run faster or slower than real
time, and time jumps are not uncommon (e.g. when you adjust your
wrist-watch).

You can tell a periodic watcher to trigger after some specific point

…

C<ev_timer>, which would still trigger roughly 10 seconds after starting
it, as it uses a relative timeout).

C<ev_periodic> watchers can also be used to implement vastly more complex
timers, such as triggering an event on each "midnight, local time", or
other complicated rules. This cannot easily be done with C<ev_timer>
watchers, as those cannot react to time jumps.

As with timers, the callback is guaranteed to be invoked only when the
point in time where it is supposed to trigger has passed. If multiple
timers become ready during the same loop iteration then the ones with
earlier time-out values are invoked before ones with later time-out values

…


NOTE: I<< This callback must always return a time that is higher than or
equal to the passed C<now> value >>.

This can be used to create very complex timers, such as a timer that
triggers on "next midnight, local time". To do this, you would calculate
the next midnight after C<now> and return the timestamp value for
this. Here is a (completely untested, no error checking) example of how to
do this:

   #include <time.h>

   static ev_tstamp
   my_rescheduler (ev_periodic *w, ev_tstamp now)
   {
     time_t tnow = (time_t)now;
     struct tm tm;
     localtime_r (&tnow, &tm);

     tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day
     ++tm.tm_mday; // midnight next day

     return mktime (&tm);
   }

Note: this code might run into trouble on days that have more than two
midnights (beginning and end).
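The same computation in standalone form, with one extra hedge: setting C<tm_isdst> to C<-1> before calling C<mktime ()> lets the C library work out the DST flag for the target day instead of inheriting today's. (C<next_midnight> is an illustrative helper, not part of libev, and it still ignores error returns:)

```c
#include <time.h>

/* next local midnight strictly after `now` */
static time_t
next_midnight (time_t now)
{
  struct tm tm;
  localtime_r (&now, &tm);

  tm.tm_sec = tm.tm_min = tm.tm_hour = 0; /* midnight, current day */
  ++tm.tm_mday;                           /* midnight, next day    */
  tm.tm_isdst = -1;                       /* let mktime () decide DST */

  return mktime (&tm);
}
```

Inside a reschedule callback you would convert the result back to an C<ev_tstamp> before returning it.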

=back

=item ev_periodic_again (loop, ev_periodic *)

…

   ev_periodic hourly_tick;
   ev_periodic_init (&hourly_tick, clock_cb,
                     fmod (ev_now (loop), 3600.), 3600., 0);
   ev_periodic_start (loop, &hourly_tick);


=head2 C<ev_signal> - signal me when a signal gets signalled!

Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
…

Prepare and check watchers are often (but not always) used in pairs:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

You I<must not> call C<ev_run> (or similar functions that enter the
current event loop) or C<ev_loop_fork> from either C<ev_prepare> or
C<ev_check> watchers. Other loops than the current one are fine,
however. The rationale behind this is that you do not need to check
for recursion in those watchers, i.e. the sequence will always be
C<ev_prepare>, blocking, C<ev_check>, so if you have one watcher of each
kind they will always be called in pairs bracketing the blocking call.

Their main purpose is to integrate other event mechanisms into libev and
their use is somewhat advanced. They could be used, for example, to track
variable changes, implement your own watchers, integrate net-snmp or a
coroutine library and lots more. They are also occasionally useful if
…
used).

   struct ev_loop *loop_hi = ev_default_init (0);
   struct ev_loop *loop_lo = 0;
   ev_embed embed;

   // see if there is a chance of getting one that works
   // (remember that a flags value of 0 means autodetection)
   loop_lo = ev_embeddable_backends () & ev_recommended_backends ()
             ? ev_loop_new (ev_embeddable_backends () & ev_recommended_backends ())
             : 0;
…
C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).

   struct ev_loop *loop = ev_default_init (0);
   struct ev_loop *loop_socket = 0;
   ev_embed embed;

   if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
       {
         ev_embed_init (&embed, 0, loop_socket);
         ev_embed_start (loop, &embed);
…
and calls it in the wrong process, the fork handlers will be invoked, too,
of course.

=head3 The special problem of life after fork - how is it possible?

Most uses of C<fork ()> consist of forking, then some simple calls to set
up/change the process environment, followed by a call to C<exec ()>. This
sequence should be handled by libev without any problems.

This changes when the application actually wants to do event handling
in the child, or in both parent and child, in effect "continuing" after the
…

There are some other functions of possible interest. Described. Here. Now.

=over 4

=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)

This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
watchers. This is useful if you want to wait for a single event on an fd
or timeout without having to allocate/configure/start/stop/free one or
…
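What C<ev_once> packages up is, at heart, a one-shot "fd or timeout, whichever comes first" wait. A rough plain-POSIX equivalent of that race (not libev code, and without libev's watcher bookkeeping) looks like this:

```c
#include <poll.h>
#include <unistd.h>

/* wait up to timeout_ms for fd to become readable;
   returns 1 if the fd won the race, 0 on timeout, -1 on error */
static int
wait_readable_or_timeout (int fd, int timeout_ms)
{
  struct pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLIN;

  return poll (&pfd, 1, timeout_ms);
}
```

C<ev_once> resolves the same race, but inside the event loop alongside all other watchers, with the callback invocation and the stopping of both internal watchers done for you.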
To embed libev, see L</EMBEDDING>, but in short, it's easiest to create two
files, F<my_ev.h> and F<my_ev.c> that include the respective libev files:

   // my_ev.h
   #define EV_CB_DECLARE(type) struct my_coro *cb;
   #define EV_CB_INVOKE(watcher) switch_to ((watcher)->cb)
   #include "../libev/ev.h"

   // my_ev.c
   #define EV_H "my_ev.h"
   #include "../libev/ev.c"
…
The normal C API should work fine when used from C++: both ev.h and the
libev sources can be compiled as C++. Therefore, code that uses the C API
will work fine.

Proper exception specifications might have to be added to callbacks passed
to libev: exceptions may be thrown only from watcher callbacks, all other
callbacks (allocator, syserr, loop acquire/release and periodic reschedule
callbacks) must not throw exceptions, and might need a C<noexcept>
specification. If you have code that needs to be compiled as both C and
C++ you can use the C<EV_NOEXCEPT> macro for this:

   static void
   fatal_error (const char *msg) EV_NOEXCEPT
   {
     perror (msg);
     abort ();
   }

…
     void operator() (ev::io &w, int revents)
     {
       ...
     }
   };

   myfunctor f;

   ev::io w;
   w.set (&f);

…
   ev_vars.h
   ev_wrap.h

   ev_win32.c      required on win32 platforms only

   ev_select.c     only when select backend is enabled
   ev_poll.c       only when poll backend is enabled
   ev_epoll.c      only when the epoll backend is enabled
   ev_linuxaio.c   only when the linux aio backend is enabled
   ev_iouring.c    only when the linux io_uring backend is enabled
   ev_kqueue.c     only when the kqueue backend is enabled
   ev_port.c       only when the solaris port backend is enabled

F<ev.c> includes the backend files directly when enabled, so you only need
to compile this single file.

=head3 LIBEVENT COMPATIBILITY API
…
available and will probe for kernel support at runtime. This will improve
C<ev_signal> and C<ev_async> performance and reduce resource consumption.
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.

=item EV_USE_SIGNALFD

If defined to be C<1>, then libev will assume that C<signalfd ()> is
available and will probe for kernel support at runtime. This enables
the use of EVFLAG_SIGNALFD for faster and simpler signal handling. If
undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.

=item EV_USE_TIMERFD

If defined to be C<1>, then libev will assume that C<timerfd ()> is
available and will probe for kernel support at runtime. This allows
libev to detect time jumps accurately. If undefined, it will be enabled
if the headers indicate GNU/Linux + Glibc 2.8 or newer and define
C<TFD_TIMER_CANCEL_ON_SET>, otherwise disabled.

=item EV_USE_EVENTFD

If defined to be C<1>, then libev will assume that C<eventfd ()> is
available and will probe for kernel support at runtime. This will improve
C<ev_signal> and C<ev_async> performance and reduce resource consumption.
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.

=item EV_USE_SELECT

If undefined or defined to be C<1>, libev will compile in support for the
C<select>(2) backend. No attempt at auto-detection will be done: if no
other method takes over, select will be it. Otherwise the select backend
…
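All of these C<EV_USE_*> switches are plain preprocessor defines. For example, a build that force-enables the fd-based primitives for a known-modern Linux target might use a wrapper file like this (illustrative configuration only - the equivalent C<-D> compiler flags work just as well):

```c
/* wrapper to be compiled instead of ev.c directly */
#define EV_USE_EVENTFD  1
#define EV_USE_SIGNALFD 1
#define EV_USE_TIMERFD  1
#include "ev.c"
```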
If defined to be C<1>, libev will compile in support for the Linux
C<epoll>(7) backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the preferred
backend for GNU/Linux systems. If undefined, it will be enabled if the
headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.

=item EV_USE_LINUXAIO

If defined to be C<1>, libev will compile in support for the Linux aio
backend (C<EV_USE_EPOLL> must also be enabled). If undefined, it will be
enabled on linux, otherwise disabled.

=item EV_USE_IOURING

If defined to be C<1>, libev will compile in support for the Linux
io_uring backend (C<EV_USE_EPOLL> must also be enabled). Due to its
current limitations it has to be requested explicitly. If undefined, it
will be enabled on linux, otherwise disabled.

=item EV_USE_KQUEUE

If defined to be C<1>, libev will compile in support for the BSD style
C<kqueue>(2) backend. Its actual availability will be detected at runtime,
…
called. If set to C<2>, then the internal verification code will be
called once per loop, which can slow down libev. If set to C<3>, then the
verification code will be called very frequently, which will slow down
libev considerably.

Verification errors are reported via C's C<assert> mechanism, so if you
disable that (e.g. by defining C<NDEBUG>) then no errors will be reported.

The default is C<1>, unless C<EV_FEATURES> overrides it, in which case it
will be C<0>.

=item EV_COMMON

…
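Like the other compile-time switches, C<EV_VERIFY> is a plain preprocessor define; a debug build that wants once-per-loop checking might look like this (illustrative - and remember that defining C<NDEBUG> would silence the underlying C<assert>s):

```c
/* debug wrapper around the libev source */
#define EV_VERIFY 2   /* run internal verification once per loop */
#include "ev.c"
```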
structure (guaranteed by POSIX but not by ISO C for example), but it also
assumes that the same (machine) code can be used to call any watcher
callback: The watcher callbacks have different type signatures, but libev
calls them using an C<ev_watcher *> internally.

=item null pointers and integer zero are represented by 0 bytes

Libev uses C<memset> to initialise structs and arrays to C<0> bytes, and
relies on this setting pointers and integers to null.

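This assumption is easy to state in miniature (a standalone sketch, not libev source - C<fake_watcher> is made up for illustration):

```c
#include <stddef.h>
#include <string.h>

struct fake_watcher
{
  void *data;  /* expected to read back as NULL after memset */
  int active;  /* expected to read back as 0 after memset */
};

/* zero-initialise the libev way: memset, not member-by-member */
static void
fake_watcher_clear (struct fake_watcher *w)
{
  memset (w, 0, sizeof *w);
}
```

On a (hypothetical) platform where null pointers are not all-bits-zero, C<data> would not compare equal to C<NULL> afterwards - that is exactly the representation libev rules out.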
=item pointer accesses must be thread-atomic

Accessing a pointer value must be atomic: it must both be readable and
writable in one piece - this is the case on all current architectures.

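On C11 compilers this assumption can be made explicit rather than implicit (an illustrative sketch - libev itself uses plain loads and stores and simply relies on their indivisibility):

```c
#include <stdatomic.h>
#include <stddef.h>

/* a pointer handed from one thread to another; the _Atomic
   qualifier guarantees the whole-pointer reads and writes that
   libev otherwise assumes from the architecture */
static void *_Atomic shared = NULL;

static void
publish (void *p)
{
  atomic_store (&shared, p);
}

static void *
consume (void)
{
  return atomic_load (&shared);
}
```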
5459 | |