…
.\}
.rm #[ #] #H #V #F C
.\" ========================================================================
.\"
.IX Title "LIBEV 3"
.TH LIBEV 3 "2019-06-25" "libev-4.25" "libev - high performance full featured event loop"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
.nh
.SH "NAME"
…
This backend maps \f(CW\*(C`EV_READ\*(C'\fR to \f(CW\*(C`POLLIN | POLLERR | POLLHUP\*(C'\fR, and
\&\f(CW\*(C`EV_WRITE\*(C'\fR to \f(CW\*(C`POLLOUT | POLLERR | POLLHUP\*(C'\fR.
.ie n .IP """EVBACKEND_EPOLL"" (value 4, Linux)" 4
.el .IP "\f(CWEVBACKEND_EPOLL\fR (value 4, Linux)" 4
.IX Item "EVBACKEND_EPOLL (value 4, Linux)"
Use the Linux-specific \fBepoll\fR\|(7) interface (for both pre\- and post\-2.6.9
kernels).
.Sp
For few fds, this backend is a little slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds) where total_fds is the total number of fds (or the highest
…
This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as
\&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR.
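.Sp
For illustration, a sketch of requesting a specific backend such as epoll
explicitly: \f(CW\*(C`ev_loop_new\*(C'\fR returns 0 if none of the requested
backends could be initialised, so falling back to the default flags is
straightforward:
.Sp
.Vb 3
\&    struct ev_loop *loop = ev_loop_new (EVBACKEND_EPOLL);
\&    if (!loop)
\&      loop = ev_loop_new (EVFLAG_AUTO); /* let libev pick a backend */
.Ve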
.ie n .IP """EVBACKEND_LINUXAIO"" (value 64, Linux)" 4
.el .IP "\f(CWEVBACKEND_LINUXAIO\fR (value 64, Linux)" 4
.IX Item "EVBACKEND_LINUXAIO (value 64, Linux)"
Use the Linux-specific Linux \s-1AIO\s0 (\fInot\fR \f(CWaio(7)\fR but
\&\f(CWio_submit(2)\fR) event interface available in post\-4.18 kernels (but
libev only tries to use it in 4.19+).
.Sp
This is another Linux train wreck of an event interface.
.Sp
If this backend works for you (as of this writing, it was very
experimental), it is the best event interface available on Linux and might
be well worth enabling it \- if it isn't available in your kernel, this
will be detected and this backend will be skipped.
.Sp
This backend can batch oneshot requests and supports a user-space ring
buffer to receive events. It also doesn't suffer from most of the design
problems of epoll (such as not being able to remove event sources from
the epoll set), and generally sounds too good to be true. Because, this
being the Linux kernel, of course it suffers from a whole new set of
limitations, forcing you to fall back to epoll, inheriting all its design
issues.
.Sp
For one, it is not easily embeddable (but probably could be done using
an event fd at some extra overhead). It also is subject to a system-wide
limit that can be configured in \fI/proc/sys/fs/aio\-max\-nr\fR. If no \s-1AIO\s0
requests are left, this backend will be skipped during initialisation, and
will switch to epoll when the loop is active.
.Sp
Most problematic in practice, however, is that not all file descriptors
work with it. For example, in Linux 5.1, \s-1TCP\s0 sockets, pipes, event fds,
files, \fI/dev/null\fR and many others are supported, but ttys do not work
properly (a known bug that the kernel developers don't care about, see
<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
(yet?) a generic event polling interface.
.Sp
Overall, it seems the Linux developers just don't want Linux to have a
generic event handling mechanism other than \f(CW\*(C`select\*(C'\fR or \f(CW\*(C`poll\*(C'\fR.
.Sp
To work around all these problems, the current version of libev uses its
epoll backend as a fallback for file descriptor types that do not work,
or falls back completely to epoll if the kernel acts up.
.Sp
This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as
\&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR.
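.Sp
Because of the epoll fallback, it can be useful to check at runtime which
backend a loop actually ended up with. A minimal sketch using
\&\f(CW\*(C`ev_backend\*(C'\fR, which returns the \f(CW\*(C`EVBACKEND_*\*(C'\fR flag in
use:
.Sp
.Vb 3
\&    struct ev_loop *loop = ev_loop_new (EVBACKEND_LINUXAIO);
\&    if (loop && ev_backend (loop) == EVBACKEND_LINUXAIO)
\&      ; /* linux aio was actually chosen */
.Ve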
.ie n .IP """EVBACKEND_KQUEUE"" (value 8, most \s-1BSD\s0 clones)" 4
.el .IP "\f(CWEVBACKEND_KQUEUE\fR (value 8, most \s-1BSD\s0 clones)" 4
.IX Item "EVBACKEND_KQUEUE (value 8, most BSD clones)"
Kqueue deserves special mention, as at the time this backend was
implemented, it was broken on all BSDs except NetBSD (usually it doesn't
work reliably with anything but sockets and pipes, except on Darwin,
where of course it's completely useless). Unlike epoll, however, whose
brokenness is by design, these kqueue bugs can be (and mostly have been)
fixed without \s-1API\s0 changes to existing programs. For this reason it's not
being \*(L"auto-detected\*(R" on all platforms unless you explicitly specify it
in the flags (i.e. using \f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a
known-to-be-good (\-enough) system like NetBSD.
.Sp
You can still embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See \f(CW\*(C`ev_embed\*(C'\fR watchers for more info.
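.Sp
A sketch of this embedding approach (it assumes kqueue is among the
backends reported by \f(CW\*(C`ev_embeddable_backends\*(C'\fR on the target
platform):
.Sp
.Vb 12
\&    struct ev_loop *loop = ev_default_loop (0);
\&    struct ev_loop *loop_socket = 0;
\&    static ev_embed embed;
\&
\&    if (ev_supported_backends () & ev_embeddable_backends () & EVBACKEND_KQUEUE)
\&      loop_socket = ev_loop_new (EVBACKEND_KQUEUE);
\&
\&    if (loop_socket)
\&      {
\&        ev_embed_init (&embed, 0, loop_socket);
\&        ev_embed_start (loop, &embed);
\&      }
.Ve
.Sp
Sockets would then be registered on \f(CW\*(C`loop_socket\*(C'\fR (if it could be
created), everything else on \f(CW\*(C`loop\*(C'\fR.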
.Sp
It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with \f(CW\*(C`EVBACKEND_EPOLL\*(C'\fR, it still adds up to
two event changes per incident. Support for \f(CW\*(C`fork ()\*(C'\fR is very bad (you
might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.
.Sp
This backend usually performs well under most conditions.
.Sp
While nominally embeddable in other event loops, this doesn't work