Comparing libev/ev.pod (file contents):
Revision 1.98 by root, Sat Dec 22 06:10:25 2007 UTC vs.
Revision 1.108 by root, Mon Dec 24 10:39:21 2007 UTC

=head1 SYNOPSIS

   #include <ev.h>

=head2 EXAMPLE PROGRAM

   #include <ev.h>

   ev_io stdin_watcher;
   ev_timer timeout_watcher;
You register interest in certain events by registering so-called I<event
watchers>, which are relatively small C structures you initialise with the
details of the event, and then hand over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
for file descriptor events (C<ev_io>), the Linux C<inotify> interface
(for C<ev_stat>), relative timers (C<ev_timer>), absolute timers

It also is quite fast (see this
L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent
for example).

=head2 CONVENTIONS

Libev is very configurable. In this manual the default configuration will
be described, which supports multiple event loops. For more info about
various configuration options please have a look at the B<EMBED> section in
this manual. If libev was configured without support for multiple event
loops, then all functions taking an initial argument of name C<loop>
(which is always of type C<struct ev_loop *>) will not have this argument.
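
As a small illustration of this convention (reusing C<stdin_watcher> from
the example program above, and referring to the C<EV_MULTIPLICITY> symbol
described in the B<EMBED> section): with the default configuration the
loop is passed explicitly, while a build without multiple event loop
support simply drops that argument.

   struct ev_loop *loop = ev_default_loop (0);

   /* default configuration: the loop is the first argument */
   ev_io_start (loop, &stdin_watcher);

   /* without multiple event loop support the same call would read:
    *   ev_io_start (&stdin_watcher);
    */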

=head2 TIME REPRESENTATION

Libev represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970, details are complicated, don't ask). This type is
called C<ev_tstamp>, which is what you should use too. It usually aliases

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the number of
readiness notifications you get per iteration.
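
A minimal sketch of such an accept loop inside an C<ev_io> callback
follows; C<handle_client ()> stands for whatever your application does
with a new connection, and the listening socket is assumed to be
non-blocking:

   static void
   listen_cb (struct ev_loop *loop, ev_io *w, int revents)
   {
     /* drain the accept queue: keep accepting until it is empty */
     for (;;)
       {
         int fd = accept (w->fd, 0, 0);

         if (fd < 0)
           break; /* EAGAIN/EWOULDBLOCK: nothing left to accept */

         handle_client (loop, fd); /* placeholder for application code */
       }
   }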

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest fd),
epoll scales either O(1) or O(active_fds). The epoll design has a number
of shortcomings, such as silently dropping events in some hard-to-detect
cases and requiring a syscall per fd change, no fork support and bad
support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, C<dup ()>'ed file descriptors might not work
very well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible, i.e.
keep at least one watcher active per fd at all times.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
with anything but sockets and pipes, except on Darwin, where of course
it's completely useless). For this reason it's not being "autodetected"
unless you explicitly specify it in the flags (i.e. using
C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)
system like NetBSD.

You can still embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra syscall as with C<EVBACKEND_EPOLL>, it still adds up to two event
changes per incident, support for C<fork ()> is very bad and it drops fds
silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back
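
As a small sketch of passing such a backend mask when creating the default
loop (the particular combination shown is only an example):

   /* try all backends except kqueue */
   struct ev_loop *loop = ev_default_loop (EVBACKEND_ALL & ~EVBACKEND_KQUEUE);

   if (!loop)
     /* no usable backend could be initialised */
     exit (EXIT_FAILURE);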

If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are

overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency (the watcher callback will be called later). C<ev_io> watchers
will not be affected. Setting this to a non-null value will not introduce
any overhead in libev.
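
A small sketch of setting both intervals on a loop; the values are merely
examples in line with the advice below, C<ev_set_io_collect_interval ()>
is the call mentioned earlier, and C<ev_set_timeout_collect_interval ()>
is its timeout counterpart:

   /* batch I/O events for up to 50ms, timeouts for up to 100ms */
   ev_set_io_collect_interval (loop, 0.05);
   ev_set_timeout_collect_interval (loop, 0.1);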

Many (busy) programs can usually benefit by setting the io collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,

optimisations to libev.

=head3 The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors,
but only events for the underlying file descriptions. That means when you
have C<dup ()>'ed file descriptors and register events for them, only one
file descriptor might actually receive events.

There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or to resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
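
A minimal sketch of the latter workaround, forcing one of these backends
when creating a loop (the particular flag chosen is just an example):

   /* dup ()'ed fds will be watched, so avoid epoll/kqueue for this loop */
   struct ev_loop *loop = ev_loop_new (EVBACKEND_POLL);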

=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit

semantics of C<ev_stat> watchers, which means that libev sometimes needs
to fall back to regular polling again even with inotify, but changes are
usually detected immediately, and if the file exists there will be no
polling.

=head3 Inotify

When C<inotify (7)> support has been compiled into libev (generally only
available on Linux) and present at runtime, it will be used to speed up
change detection where possible. The inotify descriptor will be created
lazily when the first C<ev_stat> watcher is being started.

Inotify presence does not change the semantics of C<ev_stat> watchers,
except that changes might be detected earlier and, in some cases, regular
C<stat> calls can be avoided. Even in the presence of inotify support
there are many cases where libev has to resort to regular C<stat> polling.

(There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times.)

=head3 The special problem of stat time resolution

The C<stat ()> syscall only supports full-second resolution portably, and
even on systems where the resolution is higher, many filesystems still
only support whole seconds.

That means that, if the time is the only thing that changes, you might
miss updates: on the first update, C<ev_stat> detects a change and calls
your callback, which does something. When there is another update within
the same second, C<ev_stat> will be unable to detect it.

The solution to this is to delay acting on a change for a second (or until
the next second boundary), using a roughly one-second delay C<ev_timer>
(C<ev_timer_set (w, 0., 1.01); ev_timer_again (loop, w)>). The C<.01>
is added to work around small timing inconsistencies of some operating
systems.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)

=item const char *path [read-only]

The filesystem path that is being watched.

=back

=head3 Examples

Example: Watch C</etc/passwd> for attribute changes.

   static void
   passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)
   {
     ...
   }

   ...
   ev_stat passwd;

   ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
   ev_stat_start (loop, &passwd);

Example: Like above, but additionally use a one-second delay so we do not
miss updates (however, frequent updates will delay processing, too, so
one might do the work both on C<ev_stat> callback invocation I<and> on
C<ev_timer> callback invocation).

   static ev_stat passwd;
   static ev_timer timer;

   static void
   timer_cb (EV_P_ ev_timer *w, int revents)
   {
     ev_timer_stop (EV_A_ w);

     /* now it's one second after the most recent passwd change */
   }

   static void
   stat_cb (EV_P_ ev_stat *w, int revents)
   {
     /* reset the one-second timer */
     ev_timer_again (EV_A_ &timer);
   }

   ...
   ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
   ev_stat_start (loop, &passwd);
   ev_timer_init (&timer, timer_cb, 0., 1.01);

=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher

It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
priority, to ensure that they are being run before any other watchers
after the poll. Also, C<ev_check> watchers (and C<ev_prepare> watchers,
too) should not activate ("feed") events into libev. While libev fully
supports this, they will be called before other C<ev_check> watchers
did their job. As C<ev_check> watchers are often used to embed other
(non-libev) event loops, those other event loops might be in an unusable
state until their C<ev_check> watcher ran (always remind yourself to
coexist peacefully with others).
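
A short sketch of setting up such a high-priority check watcher (the
callback name C<embed_check_cb> is only an example):

   ev_check check_w;

   ev_check_init (&check_w, embed_check_cb);
   ev_set_priority (&check_w, EV_MAXPRI);
   ev_check_start (loop, &check_w);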

=head3 Watcher-Specific Functions and Data Members

=over 4

=head2 C<ev_embed> - when one backend isn't enough...

This is a rather advanced watcher type that lets you embed one event loop
into another (currently only C<ev_io> events are supported in the embedded
loop, other types of watchers might be handled in a delayed or incorrect
fashion and must not be used). (See the portability notes below.)

There are primarily two reasons you would want that: work around bugs and
prioritise I/O.

As an example for a bug workaround, the kqueue backend might only support

       ev_embed_start (loop_hi, &embed);
     }
   else
     loop_lo = loop_hi;

=head3 Portability notes

Kqueue is nominally embeddable, but this is broken on all BSDs that I
tried, in various ways. Usually the embedded event loop will simply never
receive events, sometimes it will only trigger a few times, sometimes in a
loop. Epoll is also nominally embeddable, but many Linux kernel versions
will always report the epoll fd as ready, even when no events are pending.

While libev allows embedding these backends (they are contained in
C<ev_embeddable_backends ()>), take extreme care that it will actually
work.

When in doubt, create a dynamic event loop forced to use sockets (this
usually works) and possibly another thread and a pipe or so to report to
your main event loop.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)

than enough. If you need to manage thousands of children you might want to
increase this value (I<must> be a power of two).

=item EV_INOTIFY_HASHSIZE

C<ev_stat> watchers use a small hash table to distribute workload by
inotify watch id. The default size is C<16> (or C<1> with C<EV_MINIMAL>),
usually more than enough. If you need to manage thousands of C<ev_stat>
watchers you might want to increase this value (I<must> be a power of
two).
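
As a small illustration of overriding such a compile-time symbol when
embedding the libev source into your program (the value C<64> is arbitrary
and only an example):

   /* in the file that includes the libev source */
   #define EV_INOTIFY_HASHSIZE 64
   #include "ev.c"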

=item Starting and stopping timer/periodic watchers: O(log skipped_other_timers)

This means that, when you have a watcher that triggers in one hour and
there are 100 watchers that would trigger before that, then inserting will
have to skip roughly seven (C<ld 100>) of these watchers.

=item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)

That means that changing a timer costs less than removing/adding it,
as only the relative motion in the event queue has to be paid for.

=item Starting io/check/prepare/idle/signal/child watchers: O(1)

These just add the watcher into an array or at the head of a list.

=item Stopping check/prepare/idle watchers: O(1)

=item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))

These watchers are stored in lists, which then need to be walked to find
the correct watcher to remove. The lists are usually short (you don't
usually have many watchers waiting for the same fd or signal).

=item Finding the next timer in each loop iteration: O(1)

By virtue of using a binary heap, the next timer is always found at the
beginning of the storage array.

=item Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)

A change means an I/O watcher gets started or stopped, which requires
libev to recalculate its status (and possibly tell the kernel, depending
on the backend and whether C<ev_io_set> was used).

=item Activating one watcher (putting it into the pending state): O(1)

=item Priority handling: O(number_of_priorities)

Priorities are implemented by allocating some space for each
priority. When doing priority-based operations, libev usually has to
linearly search all the priorities, but starting/stopping and activating
watchers becomes O(1) w.r.t. priority handling.

=back

=head1 AUTHOR
