…

=head1 SYNOPSIS

  #include <ev.h>

=head2 EXAMPLE PROGRAM

  #include <ev.h>

  ev_io stdin_watcher;
  ev_timer timeout_watcher;
…
You register interest in certain events by registering so-called I<event
watchers>, which are relatively small C structures that you initialise
with the details of the event and then hand over to libev by I<starting>
the watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
for file descriptor events (C<ev_io>), the Linux C<inotify> interface
(for C<ev_stat>), relative timers (C<ev_timer>), absolute timers
…

It is also quite fast (see this
L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent,
for example).

=head2 CONVENTIONS

Libev is very configurable. In this manual the default configuration will
be described, which supports multiple event loops. For more information
about the various configuration options please have a look at the
B<EMBED> section in this manual. If libev was configured without support
for multiple event loops, then all functions taking an initial argument
of name C<loop> (which is always of type C<struct ev_loop *>) will not
have this argument.
=head2 TIME REPRESENTATION

Libev represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970, details are complicated, don't ask). This type is
called C<ev_tstamp>, which is what you should use too. It usually aliases
…
=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to
have a look at C<ev_set_io_collect_interval ()> to increase the amount
of readiness notifications you get per iteration.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on Windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except that it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds). The epoll design has a
number of shortcomings, such as silently dropping events in some
hard-to-detect cases and requiring a syscall per fd change, no fork
support and bad support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, C<dup ()>'ed file descriptors might not work
very well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible,
i.e. keep at least one watcher active per fd at all times.

While nominally embeddable in other event loops, this feature is broken
in all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
…
course). While stopping, setting and starting an I/O watcher never causes
an extra syscall as it does with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident; support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on
Solaris, it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors, a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back
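
The C<accept ()>-in-a-loop advice for C<EVBACKEND_SELECT> above can be
sketched in plain POSIX C. Nothing below is libev API: C<listener> is
assumed to be a non-blocking listening socket, C<handle> is a hypothetical
per-connection setup function supplied by the caller, and the loop would
normally live inside an C<ev_io> callback for the listening fd:

```c
#include <errno.h>
#include <sys/socket.h>

/* accept as many pending connections as possible in one go;
   returns the number of connections accepted */
static int
accept_all (int listener, void (*handle)(int fd))
{
  int count = 0;

  for (;;)
    {
      int fd = accept (listener, 0, 0);

      if (fd < 0)
        {
          if (errno == EINTR)
            continue; /* interrupted, just retry */

          /* EAGAIN/EWOULDBLOCK: queue drained (or a real error,
             which a server would log) */
          break;
        }

      handle (fd);
      ++count;
    }

  return count;
}
```

Draining the whole accept queue per readiness notification reduces the
number of C<select> round trips per accepted connection.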
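Both the epoll and event-port notes above recommend non-blocking I/O to
cope with spurious readiness notifications. A minimal POSIX sketch of
that pattern (no libev calls involved):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* switch an fd to non-blocking mode; returns 0 on success, -1 on error */
static int
set_nonblock (int fd)
{
  int flags = fcntl (fd, F_GETFL, 0);

  if (flags < 0)
    return -1;

  return fcntl (fd, F_SETFL, flags | O_NONBLOCK);
}

/* a read that tolerates spurious wakeups: EAGAIN simply means "no data
   after all", so report 0 bytes and wait for the next notification
   (a real program would distinguish this case from EOF) */
static ssize_t
read_some (int fd, void *buf, size_t len)
{
  ssize_t n = read (fd, buf, len);

  if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
    return 0;

  return n;
}
```

With the fd non-blocking, a watcher callback that finds no data simply
returns instead of hanging the whole loop.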
If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order given here). If none are
…
In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

You have to be careful with dup'ed file descriptors: some backends (the
Linux epoll backend is a notable example) cannot handle dup'ed file
descriptors correctly if you register interest in two or more fds
pointing to the same underlying file description (see the section on
dup'ed file descriptors below).

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).
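Forcing a known-to-be-good backend just means passing the corresponding
mask as the C<flags> argument when creating the loop. The enum below
merely mirrors the backend values documented in the backend list above,
so this sketch runs without libev; with libev installed you would include
F<ev.h> instead and call C<ev_default_loop (EVBACKEND_SELECT | EVBACKEND_POLL)>:

```c
/* stand-in definitions mirroring the documented backend values;
   the real ones come from <ev.h> */
enum
{
  EVBACKEND_SELECT = 1,
  EVBACKEND_POLL   = 2,
  EVBACKEND_EPOLL  = 4,
  EVBACKEND_KQUEUE = 8
};

/* the mask to pass as the flags argument to restrict libev to the
   backends that handle dup'ed file descriptors correctly */
static unsigned int
dup_safe_backends (void)
{
  return EVBACKEND_SELECT | EVBACKEND_POLL;
}
```

Because the values are a bit mask, unwanted backends can just as easily
be masked out, as with C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE> above.
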
Another thing you have to watch out for is that it is quite easy to
…
optimisations to libev.

=head3 The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors,
but only events for the underlying file descriptions. That means when you
have C<dup ()>'ed file descriptors or weirder constellations and register
events for them, only one file descriptor might actually receive events.

There is no workaround possible except to avoid registering events
for potentially C<dup ()>'ed file descriptors, or to resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
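The underlying issue is that C<dup ()> produces a second descriptor for
the I<same> file description; the two share state such as the file
offset, which is easy to demonstrate with plain POSIX calls (the path
below is a hypothetical scratch file, no libev involved):

```c
#include <fcntl.h>
#include <unistd.h>

/* write through one descriptor and report the offset as observed
   through its dup (); both refer to one file description, so the
   result is the number of bytes written */
static long
shared_offset_after_write (const char *path)
{
  int fd1 = open (path, O_RDWR | O_CREAT | O_TRUNC, 0600);
  int fd2 = dup (fd1); /* same underlying "file open" */
  long off;

  write (fd1, "hello", 5);               /* moves the shared offset */
  off = (long)lseek (fd2, 0, SEEK_CUR);  /* observed through the dup */

  close (fd1);
  close (fd2);
  unlink (path);

  return off;
}
```

Backends that key their bookkeeping on the file description therefore see
both fds as one object, which is why only one of them might receive
events.
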
=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
…
semantics of C<ev_stat> watchers, which means that libev sometimes needs
to fall back to regular polling again even with inotify, but changes are
usually detected immediately, and if the file exists there will be no
polling.

=head3 Inotify

When C<inotify (7)> support has been compiled into libev (generally only
available on Linux) and present at runtime, it will be used to speed up
change detection where possible. The inotify descriptor will be created
lazily when the first C<ev_stat> watcher is being started.

The presence of inotify does not change the semantics of C<ev_stat>
watchers, except that changes might be detected earlier, and that in some
cases it can be used to avoid making regular C<stat> calls. Even in the
presence of inotify support there are many cases where libev has to
resort to regular C<stat> polling.

(There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times.)

=head3 The special problem of stat time resolution

The C<stat ()> syscall only supports full-second resolution portably, and
even on systems where the resolution is higher, many filesystems still
only support whole seconds.

That means that, if the time is the only thing that changes, you might
miss updates: on the first update, C<ev_stat> detects a change and calls
your callback, which does something. When there is another update within
the same second, C<ev_stat> will be unable to detect it.

The solution to this is to delay acting on a change for a second (or till
the next second boundary), using a roughly one-second delay C<ev_timer>
(C<ev_timer_set (w, 0., 1.01); ev_timer_again (loop, w)>). The C<.01>
is added to work around small timing inconsistencies of some operating
systems.

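The missed-update scenario is easy to reproduce with plain C<stat ()>:
below, C<utimes ()> applies two I<different> sub-second modification
times to a scratch file, yet the seconds-only C<st_mtime> values compare
equal (a POSIX sketch, not libev code; the path is hypothetical):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

/* apply two different sub-second mtimes to path and return whether the
   full-second st_mtime values compare equal (they do) */
static int
mtimes_collide (const char *path)
{
  struct timeval tv[2];
  struct stat st1, st2;
  int fd = open (path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

  close (fd);

  /* first "update", 100ms into some second */
  tv[0].tv_sec  = tv[1].tv_sec  = 1000000000;
  tv[0].tv_usec = tv[1].tv_usec = 100000;
  utimes (path, tv);
  stat (path, &st1);

  /* second "update", 800ms later but within the same second */
  tv[0].tv_usec = tv[1].tv_usec = 900000;
  utimes (path, tv);
  stat (path, &st2);

  unlink (path);

  /* full-second st_mtime cannot distinguish the two updates */
  return st1.st_mtime == st2.st_mtime;
}
```

This is exactly the case the one-second C<ev_timer> workaround above is
meant to paper over.
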
=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)
…
=item const char *path [read-only]

The filesystem path that is being watched.

=back

=head3 Examples

Example: Watch C</etc/passwd> for attribute changes.

  static void
  passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)
…
  }

  ...
  ev_stat passwd;

  ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
  ev_stat_start (loop, &passwd);

Example: Like above, but additionally use a one-second delay so we do not
miss updates (however, frequent updates will delay processing, too, so
one might do the work both on C<ev_stat> callback invocation I<and> on
C<ev_timer> callback invocation).

  static ev_stat passwd;
  static ev_timer timer;

  static void
  timer_cb (EV_P_ ev_timer *w, int revents)
  {
    ev_timer_stop (EV_A_ w);

    /* now it's one second after the most recent passwd change */
  }

  static void
  stat_cb (EV_P_ ev_stat *w, int revents)
  {
    /* reset the one-second timer */
    ev_timer_again (EV_A_ &timer);
  }

  ...
  ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
  ev_stat_start (loop, &passwd);
  ev_timer_init (&timer, timer_cb, 0., 1.01);

=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
…
than enough. If you need to manage thousands of children you might want to
increase this value (I<must> be a power of two).

=item EV_INOTIFY_HASHSIZE

C<ev_stat> watchers use a small hash table to distribute workload by
inotify watch id. The default size is C<16> (or C<1> with C<EV_MINIMAL>),
usually more than enough. If you need to manage thousands of C<ev_stat>
watchers you might want to increase this value (I<must> be a power of
two).

…

=item Starting and stopping timer/periodic watchers: O(log skipped_other_timers)

This means that, when you have a watcher that triggers in one hour and
there are 100 watchers that would trigger before that, then inserting it
will have to skip roughly seven (C<ld 100>) of these watchers.

=item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)

That means that changing a timer costs less than removing/adding it, as
only the relative motion in the event queue has to be paid for.

=item Starting io/check/prepare/idle/signal/child watchers: O(1)

These just add the watcher into an array or at the head of a list.

=item Stopping check/prepare/idle watchers: O(1)

=item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))

These watchers are stored in lists, which then need to be walked to find
the correct watcher to remove. The lists are usually short (you don't
usually have many watchers waiting for the same fd or signal).

=item Finding the next timer in each loop iteration: O(1)

By virtue of using a binary heap, the next timer is always found at the
beginning of the storage array.

=item Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)

A change means an I/O watcher gets started or stopped, which requires
libev to recalculate its status (and possibly tell the kernel, depending
on the backend and whether C<ev_io_set> was used).

=item Activating one watcher (putting it into the pending state): O(1)

=item Priority handling: O(number_of_priorities)

Priorities are implemented by allocating some space for each
priority. When doing priority-based operations, libev usually has to
linearly search all the priorities, but starting/stopping and activating
watchers are O(1) with respect to priority handling.

=back
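
The C<ld 100> figure above is the binary logarithm, i.e. the depth of a
binary heap holding that many timers. A tiny self-contained check
(illustrative only, not libev code):

```c
/* smallest number of doublings needed to reach at least n, i.e. the
   rounded-up binary logarithm - the number of heap levels skipped
   when inserting past n earlier timers */
static int
ld_ceil (int n)
{
  int bits = 0, v = 1;

  while (v < n)
    {
      v <<= 1;
      ++bits;
    }

  return bits;
}
```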


=head1 AUTHOR