…
flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

The default loop is the only loop that can handle C<ev_signal> and
C<ev_child> watchers, and to do this, it always registers a handler
for C<SIGCHLD>. If this is a problem for your app you can either
create a dynamic loop with C<ev_loop_new> that doesn't do that, or you
can simply overwrite the C<SIGCHLD> signal handler I<after> calling
C<ev_default_init>.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

The following flags are supported:

…
While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

On the positive side, ignoring the spurious readiness notifications, this
backend actually performed to specification in all tests and is fully
embeddable, which is a rare feat among the OS-specific backends.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.
…
It is definitely not recommended to use this flag.

=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in reverse order of how they are listed here). If
none are specified, all backends in C<ev_recommended_backends ()> will be
tried.

The most typical usage is like this:

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
…
Like C<ev_default_destroy>, but destroys an event loop created by an
earlier call to C<ev_loop_new>.

=item ev_default_fork ()

This function sets a flag that causes subsequent C<ev_loop> iterations
to reinitialise the kernel state for backends that have one. Despite the
name, you can call it anytime, but it makes most sense after forking, in
the child process (or in both child and parent, but that again makes
little sense). You I<must> call it in the child before using any of the
libev functions, and it will only take effect at the next C<ev_loop>
iteration.

On the other hand, you only need to call this function in the child
process if and only if you want to use the event library in the child. If
you just fork+exec, you don't have to call it at all.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function fits
quite nicely into a call to C<pthread_atfork>:

   pthread_atfork (0, 0, ev_default_fork);

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after a fork, and how you do this is entirely your own problem.

=item int ev_is_default_loop (loop)

Returns true when the given loop actually is the default loop, false otherwise.

=item unsigned int ev_loop_count (loop)

Returns the count of loop iterations for the loop, which is identical to
the number of times libev did poll for new events. It starts at C<0> and
…
Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.

This "unloop state" will be cleared when entering C<ev_loop> again.

=item ev_ref (loop)

=item ev_unref (loop)

Ref/unref can be used to add or remove a reference count on the event
…
returning, ev_unref() after starting, and ev_ref() before stopping it. For
example, libev itself uses this for its internal signal pipe: It is not
visible to the libev user and should not keep C<ev_loop> from exiting if
no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>
(but only if the watcher wasn't active before, or was active before,
respectively).

Example: Create a signal watcher, but keep it from keeping C<ev_loop>
running when nothing else is active.

   struct ev_signal exitsig;
…

=item C<EV_FORK>

The event loop has been resumed in the child process after fork (see
C<ev_fork>).

=item C<EV_ASYNC>

The given async watcher has been asynchronously notified (see C<ev_async>).

=item C<EV_ERROR>

An unspecified error has occurred, the watcher has been stopped. This might
happen because the watcher could not be properly started because libev
…

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_child_init (ev_child *, callback, int pid, int trace)

=item ev_child_set (ev_child *, int pid, int trace)

Configures the watcher to wait for status changes of process C<pid> (or
I<any> process if C<pid> is specified as C<0>). The callback can look
at the C<rstatus> member of the C<ev_child> watcher structure to see
the status word (use the macros from C<sys/wait.h> and see your system's
C<waitpid> documentation). The C<rpid> member contains the pid of the
process causing the status change. C<trace> must be either C<0> (only
activate the watcher when the process terminates) or C<1> (additionally
activate the watcher when the process is stopped or continued).

=item int pid [read-only]

The process id this watcher watches out for, or C<0>, meaning any process id.

…
   static void
   idle_cb (struct ev_loop *loop, struct ev_idle *w, int revents)
   {
     free (w);
     // now do something you wanted to do when the program has
     // no longer anything immediate to do.
   }

   struct ev_idle *idle_watcher = malloc (sizeof (struct ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);
…
believe me.

=back

|
|
=head2 C<ev_async> - how to wake up another event loop

In general, you cannot use an C<ev_loop> from multiple threads or other
asynchronous sources such as signal handlers (as opposed to multiple event
loops - those are of course safe to use in different threads).

Sometimes, however, you need to wake up another event loop you do not
control, for example because it belongs to another thread. This is what
C<ev_async> watchers do: as long as the C<ev_async> watcher is active, you
can signal it by calling C<ev_async_send>, which is thread- and signal
safe.

This functionality is very similar to C<ev_signal> watchers, as signals,
too, are asynchronous in nature, and signals, too, will be compressed
(i.e. the number of callback invocations may be less than the number of
C<ev_async_send> calls).

Unlike C<ev_signal> watchers, C<ev_async> works with any event loop, not
just the default loop.

=head3 Queueing

C<ev_async> does not support queueing of data in any way. The reason
is that the author does not know of a simple (or any) algorithm for a
multiple-writer-single-reader queue that works in all cases and doesn't
need elaborate support such as pthreads.

That means that if you want to queue data, you have to provide your own
queue. But at least I can tell you how you would implement locking around
your queue:

=over 4

=item queueing from a signal handler context

To implement race-free queueing, you simply add to the queue in the
signal handler but you block the signal in the watcher callback. Here is
an example that does that for some fictitious SIGUSR1 handler:

   static ev_async mysig;

   static void
   sigusr1_handler (void)
   {
     sometype data;

     // no locking etc.
     queue_put (data);
     ev_async_send (EV_DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     sometype data;
     sigset_t block, prev;

     sigemptyset (&block);
     sigaddset (&block, SIGUSR1);
     sigprocmask (SIG_BLOCK, &block, &prev);

     while (queue_get (&data))
       process (data);

     if (!sigismember (&prev, SIGUSR1))
       sigprocmask (SIG_UNBLOCK, &block, 0);
   }

(Note: pthreads in theory requires you to use C<pthread_sigmask>
instead of C<sigprocmask> when you use threads, but libev doesn't do it
either...)

=item queueing from a thread context

The strategy for threads is different, as you cannot (easily) block
threads but you can easily preempt them, so to queue safely you need to
employ a traditional mutex lock, such as in this pthread example:

   static ev_async mysig;
   static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

   static void
   otherthread (void)
   {
     sometype data;

     // only need to lock the actual queueing operation
     pthread_mutex_lock (&mymutex);
     queue_put (data);
     pthread_mutex_unlock (&mymutex);

     ev_async_send (EV_DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     sometype data;

     pthread_mutex_lock (&mymutex);

     while (queue_get (&data))
       process (data);

     pthread_mutex_unlock (&mymutex);
   }

=back

|
|
|
|
=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_async_init (ev_async *, callback)

Initialises and configures the async watcher - it has no parameters of any
kind. There is a C<ev_async_set> macro, but using it is utterly pointless,
believe me.

=item ev_async_send (loop, ev_async *)

Sends/signals/activates the given C<ev_async> watcher, that is, feeds
an C<EV_ASYNC> event on the watcher into the event loop. Unlike
C<ev_feed_event>, this call is safe to do in other threads, signal or
similar contexts (see the discussion of C<EV_ATOMIC_T> in the embedding
section below on what exactly this means).

This call incurs the overhead of a syscall only once per loop iteration,
so while the overhead might be noticeable, it doesn't apply to repeated
calls to C<ev_async_send>.

=back

=head1 OTHER FUNCTIONS

There are some other functions of possible interest. Described. Here. Now.

=over 4
…
Example: Define a class with an IO and idle watcher, start one of them in
the constructor.

   class myclass
   {
     ev::io io; void io_cb (ev::io &w, int revents);
     ev::idle idle; void idle_cb (ev::idle &w, int revents);

     myclass (int fd)
     {
       io .set <myclass, &myclass::io_cb > (this);
       idle.set <myclass, &myclass::idle_cb> (this);

       io.start (fd, ev::READ);
     }
   };

=head1 MACRO MAGIC

Libev can be compiled with a variety of options, the most fundamental
…

If defined to be C<1>, libev will compile in support for the Linux inotify
interface to speed up C<ev_stat> watchers. Its actual availability will
be detected at runtime.

=item EV_ATOMIC_T

Libev requires an integer type (suitable for storing C<0> or C<1>) whose
access is atomic with respect to other threads or signal contexts. No
such type is easily found in the C language, so you can provide your own
type that you know is safe for your purposes. It is used both for signal
handler "locking" and for signal and thread safety in C<ev_async>
watchers.

In the absence of this define, libev will use C<sig_atomic_t volatile>
(from F<signal.h>), which is usually good enough on most platforms.

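For example, if you know that aligned C<int> loads and stores are atomic
on your target (an assumption about the platform, not something libev
guarantees), you could override the type before building:

```c
/* hypothetical platform where aligned int access is atomic */
#define EV_ATOMIC_T int volatile
#include "ev.c"
```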
=item EV_H

The name of the F<ev.h> header file used to include it. The default if
undefined is C<"ev.h"> in F<event.h>, F<ev.c> and F<ev++.h>. This can be
used to virtually rename the F<ev.h> header file in case of conflicts.

=item EV_CONFIG_H

If C<EV_STANDALONE> isn't C<1>, this variable can be used to override
F<ev.c>'s idea of where to find the F<config.h> file, similarly to
C<EV_H>, above.

=item EV_EVENT_H

Similarly to C<EV_H>, this macro can be used to override F<event.c>'s idea
of how the F<event.h> header can be found, the default is C<"event.h">.

=item EV_PROTOTYPES

If defined to be C<0>, then F<ev.h> will not define any function
prototypes, but still define all the structs and other symbols. This is
…
defined to be C<0>, then they are not.

=item EV_FORK_ENABLE

If undefined or defined to be C<1>, then fork watchers are supported. If
defined to be C<0>, then they are not.

=item EV_ASYNC_ENABLE

If undefined or defined to be C<1>, then async watchers are supported. If
defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
…
=item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)

That means that changing a timer costs less than removing/adding them
as only the relative motion in the event queue has to be paid for.

=item Starting io/check/prepare/idle/signal/child/fork/async watchers: O(1)

These just add the watcher into an array or at the head of a list.

=item Stopping check/prepare/idle/fork/async watchers: O(1)

=item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))

These watchers are stored in lists, which then need to be walked to find
the correct watcher to remove. The lists are usually short (you don't usually
…
=item Priority handling: O(number_of_priorities)

Priorities are implemented by allocating some space for each
priority. When doing priority-based operations, libev usually has to
linearly search all the priorities, but starting/stopping and activating
watchers becomes O(1) w.r.t. priority handling.

=item Sending an ev_async: O(1)

=item Processing ev_async_send: O(number_of_async_watchers)

=item Processing signals: O(max_signal_number)

Sending involves a syscall I<iff> there were no other C<ev_async_send>
calls in the current loop iteration. Checking for async and signal events
involves iterating over all running async watchers or all signal numbers.

=back


=head1 Win32 platform limitations and workarounds