1 root 1.430 =encoding utf-8
2    
3 root 1.1 =head1 NAME
4    
5     libev - a high performance full-featured event loop written in C
6    
7     =head1 SYNOPSIS
8    
9 root 1.164 #include <ev.h>
10 root 1.1
11 root 1.105 =head2 EXAMPLE PROGRAM
12 root 1.54
13 root 1.164 // a single header file is required
14     #include <ev.h>
15 root 1.54
16 root 1.217 #include <stdio.h> // for puts
17    
18 root 1.164 // every watcher type has its own typedef'd struct
19 root 1.200 // with the name ev_TYPE
20 root 1.164 ev_io stdin_watcher;
21     ev_timer timeout_watcher;
22    
23     // all watcher callbacks have a similar signature
24     // this callback is called when data is readable on stdin
25     static void
26 root 1.198 stdin_cb (EV_P_ ev_io *w, int revents)
27 root 1.164 {
28     puts ("stdin ready");
29     // for one-shot events, one must manually stop the watcher
30     // with its corresponding stop function.
31     ev_io_stop (EV_A_ w);
32    
33 root 1.310 // this causes all nested ev_run's to stop iterating
34     ev_break (EV_A_ EVBREAK_ALL);
35 root 1.164 }
36    
37     // another callback, this time for a time-out
38     static void
39 root 1.198 timeout_cb (EV_P_ ev_timer *w, int revents)
40 root 1.164 {
41     puts ("timeout");
42 root 1.310 // this causes the innermost ev_run to stop iterating
43     ev_break (EV_A_ EVBREAK_ONE);
44 root 1.164 }
45    
46     int
47     main (void)
48     {
49     // use the default event loop unless you have special needs
50 root 1.322 struct ev_loop *loop = EV_DEFAULT;
51 root 1.164
52     // initialise an io watcher, then start it
53     // this one will watch for stdin to become readable
54     ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ);
55     ev_io_start (loop, &stdin_watcher);
56    
57     // initialise a timer watcher, then start it
58     // simple non-repeating 5.5 second timeout
59     ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
60     ev_timer_start (loop, &timeout_watcher);
61    
62     // now wait for events to arrive
63 root 1.310 ev_run (loop, 0);
64 root 1.164
65 root 1.362 // break was called, so exit
66 root 1.164 return 0;
67     }
68 root 1.53
69 root 1.236 =head1 ABOUT THIS DOCUMENT
70    
71     This document describes the libev software package.
72 root 1.1
73 root 1.135 The newest version of this document is also available as an html-formatted
74 root 1.69 web page you might find easier to navigate when reading it for the first
75 root 1.154 time: L<http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod>.
76 root 1.69
77 root 1.236 While this document tries to be as complete as possible in documenting
78     libev, its usage and the rationale behind its design, it is not a tutorial
79     on event-based programming, nor will it introduce event-based programming
80     with libev.
81    
82 sf-exg 1.298 Familiarity with event-based programming techniques in general is assumed
83 root 1.236 throughout this document.
84    
85 root 1.334 =head1 WHAT TO READ WHEN IN A HURRY
86    
87     This manual tries to be very detailed, but unfortunately, this also makes
88     it very long. If you just want to know the basics of libev, I suggest
89 root 1.414 reading L</ANATOMY OF A WATCHER>, then the L</EXAMPLE PROGRAM> above and
90 root 1.413 look up the missing functions in L</GLOBAL FUNCTIONS> and the C<ev_io> and
91     C<ev_timer> sections in L</WATCHER TYPES>.
92 root 1.334
93 root 1.236 =head1 ABOUT LIBEV
94    
95 root 1.1 Libev is an event loop: you register interest in certain events (such as a
96 root 1.92 file descriptor being readable or a timeout occurring), and it will manage
97 root 1.4 these event sources and provide your program with events.
98 root 1.1
99     To do this, it must take more or less complete control over your process
100     (or thread) by executing the I<event loop> handler, and will then
101     communicate events via a callback mechanism.
102    
103     You register interest in certain events by registering so-called I<event
104     watchers>, which are relatively small C structures you initialise with the
105     details of the event, and then hand it over to libev by I<starting> the
106     watcher.
107    
108 root 1.105 =head2 FEATURES
109 root 1.1
110 root 1.447 Libev supports C<select>, C<poll>, the Linux-specific aio and C<epoll>
111     interfaces, the BSD-specific C<kqueue> and the Solaris-specific event port
112     mechanisms for file descriptor events (C<ev_io>), the Linux C<inotify>
113     interface (for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
114 root 1.261 inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)) relative
115     timers (C<ev_timer>), absolute timers with customised rescheduling
116     (C<ev_periodic>), synchronous signals (C<ev_signal>), process status
117     change events (C<ev_child>), and event watchers dealing with the event
118     loop mechanism itself (C<ev_idle>, C<ev_embed>, C<ev_prepare> and
119     C<ev_check> watchers) as well as file watchers (C<ev_stat>) and even
120     limited support for fork events (C<ev_fork>).
121 root 1.54
122     It also is quite fast (see this
123     L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent
124     for example).
125 root 1.1
126 root 1.105 =head2 CONVENTIONS
127 root 1.1
128 root 1.135 Libev is very configurable. In this manual the default (and most common)
129     configuration will be described, which supports multiple event loops. For
130     more info about various configuration options please have a look at
131     B<EMBED> section in this manual. If libev was configured without support
132     for multiple event loops, then all functions taking an initial argument of
133 root 1.274 name C<loop> (which is always of type C<struct ev_loop *>) will not have
134 root 1.135 this argument.
135 root 1.1
136 root 1.105 =head2 TIME REPRESENTATION
137 root 1.1
138 root 1.237 Libev represents time as a single floating point number, representing
139 root 1.317 the (fractional) number of seconds since the (POSIX) epoch (in practice
140 root 1.293 somewhere near the beginning of 1970, details are complicated, don't
141     ask). This type is called C<ev_tstamp>, which is what you should use
142     too. It usually aliases to the C<double> type in C. When you need to do
143     any calculations on it, you should treat it as some floating point value.
144    
145     Unlike what the name component C<stamp> might indicate, it is also used
146     for time differences (e.g. delays) throughout libev.
147 root 1.34
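Example (an illustrative sketch, not part of libev): since C<ev_tstamp> is
just a floating point count of seconds, deadlines and remaining delays can
be computed with plain arithmetic:

    ev_tstamp deadline  = ev_time () + 60.;      /* sixty seconds from now */
    ev_tstamp remaining = deadline - ev_time (); /* remaining delay, may drop below zero */
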
148 root 1.160 =head1 ERROR HANDLING
149    
150     Libev knows three classes of errors: operating system errors, usage errors
151     and internal errors (bugs).
152    
153     When libev catches an operating system error it cannot handle (for example
154 root 1.161 a system call indicating a condition libev cannot fix), it calls the callback
155 root 1.160 set via C<ev_set_syserr_cb>, which is supposed to fix the problem or
156     abort. The default is to print a diagnostic message and to call C<abort
157     ()>.
158    
159     When libev detects a usage error such as a negative timer interval, then
160     it will print a diagnostic message and abort (via the C<assert> mechanism,
161     so C<NDEBUG> will disable this checking): these are programming errors in
162     the libev caller and need to be fixed there.
163    
164 root 1.455 Via the C<EV_FREQUENT> macro you can compile in and/or enable extensive
165     consistency checking code inside libev that can be used to check for
166     internal inconsistencies, usually caused by application bugs.
167    
168     Libev also has a few internal error-checking C<assert>ions. These do not
169     trigger under normal circumstances, as they indicate either a bug in libev
170     or worse.
171 root 1.160
172    
173 root 1.17 =head1 GLOBAL FUNCTIONS
174    
175 root 1.18 These functions can be called anytime, even before initialising the
176     library in any way.
177    
178 root 1.1 =over 4
179    
180     =item ev_tstamp ev_time ()
181    
182 root 1.26 Returns the current time as libev would use it. Please note that the
183     C<ev_now> function is usually faster and also often returns the timestamp
184 sf-exg 1.321 you actually want to know. Also interesting is the combination of
185 sf-exg 1.384 C<ev_now_update> and C<ev_now>.
186 root 1.1
187 root 1.97 =item ev_sleep (ev_tstamp interval)
188    
189 root 1.371 Sleep for the given interval: The current thread will be blocked
190     until either it is interrupted or the given time interval has
191     passed (approximately - it might return a bit earlier even if not
192     interrupted). Returns immediately if C<< interval <= 0 >>.
193    
194     Basically this is a sub-second-resolution C<sleep ()>.
195    
196     The range of the C<interval> is limited - libev only guarantees to work
197     with sleep times of up to one day (C<< interval <= 86400 >>).
198 root 1.97
199 root 1.1 =item int ev_version_major ()
200    
201     =item int ev_version_minor ()
202    
203 root 1.80 You can find out the major and minor ABI version numbers of the library
204 root 1.1 you linked against by calling the functions C<ev_version_major> and
205     C<ev_version_minor>. If you want, you can compare against the global
206     symbols C<EV_VERSION_MAJOR> and C<EV_VERSION_MINOR>, which specify the
207     version of the library your program was compiled against.
208    
209 root 1.80 These version numbers refer to the ABI version of the library, not the
210     release version.
211 root 1.79
212 root 1.9 Usually, it's a good idea to terminate if the major versions mismatch,
213 root 1.79 as this indicates an incompatible change. Minor versions are usually
214 root 1.1 compatible to older versions, so a larger minor version alone is usually
215     not a problem.
216    
217 root 1.54 Example: Make sure we haven't accidentally been linked against the wrong
218 root 1.320 version (note, however, that this will not detect other ABI mismatches,
219     such as LFS or reentrancy).
220 root 1.34
221 root 1.164 assert (("libev version mismatch",
222     ev_version_major () == EV_VERSION_MAJOR
223     && ev_version_minor () >= EV_VERSION_MINOR));
224 root 1.34
225 root 1.31 =item unsigned int ev_supported_backends ()
226    
227     Return the set of all backends (i.e. their corresponding C<EV_BACKEND_*>
228     value) compiled into this binary of libev (independent of their
229     availability on the system you are running on). See C<ev_default_loop> for
230     a description of the set values.
231    
232 root 1.34 Example: make sure we have the epoll method, because yeah this is cool and
233     a must have and can we have a torrent of it please!!!11
234    
235 root 1.164 assert (("sorry, no epoll, no sex",
236     ev_supported_backends () & EVBACKEND_EPOLL));
237 root 1.34
238 root 1.31 =item unsigned int ev_recommended_backends ()
239    
240 root 1.318 Return the set of all backends compiled into this binary of libev and
241     also recommended for this platform, meaning it will work for most file
242     descriptor types. This set is often smaller than the one returned by
243     C<ev_supported_backends>, as for example kqueue is broken on most BSDs
244     and will not be auto-detected unless you explicitly request it (assuming
245     you know what you are doing). This is the set of backends that libev will
246     probe for if you specify no backends explicitly.
247 root 1.31
248 root 1.35 =item unsigned int ev_embeddable_backends ()
249    
250     Returns the set of backends that are embeddable in other event loops. This
251 root 1.319 value is platform-specific but can include backends not available on the
252     current system. To find which embeddable backends might be supported on
253     the current system, you would need to look at C<ev_embeddable_backends ()
254     & ev_supported_backends ()>, likewise for recommended ones.
255 root 1.35
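Example (a hedged sketch - C<kqueue_loop> is just an illustrative name):
check whether kqueue could be embedded on the running system before
creating a loop for it:

    unsigned int embeddable = ev_embeddable_backends () & ev_supported_backends ();

    if (embeddable & EVBACKEND_KQUEUE)
      {
        /* kqueue is both embeddable and available - try to create a loop using it */
        struct ev_loop *kqueue_loop = ev_loop_new (EVBACKEND_KQUEUE);
        ...
      }
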
256     See the description of C<ev_embed> watchers for more info.
257    
258 root 1.401 =item ev_set_allocator (void *(*cb)(void *ptr, long size) throw ())
259 root 1.1
260 root 1.59 Sets the allocation function to use (the prototype is similar - the
261 root 1.145 semantics are identical to the C<realloc> C89/SuS/POSIX function). It is
262     used to allocate and free memory (no surprises here). If it returns zero
263     when memory needs to be allocated (C<size != 0>), the library might abort
264     or take some potentially destructive action.
265    
266     Since some systems (at least OpenBSD and Darwin) fail to implement
267     correct C<realloc> semantics, libev will use a wrapper around the system
268     C<realloc> and C<free> functions by default.
269 root 1.1
270     You could override this function in high-availability programs to, say,
271     free some memory if it cannot allocate memory, to use a special allocator,
272     or even to sleep a while and retry until some memory is available.
273    
274 root 1.446 Example: The following is the C<realloc> function that libev itself uses
275     which should work with C<realloc> and C<free> functions of all kinds and
276     is probably a good basis for your own implementation.
277    
278     static void *
279     ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
280     {
281     if (size)
282     return realloc (ptr, size);
283    
284     free (ptr);
285     return 0;
286     }
287    
288 root 1.54 Example: Replace the libev allocator with one that waits a bit and then
289 root 1.446 retries.
290 root 1.34
291     static void *
292 root 1.52 persistent_realloc (void *ptr, long size)
293 root 1.34 {
294 root 1.446 if (!size)
295     {
296     free (ptr);
297     return 0;
298     }
299    
300 root 1.34 for (;;)
301     {
302     void *newptr = realloc (ptr, size);
303    
304     if (newptr)
305     return newptr;
306    
307     sleep (60);
308     }
309     }
310    
311     ...
312     ev_set_allocator (persistent_realloc);
313    
314 root 1.401 =item ev_set_syserr_cb (void (*cb)(const char *msg) throw ())
315 root 1.1
316 root 1.161 Set the callback function to call on a retryable system call error (such
317 root 1.1 as failed select, poll, epoll_wait). The message is a printable string
318     indicating the system call or subsystem causing the problem. If this
319 root 1.161 callback is set, then libev will expect it to remedy the situation, no
320 root 1.7 matter what, when it returns. That is, libev will generally retry the
321 root 1.1 requested operation, or, if the condition doesn't go away, do bad stuff
322     (such as abort).
323    
324 root 1.54 Example: This is basically the same thing that libev does internally, too.
325 root 1.34
326     static void
327     fatal_error (const char *msg)
328     {
329     perror (msg);
330     abort ();
331     }
332    
333     ...
334     ev_set_syserr_cb (fatal_error);
335    
336 root 1.349 =item ev_feed_signal (int signum)
337    
338     This function can be used to "simulate" a signal receive. It is completely
339     safe to call this function at any time, from any context, including signal
340     handlers or random threads.
341    
342 sf-exg 1.350 Its main use is to customise signal handling in your process, especially
343 root 1.349 in the presence of threads. For example, you could block signals
344     by default in all threads (and specifying C<EVFLAG_NOSIGMASK> when
345     creating any loops), and in one thread, use C<sigwait> or any other
346     mechanism to wait for signals, then "deliver" them to libev by calling
347     C<ev_feed_signal>.
348    
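Example (a minimal sketch of the scheme described above, assuming
C<SIGINT> and C<SIGTERM> are already blocked in all threads, e.g. via
C<pthread_sigmask>, that the loops were created with C<EVFLAG_NOSIGMASK>,
and that the corresponding C<ev_signal> watchers have been started
elsewhere): a dedicated thread waits for the signals with C<sigwait> and
forwards them to libev:

    static void *
    signal_thread (void *arg)
    {
      sigset_t set;
      sigemptyset (&set);
      sigaddset (&set, SIGINT);
      sigaddset (&set, SIGTERM);

      for (;;)
        {
          int signum;

          if (sigwait (&set, &signum) == 0)
            ev_feed_signal (signum); /* safe to call from any thread */
        }

      return 0;
    }
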
349 root 1.1 =back
350    
351 root 1.322 =head1 FUNCTIONS CONTROLLING EVENT LOOPS
352 root 1.1
353 root 1.310 An event loop is described by a C<struct ev_loop *> (the C<struct> is
354 root 1.311 I<not> optional in this case unless libev 3 compatibility is disabled, as
355     libev 3 had an C<ev_loop> function colliding with the struct name).
356 root 1.200
357     The library knows two types of such loops, the I<default> loop, which
358 root 1.331 supports child process events, and dynamically created event loops which
359     do not.
360 root 1.1
361     =over 4
362    
363     =item struct ev_loop *ev_default_loop (unsigned int flags)
364    
365 root 1.322 This returns the "default" event loop object, which is what you should
366     normally use when you just need "the event loop". Event loop objects and
367     the C<flags> parameter are described in more detail in the entry for
368     C<ev_loop_new>.
369    
370     If the default loop is already initialised then this function simply
371     returns it (and ignores the flags; if that is troubling you, check
372     C<ev_backend ()> afterwards). Otherwise it will create it with the given
373     flags, which should almost always be C<0>, unless the caller is also the
374     one calling C<ev_run> or otherwise qualifies as "the main program".
375 root 1.1
376     If you don't know what event loop to use, use the one returned from this
377 root 1.322 function (or via the C<EV_DEFAULT> macro).
378 root 1.1
379 root 1.139 Note that this function is I<not> thread-safe, so if you want to use it
380 root 1.322 from multiple threads, you have to employ some kind of mutex (note also
381     that this case is unlikely, as loops cannot be shared easily between
382     threads anyway).
383    
384     The default loop is the only loop that can handle C<ev_child> watchers,
385     and to do this, it always registers a handler for C<SIGCHLD>. If this is
386     a problem for your application you can either create a dynamic loop with
387     C<ev_loop_new> which doesn't do that, or you can simply overwrite the
388     C<SIGCHLD> signal handler I<after> calling C<ev_default_init>.
389 root 1.139
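Example (a hypothetical sketch - C<my_sigchld_handler> is not part of
libev): install your own C<SIGCHLD> handler after the default loop has
been initialised:

    ev_default_loop (0);                  /* installs libev's SIGCHLD handler first */
    signal (SIGCHLD, my_sigchld_handler); /* now replace it with your own */
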
390 root 1.322 Example: This is the most typical usage.
391    
392     if (!ev_default_loop (0))
393     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
394    
395     Example: Restrict libev to the select and poll backends, and do not allow
396     environment settings to be taken into account:
397    
398     ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);
399    
400     =item struct ev_loop *ev_loop_new (unsigned int flags)
401    
402     This will create and initialise a new event loop object. If the loop
403     could not be initialised, returns false.
404    
405 root 1.343 This function is thread-safe, and one common way to use libev with
406     threads is indeed to create one loop per thread, and using the default
407     loop in the "main" or "initial" thread.
408 root 1.118
409 root 1.1 The flags argument can be used to specify special behaviour or specific
410 root 1.33 backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).
411 root 1.1
412 root 1.33 The following flags are supported:
413 root 1.1
414     =over 4
415    
416 root 1.10 =item C<EVFLAG_AUTO>
417 root 1.1
418 root 1.9 The default flags value. Use this if you have no clue (it's the right
419 root 1.1 thing, believe me).
420    
421 root 1.10 =item C<EVFLAG_NOENV>
422 root 1.1
423 root 1.161 If this flag bit is or'ed into the flag value (or the program runs setuid
424 root 1.8 or setgid) then libev will I<not> look at the environment variable
425     C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
426     override the flags completely if it is found in the environment. This is
427 root 1.427 useful to try out specific backends to test their performance, to work
428     around bugs, or to make libev threadsafe (accessing environment variables
429     cannot be done in a threadsafe way, but usually it works if no other
430     thread modifies them).
431 root 1.1
432 root 1.62 =item C<EVFLAG_FORKCHECK>
433    
434 root 1.291 Instead of calling C<ev_loop_fork> manually after a fork, you can also
435     make libev check for a fork in each iteration by enabling this flag.
436 root 1.62
437     This works by calling C<getpid ()> on every iteration of the loop,
438     and thus this might slow down your event loop if you do a lot of loop
439 ayin 1.65 iterations and little real work, but is usually not noticeable (on my
440 root 1.441 GNU/Linux system for example, C<getpid> is actually a simple 5-insn
441     sequence without a system call and thus I<very> fast, but my GNU/Linux
442     system also has C<pthread_atfork> which is even faster). (Update: glibc
443     version 2.25 apparently removed the C<getpid> optimisation again).
444 root 1.62
445     The big advantage of this flag is that you can forget about fork (and
446 root 1.436 forget about forgetting to tell libev about forking, although you still
447     have to ignore C<SIGPIPE>) when you use this flag.
448 root 1.62
449 root 1.161 This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
450 root 1.62 environment variable.
451    
452 root 1.260 =item C<EVFLAG_NOINOTIFY>
453    
454     When this flag is specified, then libev will not attempt to use the
455 root 1.340 I<inotify> API for its C<ev_stat> watchers. Apart from debugging and
456 root 1.260 testing, this flag can be useful to conserve inotify file descriptors, as
457     otherwise each loop using C<ev_stat> watchers consumes one inotify handle.
458    
459 root 1.277 =item C<EVFLAG_SIGNALFD>
460 root 1.260
461 root 1.277 When this flag is specified, then libev will attempt to use the
462 root 1.340 I<signalfd> API for its C<ev_signal> (and C<ev_child>) watchers. This API
463 root 1.278 delivers signals synchronously, which makes it both faster and might make
464     it possible to get the queued signal data. It can also simplify signal
465     handling with threads, as long as you properly block signals in your
466     threads that are not interested in handling them.
467 root 1.277
468     Signalfd will not be used by default as this changes your signal mask, and
469     there are a lot of shoddy libraries and programs (glib's threadpool for
470     example) that can't properly initialise their signal masks.
471 root 1.260
472 root 1.349 =item C<EVFLAG_NOSIGMASK>
473    
474     When this flag is specified, then libev will avoid to modify the signal
475 sf-exg 1.374 mask. Specifically, this means you have to make sure signals are unblocked
476 root 1.349 when you want to receive them.
477    
478     This behaviour is useful when you want to do your own signal handling, or
479     want to handle signals only in specific threads and want to avoid libev
480     unblocking the signals.
481    
482 root 1.360 It's also required by POSIX in a threaded program, as libev calls
483     C<sigprocmask>, whose behaviour is officially unspecified.
484    
485 root 1.458 =item C<EVFLAG_NOTIMERFD>
486    
487     When this flag is specified, then libev will avoid using a C<timerfd> to
488     detect time jumps. It will still be able to detect time jumps, but will
489     take longer and be less accurate in doing so; the benefit is that it
490     saves a file descriptor per loop.
491    
492     The current implementation only tries to use a C<timerfd> when the first
493     C<ev_periodic> watcher is started and falls back on other methods if it
494     cannot be created, but this behaviour might change in the future.
495 root 1.349
496 root 1.31 =item C<EVBACKEND_SELECT> (value 1, portable select backend)
497 root 1.1
498 root 1.29 This is your standard select(2) backend. Not I<completely> standard, as
499     libev tries to roll its own fd_set with no limits on the number of fds,
500     but if that fails, expect a fairly low limit on the number of fds when
501 root 1.102 using this backend. It doesn't scale too well (O(highest_fd)), but it's
502     usually the fastest backend for a low number of (low-numbered :) fds.
503    
504     To get good performance out of this backend you need a high amount of
505 root 1.161 parallelism (most of the file descriptors should be busy). If you are
506 root 1.102 writing a server, you should C<accept ()> in a loop to accept as many
507     connections as possible during one iteration. You might also want to have
508     a look at C<ev_set_io_collect_interval ()> to increase the amount of
509 root 1.155 readiness notifications you get per iteration.
510 root 1.1
511 root 1.179 This backend maps C<EV_READ> to the C<readfds> set and C<EV_WRITE> to the
512     C<writefds> set (and to work around Microsoft Windows bugs, also onto the
513     C<exceptfds> set on that platform).
514    
515 root 1.31 =item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)
516 root 1.1
517 root 1.102 And this is your standard poll(2) backend. It's more complicated
518     than select, but handles sparse fds better and has no artificial
519     limit on the number of fds you can use (except it will slow down
520     considerably with a lot of inactive fds). It scales similarly to select,
521     i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
522     performance tips.
523 root 1.1
524 root 1.179 This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
525     C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.
526    
527 root 1.31 =item C<EVBACKEND_EPOLL> (value 4, Linux)
528 root 1.1
529 root 1.454 Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
530 root 1.272 kernels).
531    
532 root 1.368 For few fds, this backend is a bit slower than poll and select, but
533     it scales phenomenally better. While poll and select usually scale like
534     O(total_fds) where total_fds is the total number of fds (or the highest
535     fd), epoll scales either O(1) or O(active_fds).
536 root 1.205
537 root 1.210 The epoll mechanism deserves honorable mention as the most misdesigned
538     of the more advanced event mechanisms: mere annoyances include silently
539     dropping file descriptors, requiring a system call per change per file
540 root 1.337 descriptor (and unnecessary guessing of parameters), problems with dup,
541 root 1.338 returning before the timeout value, resulting in additional iterations
542     (and only giving 5ms accuracy while select on the same platform gives
543     0.1ms) and so on. The biggest issue is fork races, however - if a program
544     forks then I<both> parent and child process have to recreate the epoll
545     set, which can take considerable time (one syscall per file descriptor)
546     and is of course hard to detect.
547 root 1.1
548 root 1.370 Epoll is also notoriously buggy - embedding epoll fds I<should> work,
549     but of course I<doesn't>, and epoll just loves to report events for
550     totally I<different> file descriptors (even already closed ones, so
551     one cannot even remove them from the set) than registered in the set
552     (especially on SMP systems). Libev tries to counter these spurious
553     notifications by employing an additional generation counter and comparing
554     that against the events to filter out spurious ones, recreating the set
555 sf-exg 1.374 when required. Epoll also erroneously rounds down timeouts, but gives you
556 root 1.370 no way to know when and by how much, so sometimes you have to busy-wait
557     because epoll returns immediately despite a nonzero timeout. And last
558 root 1.306 but not least, it also refuses to work with some file descriptors which work
559     perfectly fine with C<select> (files, many character devices...).
560 root 1.204
561 root 1.370 Epoll is truly the train wreck among event poll mechanisms, a frankenpoll,
562     cobbled together in a hurry, no thought to design or interaction with
563     others. Oh, the pain, will it ever stop...
564 root 1.338
565 root 1.94 While stopping, setting and starting an I/O watcher in the same iteration
566 root 1.210 will result in some caching, there is still a system call per such
567     incident (because the same I<file descriptor> could point to a different
568     I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed
569     file descriptors might not work very well if you register events for both
570     file descriptors.
571 root 1.29
572 root 1.102 Best performance from this backend is achieved by not unregistering all
573 root 1.183 watchers for a file descriptor until it has been closed, if possible,
574     i.e. keep at least one watcher active per fd at all times. Stopping and
575     starting a watcher (without re-setting it) also usually doesn't cause
576 root 1.206 extra overhead. A fork can both result in spurious notifications as well
577     as in libev having to destroy and recreate the epoll object, which can
578     take considerable time and thus should be avoided.
579 root 1.102
580 root 1.215 All this means that, in practice, C<EVBACKEND_SELECT> can be as fast or
581     faster than epoll for maybe up to a hundred file descriptors, depending on
582 root 1.214 the usage. So sad.
583 root 1.213
584 root 1.161 While nominally embeddable in other event loops, this feature is broken in
585 root 1.447 a lot of kernel revisions, but probably(!) works in current versions.
586    
587     This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
588     C<EVBACKEND_POLL>.
589    
590     =item C<EVBACKEND_LINUXAIO> (value 64, Linux)
591    
592 root 1.454 Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
593 root 1.452 io_submit(2) >>) event interface available in post-4.18 kernels (but libev
594     only tries to use it in 4.19+).
595    
596 root 1.454 This is another Linux train wreck of an event interface.
597 root 1.447
598     If this backend works for you (as of this writing, it was very
599 root 1.454 experimental), it is the best event interface available on Linux and might
600 root 1.448 be well worth enabling it - if it isn't available in your kernel this will
601     be detected and this backend will be skipped.
602    
603     This backend can batch oneshot requests and supports a user-space ring
604     buffer to receive events. It also doesn't suffer from most of the design
605 root 1.452 problems of epoll (such as not being able to remove event sources from
606     the epoll set), and generally sounds too good to be true. Because, this
607 root 1.454 being the Linux kernel, of course it suffers from a whole new set of
608 root 1.452 limitations, forcing you to fall back to epoll, inheriting all its design
609     issues.
610 root 1.447
611     For one, it is not easily embeddable (but probably could be done using
612 root 1.448 an event fd at some extra overhead). It also is subject to a system wide
613 root 1.454 limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
614 root 1.452 requests are left, this backend will be skipped during initialisation, and
615     will switch to epoll when the loop is active.
616 root 1.448
617 root 1.452 Most problematic in practice, however, is that not all file descriptors
618 root 1.454 work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
619     files, F</dev/null> and many others are supported, but ttys do not work
620 root 1.450 properly (a known bug that the kernel developers don't care about, see
621     L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
622     (yet?) a generic event polling interface.
623 root 1.448
624 root 1.454 Overall, it seems the Linux developers just don't want it to have a
625 root 1.451 generic event handling mechanism other than C<select> or C<poll>.
626    
627 root 1.452 To work around all these problems, the current version of libev uses its
628     epoll backend as a fallback for file descriptor types that do not work. Or
629     falls back completely to epoll if the kernel acts up.
630 root 1.102
631 root 1.179 This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
632     C<EVBACKEND_POLL>.
633    
634 root 1.31 =item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)
635 root 1.29
636 root 1.453 Kqueue deserves special mention, as at the time this backend was
637     implemented, it was broken on all BSDs except NetBSD (usually it doesn't
638     work reliably with anything but sockets and pipes, except on Darwin,
639     where of course it's completely useless). Unlike epoll, however, whose
640     brokenness is by design, these kqueue bugs can be (and mostly have been)
641     fixed without API changes to existing programs. For this reason it's not
642     being "auto-detected" on all platforms unless you explicitly specify it
643     in the flags (i.e. using C<EVBACKEND_KQUEUE>) or libev was compiled on a
644     known-to-be-good (-enough) system like NetBSD.
645 root 1.29
646 root 1.100 You still can embed kqueue into a normal poll or select backend and use it
647     only for sockets (after having made sure that sockets work with kqueue on
648     the target platform). See C<ev_embed> watchers for more info.
649    
650 root 1.29 It scales in the same way as the epoll backend, but the interface to the
651 root 1.100 kernel is more efficient (which says nothing about its actual speed, of
652     course). While stopping, setting and starting an I/O watcher does never
653 root 1.161 cause an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
654 root 1.398 two event changes per incident. Support for C<fork ()> is very bad (you
655 root 1.454 might have to leak fds on fork, but it's more sane than epoll) and it
656 root 1.422 drops fds silently in similarly hard-to-detect cases.
657 root 1.29
658 root 1.102 This backend usually performs well under most conditions.
659    
660     While nominally embeddable in other event loops, this doesn't work
661     everywhere, so you might need to test for this. And since it is broken
662     almost everywhere, you should only use it when you have a lot of sockets
663     (for which it usually works), by embedding it into another event loop
664 root 1.223 (e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> (but C<poll> is of course
665     also broken on OS X)) and, did I mention it, using it only for sockets.
666 root 1.102
667 root 1.179 This backend maps C<EV_READ> into an C<EVFILT_READ> kevent with
668     C<NOTE_EOF>, and C<EV_WRITE> into an C<EVFILT_WRITE> kevent with
669     C<NOTE_EOF>.
670    
671 root 1.31 =item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)
672 root 1.29
673 root 1.102 This is not implemented yet (and might never be, unless you send me an
674     implementation). According to reports, C</dev/poll> only supports sockets
675     and is not embeddable, which would limit the usefulness of this backend
676     immensely.
677 root 1.29
678 root 1.31 =item C<EVBACKEND_PORT> (value 32, Solaris 10)
679 root 1.29
680 root 1.469 This uses the Solaris 10 event port mechanism. As with everything on
681     Solaris, it's really slow, but it still scales very well (O(active_fds)).
682 root 1.29
683 root 1.102 While this backend scales well, it requires one system call per active
684     file descriptor per loop iteration. For small and medium numbers of file
685     descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
686     might perform better.
687    
688 root 1.351 On the positive side, this backend actually performed fully to
689     specification in all tests and is fully embeddable, which is a rare feat
690     among the OS-specific backends (I vastly prefer correctness over speed
691     hacks).
692    
693 root 1.352 On the negative side, the interface is I<bizarre> - so bizarre that
694     even sun itself gets it wrong in their code examples: The event polling
695 root 1.375 function sometimes returns events to the caller even though an error
696 root 1.353 occurred, but with no indication whether it has done so or not (yes, it's
697 root 1.375 even documented that way) - deadly for edge-triggered interfaces where you
698     absolutely have to know whether an event occurred or not because you have
699     to re-arm the watcher.
700 root 1.352
701     Fortunately libev seems to be able to work around these idiocies.
702 root 1.117
703 root 1.179 This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
704     C<EVBACKEND_POLL>.
705    
706 root 1.31 =item C<EVBACKEND_ALL>
707 root 1.29
708     Try all backends (even potentially broken ones that wouldn't be tried
709     with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
710 root 1.31 C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.
711 root 1.1
712 root 1.349 It is definitely not recommended to use this flag, use whatever
713     C<ev_recommended_backends ()> returns, or simply do not specify a backend
714     at all.
715    
716     =item C<EVBACKEND_MASK>
717    
718     Not a backend at all, but a mask to select all backend bits from a
719     C<flags> value, in case you want to mask out any backends from a flags
720     value (e.g. when modifying the C<LIBEV_FLAGS> environment variable).
721 root 1.102
722 root 1.1 =back
723    
724 root 1.260 If one or more of the backend flags are or'ed into the flags value,
725     then only these backends will be tried (in the reverse order as listed
726     here). If none are specified, all backends in C<ev_recommended_backends
727     ()> will be tried.
728 root 1.29
729 root 1.54 Example: Try to create an event loop that uses epoll and nothing else.
730 root 1.34
731 root 1.164 struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
732     if (!epoller)
733     fatal ("no epoll found here, maybe it hides under your chair");
734 root 1.34
735 root 1.323 Example: Use whatever libev has to offer, but make sure that kqueue is
736     used if available.
737    
738     struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);
739    
740 root 1.447 Example: Similarly, on Linux, you might want to take advantage of the
741     Linux AIO backend if possible, but fall back to something else if that
742     isn't available.
743    
744     struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);
745    
746 root 1.322 =item ev_loop_destroy (loop)
747 root 1.1
748 root 1.322 Destroys an event loop object (frees all memory and kernel state
749     etc.). None of the active event watchers will be stopped in the normal
750     sense, so e.g. C<ev_is_active> might still return true. It is your
751     responsibility to either stop all watchers cleanly yourself I<before>
752     calling this function, or cope with the fact afterwards (which is usually
753     the easiest thing, you can just ignore the watchers and/or C<free ()> them
754     for example).
755 root 1.1
756 root 1.203 Note that certain global state, such as signal state (and installed signal
757     handlers), will not be freed by this function, and related watchers (such
758     as signal and child watchers) would need to be stopped manually.
759 root 1.87
760 root 1.322 This function is normally used on loop objects allocated by
761     C<ev_loop_new>, but it can also be used on the default loop returned by
762     C<ev_default_loop>, in which case it is not thread-safe.
763    
764     Note that it is not advisable to call this function on the default loop
765 root 1.340 except in the rare occasion where you really need to free its resources.
766 root 1.322 If you need dynamically allocated loops it is better to use C<ev_loop_new>
767     and C<ev_loop_destroy>.
768 root 1.87
769 root 1.322 =item ev_loop_fork (loop)
770 root 1.1
771 root 1.433 This function sets a flag that causes subsequent C<ev_run> iterations
772     to reinitialise the kernel state for backends that have one. Despite
773     the name, you can call it anytime you are allowed to start or stop
774     watchers (except inside an C<ev_prepare> callback), but it makes most
775     sense after forking, in the child process. You I<must> call it (or use
776     C<EVFLAG_FORKCHECK>) in the child before resuming or calling C<ev_run>.
777 root 1.119
778 root 1.437 In addition, if you want to reuse a loop (via this function or
779 root 1.436 C<EVFLAG_FORKCHECK>), you I<also> have to ignore C<SIGPIPE>.
780    
781 root 1.429 Again, you I<have> to call it on I<any> loop that you want to re-use after
782 root 1.291 a fork, I<even if you do not plan to use the loop in the parent>. This is
783     because some kernel interfaces *cough* I<kqueue> *cough* do funny things
784     during fork.
785    
786 root 1.119 On the other hand, you only need to call this function in the child
787 root 1.310 process if and only if you want to use the event loop in the child. If
788     you just fork+exec or create a new loop in the child, you don't have to
789     call it at all (in fact, C<epoll> is so badly broken that it makes a
790     difference, but libev will usually detect this case on its own and do a
791     costly reset of the backend).
792 root 1.1
793 root 1.9 The function itself is quite fast and it's usually not a problem to call
794 root 1.322 it just in case after a fork.
795 root 1.1
796 root 1.322 Example: Automate calling C<ev_loop_fork> on the default loop when
797     using pthreads.
798 root 1.1
799 root 1.322 static void
800     post_fork_child (void)
801     {
802     ev_loop_fork (EV_DEFAULT);
803     }
804 root 1.1
805 root 1.322 ...
806     pthread_atfork (0, 0, post_fork_child);
807 root 1.1
808 root 1.131 =item int ev_is_default_loop (loop)
809    
810 root 1.183 Returns true when the given loop is, in fact, the default loop, and false
811     otherwise.
812 root 1.131
813 root 1.291 =item unsigned int ev_iteration (loop)
814 root 1.66
815 root 1.310 Returns the current iteration count for the event loop, which is identical
816     to the number of times libev did poll for new events. It starts at C<0>
817     and happily wraps around with enough iterations.
818 root 1.66
819     This value can sometimes be useful as a generation counter of sorts (it
820     "ticks" the number of loop iterations), as it roughly corresponds with
821 root 1.291 C<ev_prepare> and C<ev_check> calls - and is incremented between the
822     prepare and check phases.
823 root 1.66
824 root 1.291 =item unsigned int ev_depth (loop)
825 root 1.247
826 root 1.310 Returns the number of times C<ev_run> was entered minus the number of
827 root 1.343 times C<ev_run> was exited normally, in other words, the recursion depth.
828 root 1.247
829 root 1.310 Outside C<ev_run>, this number is zero. In a callback, this number is
830     C<1>, unless C<ev_run> was invoked recursively (or from another thread),
831 root 1.247 in which case it is higher.
832    
833 root 1.343 Leaving C<ev_run> abnormally (setjmp/longjmp, cancelling the thread,
834     throwing an exception etc.), doesn't count as "exit" - consider this
835     as a hint to avoid such ungentleman-like behaviour unless it's really
836     convenient, in which case it is fully supported.
837 root 1.247
838 root 1.31 =item unsigned int ev_backend (loop)
839 root 1.1
840 root 1.31 Returns one of the C<EVBACKEND_*> flags indicating the event backend in
841 root 1.1 use.
842    
843 root 1.9 =item ev_tstamp ev_now (loop)
844 root 1.1
845     Returns the current "event loop time", which is the time the event loop
846 root 1.34 received events and started processing them. This timestamp does not
847     change as long as callbacks are being processed, and this is also the base
848     time used for relative timers. You can treat it as the timestamp of the
849 root 1.92 event occurring (or more correctly, libev finding out about it).
850 root 1.1
851 root 1.176 =item ev_now_update (loop)
852    
853     Establishes the current time by querying the kernel, updating the time
854     returned by C<ev_now ()> in the process. This is a costly operation and
855 root 1.310 is usually done automatically within C<ev_run ()>.
856 root 1.176
857     This function is rarely useful, but when some event callback runs for a
858     very long time without entering the event loop, updating libev's idea of
859     the current time is a good idea.
860    
861 root 1.413 See also L</The special problem of time updates> in the C<ev_timer> section.
862 root 1.176
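Example (an illustrative sketch, reusing the watcher names from the
example program above): after a lengthy calculation inside a callback,
resynchronise libev's notion of time before starting a relative timer, so
the five seconds are measured from the real current time:

    ... do a lengthy calculation inside some callback ...

    ev_now_update (loop); /* update ev_now () to the real current time */
    ev_timer_init (&timeout_watcher, timeout_cb, 5., 0.);
    ev_timer_start (loop, &timeout_watcher);
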
863 root 1.231 =item ev_suspend (loop)
864    
865     =item ev_resume (loop)
866    
867 root 1.310 These two functions suspend and resume an event loop, for use when the
868     loop is not used for a while and timeouts should not be processed.
869 root 1.231
870     A typical use case would be an interactive program such as a game: When
871     the user presses C<^Z> to suspend the game and resumes it an hour later it
872     would be best to handle timeouts as if no time had actually passed while
873     the program was suspended. This can be achieved by calling C<ev_suspend>
874     in your C<SIGTSTP> handler, sending yourself a C<SIGSTOP> and calling
875     C<ev_resume> directly afterwards to resume timer processing.
876    
877     Effectively, all C<ev_timer> watchers will be delayed by the time spent
878     between C<ev_suspend> and C<ev_resume>, and all C<ev_periodic> watchers
879     will be rescheduled (that is, they will lose any events that would have
880 sf-exg 1.298 occurred while suspended).
881 root 1.231
882     After calling C<ev_suspend> you B<must not> call I<any> function on the
883     given loop other than C<ev_resume>, and you B<must not> call C<ev_resume>
884     without a previous call to C<ev_suspend>.
885    
886     Calling C<ev_suspend>/C<ev_resume> has the side effect of updating the
887     event loop time (see C<ev_now_update>).
888    
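Example (a sketch of the C<SIGTSTP> scheme described above, using the
default loop):

    static void
    sigtstp_handler (int signum)
    {
      ev_suspend (EV_DEFAULT);
      raise (SIGSTOP); /* actually stop the process now */
      /* execution continues here once the process receives SIGCONT */
      ev_resume (EV_DEFAULT);
    }
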
889 root 1.399 =item bool ev_run (loop, int flags)
890 root 1.1
891     Finally, this is it, the event handler. This function usually is called
892 root 1.268 after you have initialised all your watchers and you want to start
893 root 1.310 handling events. It will ask the operating system for any new events, call
894 root 1.399 the watcher callbacks, and then repeat the whole process indefinitely: This
895 root 1.310 is why event loops are called I<loops>.
896 root 1.1
897 root 1.310 If the flags argument is specified as C<0>, it will keep handling events
898     until either no event watchers are active anymore or C<ev_break> was
899     called.
900 root 1.1
901 root 1.399 The return value is false if there are no more active watchers (which
902     usually means "all jobs done" or "deadlock"), and true in all other cases
903     (which usually means "you should call C<ev_run> again").
904    
905 root 1.310 Please note that an explicit C<ev_break> is usually better than
906 root 1.34 relying on all watchers to be stopped when deciding when a program has
907 root 1.183 finished (especially in interactive programs), but having a program
908     that automatically loops as long as it has to and no longer by virtue
909     of relying on its watchers stopping correctly, that is truly a thing of
910     beauty.
911 root 1.34
912 root 1.399 This function is I<mostly> exception-safe - you can break out of a
913     C<ev_run> call by calling C<longjmp> in a callback, throwing a C++
914 root 1.343 exception and so on. This does not decrement the C<ev_depth> value, nor
915     will it clear any outstanding C<EVBREAK_ONE> breaks.
916    
917 root 1.310 A flags value of C<EVRUN_NOWAIT> will look for new events, will handle
918     those events and any already outstanding ones, but will not wait and
919     block your process in case there are no events and will return after one
920     iteration of the loop. This is sometimes useful to poll and handle new
921     events while doing lengthy calculations, to keep the program responsive.
922 root 1.1
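Example (an illustrative sketch - C<work_left> and C<do_a_little_work> are
hypothetical): interleave event handling with a long calculation by
polling between work chunks without ever blocking:

    while (work_left)
      {
        do_a_little_work ();
        ev_run (loop, EVRUN_NOWAIT); /* handle pending events, never block */
      }
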
923 root 1.310 A flags value of C<EVRUN_ONCE> will look for new events (waiting if
924 root 1.183 necessary) and will handle those and any already outstanding ones. It
925     will block your process until at least one new event arrives (which could
926 root 1.208 be an event internal to libev itself, so there is no guarantee that a
927 root 1.183 user-registered callback will be called), and will return after one
928     iteration of the loop.
929    
930     This is useful if you are waiting for some external event in conjunction
931     with something not expressible using other libev watchers (i.e. "roll your
932 root 1.310 own C<ev_run>"). However, a pair of C<ev_prepare>/C<ev_check> watchers is
933 root 1.33 usually a better approach for this kind of thing.
934    
935 root 1.369 Here are the gory details of what C<ev_run> does (this is for your
936     understanding, not a guarantee that things will work exactly like this in
937     future versions):
938 root 1.33
939 root 1.310 - Increment loop depth.
940     - Reset the ev_break status.
941 root 1.77 - Before the first iteration, call any pending watchers.
942 root 1.310 LOOP:
943     - If EVFLAG_FORKCHECK was used, check for a fork.
944 root 1.171 - If a fork was detected (by any means), queue and call all fork watchers.
945 root 1.113 - Queue and call all prepare watchers.
946 root 1.310 - If ev_break was called, goto FINISH.
947 root 1.171 - If we have been forked, detach and recreate the kernel state
948     as to not disturb the other process.
949 root 1.33 - Update the kernel state with all outstanding changes.
950 root 1.171 - Update the "event loop time" (ev_now ()).
951 root 1.113 - Calculate for how long to sleep or block, if at all
952 root 1.310 (active idle watchers, EVRUN_NOWAIT or not having
953 root 1.113 any active watchers at all will result in not sleeping).
954     - Sleep if the I/O and timer collect interval say so.
955 root 1.310 - Increment loop iteration counter.
956 root 1.33 - Block the process, waiting for any events.
957     - Queue all outstanding I/O (fd) events.
958 root 1.171 - Update the "event loop time" (ev_now ()), and do time jump adjustments.
959 root 1.183 - Queue all expired timers.
960     - Queue all expired periodics.
961 root 1.310 - Queue all idle watchers with priority higher than that of pending events.
962 root 1.33 - Queue all check watchers.
963     - Call all queued watchers in reverse order (i.e. check watchers first).
964 root 1.467 Signals, async and child watchers are implemented as I/O watchers, and
965     will be handled here by queueing them when their watcher gets executed.
966 root 1.310 - If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT
967     were used, or there are no active watchers, goto FINISH, otherwise
968     continue with step LOOP.
969     FINISH:
970     - Reset the ev_break status iff it was EVBREAK_ONE.
971     - Decrement the loop depth.
972     - Return.
973 root 1.27
974 root 1.114 Example: Queue some jobs and then loop until no events are outstanding
975 root 1.34 anymore.
976    
977     ... queue jobs here, make sure they register event watchers as long
978     ... as they still have work to do (even an idle watcher will do..)
979 root 1.310 ev_run (my_loop, 0);
980 root 1.362 ... jobs done or somebody called break. yeah!
981 root 1.34
982 root 1.310 =item ev_break (loop, how)
983 root 1.1
984 root 1.310 Can be used to make a call to C<ev_run> return early (but only after it
985 root 1.9 has processed all outstanding events). The C<how> argument must be either
986 root 1.310 C<EVBREAK_ONE>, which will make the innermost C<ev_run> call return, or
987     C<EVBREAK_ALL>, which will make all nested C<ev_run> calls return.
988 root 1.1
989 root 1.343 This "break state" will be cleared on the next call to C<ev_run>.
990 root 1.115
991 root 1.343 It is safe to call C<ev_break> from outside any C<ev_run> calls, too, in
992     which case it will have no effect.
993 root 1.194
994 root 1.1 =item ev_ref (loop)
995    
996     =item ev_unref (loop)
997    
998 root 1.9 Ref/unref can be used to add or remove a reference count on the event
999     loop: Every watcher keeps one reference, and as long as the reference
1000 root 1.310 count is nonzero, C<ev_run> will not return on its own.
1001 root 1.183
1002 root 1.276 This is useful when you have a watcher that you never intend to
1003 root 1.310 unregister, but that nevertheless should not keep C<ev_run> from
1004 root 1.276 returning. In such a case, call C<ev_unref> after starting, and C<ev_ref>
1005     before stopping it.
1006 root 1.183
1007 root 1.229 As an example, libev itself uses this for its internal signal pipe: It
1008 root 1.310 is not visible to the libev user and should not keep C<ev_run> from
1009 root 1.229 exiting if no event watchers registered by it are active. It is also an
1010     excellent way to do this for generic recurring timers or from within
1011     third-party libraries. Just remember to I<unref after start> and I<ref
1012     before stop> (but only if the watcher wasn't active before, or was active
1013     before, respectively. Note also that libev might stop watchers itself
1014     (e.g. non-repeating timers) in which case you have to C<ev_ref>
1015     in the callback).
1016 root 1.1
1017 root 1.310 Example: Create a signal watcher, but keep it from keeping C<ev_run>
1018 root 1.34 running when nothing else is active.
1019    
1020 root 1.198 ev_signal exitsig;
1021 root 1.164 ev_signal_init (&exitsig, sig_cb, SIGINT);
1022     ev_signal_start (loop, &exitsig);
1023 sf-exg 1.348 ev_unref (loop);
1024 root 1.34
1025 root 1.54 Example: For some weird reason, unregister the above signal handler again.
1026 root 1.34
1027 root 1.164 ev_ref (loop);
1028     ev_signal_stop (loop, &exitsig);
1029 root 1.34
1030 root 1.97 =item ev_set_io_collect_interval (loop, ev_tstamp interval)
1031    
1032     =item ev_set_timeout_collect_interval (loop, ev_tstamp interval)
1033    
1034     These advanced functions influence the time that libev will spend waiting
1035 root 1.171 for events. Both time intervals are by default C<0>, meaning that libev
1036     will try to invoke timer/periodic callbacks and I/O callbacks with minimum
1037     latency.
1038 root 1.97
1039     Setting these to a higher value (the C<interval> I<must> be >= C<0>)
1040 root 1.171 allows libev to delay invocation of I/O and timer/periodic callbacks
1041     to increase efficiency of loop iterations (or to increase power-saving
1042     opportunities).
1043 root 1.97
1044 root 1.183 The idea is that sometimes your program runs just fast enough to handle
1045     one (or very few) event(s) per loop iteration. While this makes the
1046     program responsive, it also wastes a lot of CPU time to poll for new
1047 root 1.97 events, especially with backends like C<select ()> which have a high
1048     overhead for the actual polling but can deliver many events at once.
1049    
1050     By setting a higher I<io collect interval> you allow libev to spend more
1051     time collecting I/O events, so you can handle more events per iteration,
1052     at the cost of increasing latency. Timeouts (both C<ev_periodic> and
1053 root 1.372 C<ev_timer>) will not be affected. Setting this to a non-null value will
1054 root 1.245 introduce an additional C<ev_sleep ()> call into most loop iterations. The
1055     sleep time ensures that libev will not poll for I/O events more often than
1056 root 1.373 once per this interval, on average (as long as the host time resolution is
1057     good enough).
1058 root 1.97
1059     Likewise, by setting a higher I<timeout collect interval> you allow libev
1060     to spend more time collecting timeouts, at the expense of increased
1061 root 1.183 latency/jitter/inexactness (the watcher callback will be called
1062     later). C<ev_io> watchers will not be affected. Setting this to a non-null
1063     value will not introduce any overhead in libev.
1064 root 1.97
1065 root 1.161 Many (busy) programs can usually benefit by setting the I/O collect
1066 root 1.98 interval to a value near C<0.1> or so, which is often enough for
1067     interactive servers (of course not for games), likewise for timeouts. It
1068     usually doesn't make much sense to set it to a lower value than C<0.01>,
1069 root 1.245 as this approaches the timing granularity of most systems. Note that if
1070     you do transactions with the outside world and you can't increase the
1071     parallelism, then this setting will limit your transaction rate (if you
1072     need to poll once per transaction and the I/O collect interval is 0.01,
1073 sf-exg 1.298 then you can't do more than 100 transactions per second).
1074 root 1.97
1075 root 1.171 Setting the I<timeout collect interval> can improve the opportunity for
1076     saving power, as the program will "bundle" timer callback invocations that
1077     are "near" in time together, by delaying some, thus reducing the number of
1078     times the process sleeps and wakes up again. Another useful technique to
1079     reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
1080     they fire on, say, one-second boundaries only.
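
Example: a rough sketch of the latter technique - C<tick> and C<tick_cb> are
made-up names - using an C<ev_periodic> that fires on full-second boundaries
only:

    ev_periodic tick;

    ev_periodic_init (&tick, tick_cb, 0., 1., 0);
    ev_periodic_start (loop, &tick);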
1081    
1082 root 1.245 Example: we only need 0.1s timeout granularity, and we wish not to poll
1083     more often than 100 times per second:
1084    
1085     ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
1086     ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);
1087    
1088 root 1.253 =item ev_invoke_pending (loop)
1089    
1090     This call will simply invoke all pending watchers while resetting their
1091 root 1.310 pending state. Normally, C<ev_run> does this automatically when required,
1092 root 1.313 but when overriding the invoke callback this call comes in handy. This
1093     function can be invoked from a watcher - this can be useful for example
1094     when you want to do some lengthy calculation and want to pass further
1095     event handling to another thread (you still have to make sure only one
1096     thread executes within C<ev_invoke_pending> or C<ev_run> of course).
1097 root 1.253
1098 root 1.256 =item int ev_pending_count (loop)
1099    
1100     Returns the number of pending watchers - zero indicates that no watchers
1101     are pending.
1102    
1103 root 1.253 =item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))
1104    
1105     This overrides the invoke pending functionality of the loop: Instead of
1106 root 1.310 invoking all pending watchers when there are any, C<ev_run> will call
1107 root 1.253 this callback instead. This is useful, for example, when you want to
1108     invoke the actual watchers inside another context (another thread etc.).
1109    
1110     If you want to reset the callback, use C<ev_invoke_pending> as new
1111     callback.
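
Example: a minimal sketch of such an override - the callback name and the
counter are made up - that merely counts how often watchers were invoked in
bulk before handing the actual work back to C<ev_invoke_pending>:

    static unsigned int bulk_invocations; // hypothetical statistics counter

    static void
    my_invoke_pending_cb (EV_P)
    {
      if (ev_pending_count (EV_A))
        ++bulk_invocations;

      ev_invoke_pending (EV_A); // actually run the pending watchers
    }

    ...
    ev_set_invoke_pending_cb (loop, my_invoke_pending_cb);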
1112    
1113 root 1.401 =item ev_set_loop_release_cb (loop, void (*release)(EV_P) throw (), void (*acquire)(EV_P) throw ())
1114 root 1.253
1115     Sometimes you want to share the same loop between multiple threads. This
1116     can be done relatively simply by putting mutex_lock/unlock calls around
1117     each call to a libev function.
1118    
1119 root 1.310 However, C<ev_run> can run for an indefinite time, so it is not feasible
1120     to wait for it to return. One way around this is to wake up the event
1121 root 1.385 loop via C<ev_break> and C<ev_async_send>, another way is to set these
1122 root 1.310 I<release> and I<acquire> callbacks on the loop.
1123 root 1.253
1124     When set, then C<release> will be called just before the thread is
1125     suspended waiting for new events, and C<acquire> is called just
1126     afterwards.
1127    
1128     Ideally, C<release> will just call your mutex_unlock function, and
1129     C<acquire> will just call the mutex_lock function again.
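
Example: a minimal sketch, assuming POSIX threads and a hypothetical mutex
C<loop_lock> that every thread takes before touching the loop:

    static pthread_mutex_t loop_lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    l_release (EV_P)
    {
      pthread_mutex_unlock (&loop_lock);
    }

    static void
    l_acquire (EV_P)
    {
      pthread_mutex_lock (&loop_lock);
    }

    ...
    ev_set_loop_release_cb (loop, l_release, l_acquire);

Other threads can then take C<loop_lock>, modify the loop, wake it up via
C<ev_async_send> and release the lock again - see the locking example in the
C<THREADS> section for the complete pattern.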
1130    
1131 root 1.254 While event loop modifications are allowed between invocations of
1132     C<release> and C<acquire> (that's their only purpose after all), no
1133     modifications done will affect the event loop, i.e. adding watchers will
1134     have no effect on the set of file descriptors being watched, or the time
1135 root 1.310 waited. Use an C<ev_async> watcher to wake up C<ev_run> when you want it
1136 root 1.254 to take note of any changes you made.
1137    
1138 root 1.310 In theory, threads executing C<ev_run> will be async-cancel safe between
1139 root 1.254 invocations of C<release> and C<acquire>.
1140    
1141     See also the locking example in the C<THREADS> section later in this
1142     document.
1143    
1144 root 1.253 =item ev_set_userdata (loop, void *data)
1145    
1146 root 1.344 =item void *ev_userdata (loop)
1147 root 1.253
1148     Set and retrieve a single C<void *> associated with a loop. When
1149     C<ev_set_userdata> has never been called, then C<ev_userdata> returns
1150 root 1.342 C<0>.
1151 root 1.253
1152     These two functions can be used to associate arbitrary data with a loop,
1153     and are intended solely for the C<invoke_pending_cb>, C<release> and
1154     C<acquire> callbacks described above, but of course can be (ab-)used for
1155     any other purpose as well.
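
Example: a small sketch using a hypothetical per-application context
structure:

    struct my_context *ctx = my_context_new (); // hypothetical constructor

    ev_set_userdata (loop, ctx);

    // ... later, e.g. inside the invoke_pending or release callback ...
    struct my_context *ctx = ev_userdata (loop);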
1156    
1157 root 1.309 =item ev_verify (loop)
1158 root 1.159
1159     This function only does something when C<EV_VERIFY> support has been
1160 root 1.200 compiled in, which is the default for non-minimal builds. It tries to go
1161 root 1.183 through all internal structures and checks them for validity. If anything
1162     is found to be inconsistent, it will print an error message to standard
1163     error and call C<abort ()>.
1164 root 1.159
1165     This can be used to catch bugs inside libev itself: under normal
1166     circumstances, this function will never abort as of course libev keeps its
1167     data structures consistent.
1168    
1169 root 1.1 =back
1170    
1171 root 1.42
1172 root 1.1 =head1 ANATOMY OF A WATCHER
1173    
1174 root 1.200 In the following description, uppercase C<TYPE> in names stands for the
1175     watcher type, e.g. C<ev_TYPE_start> can mean C<ev_timer_start> for timer
1176     watchers and C<ev_io_start> for I/O watchers.
1177    
1178 root 1.311 A watcher is an opaque structure that you allocate and register to record
1179     your interest in some event. To make a concrete example, imagine you want
1180     to wait for STDIN to become readable, you would create an C<ev_io> watcher
1181     for that:
1182 root 1.1
1183 root 1.198 static void my_cb (struct ev_loop *loop, ev_io *w, int revents)
1184 root 1.164 {
1185     ev_io_stop (loop, w);
1186 root 1.310 ev_break (loop, EVBREAK_ALL);
1187 root 1.164 }
1188    
1189     struct ev_loop *loop = ev_default_loop (0);
1190 root 1.200
1191 root 1.198 ev_io stdin_watcher;
1192 root 1.200
1193 root 1.164 ev_init (&stdin_watcher, my_cb);
1194     ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ);
1195     ev_io_start (loop, &stdin_watcher);
1196 root 1.200
1197 root 1.310 ev_run (loop, 0);
1198 root 1.1
1199     As you can see, you are responsible for allocating the memory for your
1200 root 1.200 watcher structures (and it is I<usually> a bad idea to do this on the
1201     stack).
1202    
1203     Each watcher has an associated watcher structure (called C<struct ev_TYPE>
1204     or simply C<ev_TYPE>, as typedefs are provided for all watcher structs).
1205 root 1.1
1206 root 1.311 Each watcher structure must be initialised by a call to C<ev_init (watcher
1207     *, callback)>, which expects a callback to be provided. This callback is
1208     invoked each time the event occurs (or, in the case of I/O watchers, each
1209     time the event loop detects that the file descriptor given is readable
1210     and/or writable).
1211 root 1.1
1212 root 1.200 Each watcher type further has its own C<< ev_TYPE_set (watcher *, ...) >>
1213     macro to configure it, with arguments specific to the watcher type. There
1214     is also a macro to combine initialisation and setting in one call: C<<
1215     ev_TYPE_init (watcher *, callback, ...) >>.
1216 root 1.1
1217     To make the watcher actually watch out for events, you have to start it
1218 root 1.200 with a watcher-specific start function (C<< ev_TYPE_start (loop, watcher
1219 root 1.1 *) >>), and you can stop watching for events at any time by calling the
1220 root 1.200 corresponding stop function (C<< ev_TYPE_stop (loop, watcher *) >>).
1221 root 1.1
1222     As long as your watcher is active (has been started but not stopped) you
1223 root 1.460 must not touch the values stored in it except when explicitly documented
1224     otherwise. Most specifically you must never reinitialise it or call its
1225     C<ev_TYPE_set> macro.
1226 root 1.1
1227     Each and every callback receives the event loop pointer as first, the
1228     registered watcher structure as second, and a bitset of received events as
1229     third argument.
1230    
1231 root 1.14 The received events usually include a single bit per event type received
1232 root 1.1 (you can receive multiple events at the same time). The possible bit masks
1233     are:
1234    
1235     =over 4
1236    
1237 root 1.10 =item C<EV_READ>
1238 root 1.1
1239 root 1.10 =item C<EV_WRITE>
1240 root 1.1
1241 root 1.10 The file descriptor in the C<ev_io> watcher has become readable and/or
1242 root 1.1 writable.
1243    
1244 root 1.289 =item C<EV_TIMER>
1245 root 1.1
1246 root 1.10 The C<ev_timer> watcher has timed out.
1247 root 1.1
1248 root 1.10 =item C<EV_PERIODIC>
1249 root 1.1
1250 root 1.10 The C<ev_periodic> watcher has timed out.
1251 root 1.1
1252 root 1.10 =item C<EV_SIGNAL>
1253 root 1.1
1254 root 1.10 The signal specified in the C<ev_signal> watcher has been received by a thread.
1255 root 1.1
1256 root 1.10 =item C<EV_CHILD>
1257 root 1.1
1258 root 1.10 The pid specified in the C<ev_child> watcher has received a status change.
1259 root 1.1
1260 root 1.48 =item C<EV_STAT>
1261    
1262     The path specified in the C<ev_stat> watcher changed its attributes somehow.
1263    
1264 root 1.10 =item C<EV_IDLE>
1265 root 1.1
1266 root 1.10 The C<ev_idle> watcher has determined that you have nothing better to do.
1267 root 1.1
1268 root 1.10 =item C<EV_PREPARE>
1269 root 1.1
1270 root 1.10 =item C<EV_CHECK>
1271 root 1.1
1272 root 1.405 All C<ev_prepare> watchers are invoked just I<before> C<ev_run> starts to
1273     gather new events, and all C<ev_check> watchers are queued (not invoked)
1274     just after C<ev_run> has gathered them, but before it queues any callbacks
1275     for any received events. That means C<ev_prepare> watchers are the last
1276     watchers invoked before the event loop sleeps or polls for new events, and
1277     C<ev_check> watchers will be invoked before any other watchers of the same
1278     or lower priority within an event loop iteration.
1279    
1280     Callbacks of both watcher types can start and stop as many watchers as
1281     they want, and all of them will be taken into account (for example, a
1282     C<ev_prepare> watcher might start an idle watcher to keep C<ev_run> from
1283     blocking).
1284 root 1.1
1285 root 1.50 =item C<EV_EMBED>
1286    
1287     The embedded event loop specified in the C<ev_embed> watcher needs attention.
1288    
1289     =item C<EV_FORK>
1290    
1291     The event loop has been resumed in the child process after fork (see
1292     C<ev_fork>).
1293    
1294 root 1.324 =item C<EV_CLEANUP>
1295    
1296 sf-exg 1.330 The event loop is about to be destroyed (see C<ev_cleanup>).
1297 root 1.324
1298 root 1.122 =item C<EV_ASYNC>
1299    
1300     The given async watcher has been asynchronously notified (see C<ev_async>).
1301    
1302 root 1.229 =item C<EV_CUSTOM>
1303    
1304     Not ever sent (or otherwise used) by libev itself, but can be freely used
1305     by libev users to signal watchers (e.g. via C<ev_feed_event>).
1306    
1307 root 1.10 =item C<EV_ERROR>
1308 root 1.1
1309 root 1.161 An unspecified error has occurred and the watcher has been stopped. This might
1310 root 1.1 happen because the watcher could not be properly started because libev
1311     ran out of memory, a file descriptor was found to be closed or any other
1312 root 1.197 problem. Libev considers these application bugs.
1313    
1314     The best way to act on it is to report the problem and somehow cope with the
1315     watcher being stopped. Note that well-written programs should not receive
1316     an error ever, so when your watcher receives it, this usually indicates a
1317     bug in your program.
1318 root 1.1
1319 root 1.183 Libev will usually signal a few "dummy" events together with an error, for
1320     example it might indicate that a fd is readable or writable, and if your
1321     callback is well-written it can just attempt the operation and cope with
1322     the error from read() or write(). This will not work in multi-threaded
1323     programs, though, as the fd could already be closed and reused for another
1324     thing, so beware.
1325 root 1.1
1326     =back
1327    
1328 root 1.42 =head2 GENERIC WATCHER FUNCTIONS
1329 root 1.36
1330     =over 4
1331    
1332     =item C<ev_init> (ev_TYPE *watcher, callback)
1333    
1334     This macro initialises the generic portion of a watcher. The contents
1335     of the watcher object can be arbitrary (so C<malloc> will do). Only
1336     the generic parts of the watcher are initialised, you I<need> to call
1337     the type-specific C<ev_TYPE_set> macro afterwards to initialise the
1338     type-specific parts. For each type there is also a C<ev_TYPE_init> macro
1339     which rolls both calls into one.
1340    
1341     You can reinitialise a watcher at any time as long as it has been stopped
1342     (or never started) and there are no pending events outstanding.
1343    
1344 root 1.198 The callback is always of type C<void (*)(struct ev_loop *loop, ev_TYPE *watcher,
1345 root 1.36 int revents)>.
1346    
1347 root 1.183 Example: Initialise an C<ev_io> watcher in two steps.
1348    
1349     ev_io w;
1350     ev_init (&w, my_cb);
1351     ev_io_set (&w, STDIN_FILENO, EV_READ);
1352    
1353 root 1.274 =item C<ev_TYPE_set> (ev_TYPE *watcher, [args])
1354 root 1.36
1355     This macro initialises the type-specific parts of a watcher. You need to
1356     call C<ev_init> at least once before you call this macro, but you can
1357     call C<ev_TYPE_set> any number of times. You must not, however, call this
1358     macro on a watcher that is active (it can be pending, however, which is a
1359     difference to the C<ev_init> macro).
1360    
1361     Although some watcher types do not have type-specific arguments
1362     (e.g. C<ev_prepare>) you still need to call its C<set> macro.
1363    
1364 root 1.183 See C<ev_init>, above, for an example.
1365    
1366 root 1.36 =item C<ev_TYPE_init> (ev_TYPE *watcher, callback, [args])
1367    
1368 root 1.161 This convenience macro rolls both C<ev_init> and C<ev_TYPE_set> macro
1369     calls into a single call. This is the most convenient method to initialise
1370 root 1.36 a watcher. The same limitations apply, of course.
1371    
1372 root 1.183 Example: Initialise and set an C<ev_io> watcher in one step.
1373    
1374     ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);
1375    
1376 root 1.274 =item C<ev_TYPE_start> (loop, ev_TYPE *watcher)
1377 root 1.36
1378     Starts (activates) the given watcher. Only active watchers will receive
1379     events. If the watcher is already active nothing will happen.
1380    
1381 root 1.183 Example: Start the C<ev_io> watcher that is being abused as example in this
1382     whole section.
1383    
1384     ev_io_start (EV_DEFAULT_UC, &w);
1385    
1386 root 1.274 =item C<ev_TYPE_stop> (loop, ev_TYPE *watcher)
1387 root 1.36
1388 root 1.195 Stops the given watcher if active, and clears the pending status (whether
1389     the watcher was active or not).
1390    
1391     It is possible that stopped watchers are pending - for example,
1392     non-repeating timers are being stopped when they become pending - but
1393     calling C<ev_TYPE_stop> ensures that the watcher is neither active nor
1394     pending. If you want to free or reuse the memory used by the watcher it is
1395     therefore a good idea to always call its C<ev_TYPE_stop> function.
1396 root 1.36
1397     =item bool ev_is_active (ev_TYPE *watcher)
1398    
1399     Returns a true value iff the watcher is active (i.e. it has been started
1400     and not yet been stopped). As long as a watcher is active you must not modify
1401 root 1.465 it unless documented otherwise.
1402 root 1.36
1403 root 1.469 Obviously, it is safe to call this on an active watcher, or actually any
1404     watcher that is initialised.
1405    
1406 root 1.36 =item bool ev_is_pending (ev_TYPE *watcher)
1407    
1408     Returns a true value iff the watcher is pending, (i.e. it has outstanding
1409     events but its callback has not yet been invoked). As long as a watcher
1410     is pending (but not active) you must not call an init function on it (but
1411 root 1.73 C<ev_TYPE_set> is safe), you must not change its priority, and you must
1412     make sure the watcher is available to libev (e.g. you cannot C<free ()>
1413     it).
1414 root 1.36
1415 root 1.469 It is safe to call this on any watcher in any state as long as it is
1416     initialised.
1417    
1418 root 1.55 =item callback ev_cb (ev_TYPE *watcher)
1419 root 1.36
1420     Returns the callback currently set on the watcher.
1421    
1422 root 1.408 =item ev_set_cb (ev_TYPE *watcher, callback)
1423 root 1.36
1424     Change the callback. You can change the callback at virtually any time
1425     (modulo threads).
1426    
1427 root 1.274 =item ev_set_priority (ev_TYPE *watcher, int priority)
1428 root 1.67
1429     =item int ev_priority (ev_TYPE *watcher)
1430    
1431     Set and query the priority of the watcher. The priority is a small
1432     integer between C<EV_MAXPRI> (default: C<2>) and C<EV_MINPRI>
1433     (default: C<-2>). Pending watchers with higher priority will be invoked
1434     before watchers with lower priority, but priority will not keep watchers
1435     from being executed (except for C<ev_idle> watchers).
1436    
1437     If you need to suppress invocation when higher priority events are pending
1438     you need to look at C<ev_idle> watchers, which provide this functionality.
1439    
1440 root 1.469 You I<must not> change the priority of a watcher as long as it is active
1441     or pending. Reading the priority with C<ev_priority> is fine in any state.
1442 root 1.73
1443 root 1.233 Setting a priority outside the range of C<EV_MINPRI> to C<EV_MAXPRI> is
1444     fine, as long as you do not mind that the priority value you query might
1445     or might not have been clamped to the valid range.
1446    
1447 root 1.67 The default priority used by watchers when no priority has been set is
1448     always C<0>, which is supposed to not be too high and not be too low :).
1449    
1450 root 1.413 See L</WATCHER PRIORITY MODELS>, below, for a more thorough treatment of
1451 root 1.233 priorities.
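
Example: run an C<ev_io> watcher at the lowest priority - the priority has to
be set while the watcher is inactive, e.g. before it is started:

    ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);
    ev_set_priority (&w, EV_MINPRI);
    ev_io_start (EV_DEFAULT_UC, &w);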
1452 root 1.67
1453 root 1.74 =item ev_invoke (loop, ev_TYPE *watcher, int revents)
1454    
1455     Invoke the C<watcher> with the given C<loop> and C<revents>. Neither
1456     C<loop> nor C<revents> need to be valid as long as the watcher callback
1457 root 1.183 can deal with that fact, as both are simply passed through to the
1458     callback.
1459 root 1.74
1460     =item int ev_clear_pending (loop, ev_TYPE *watcher)
1461    
1462 root 1.183 If the watcher is pending, this function clears its pending status and
1463     returns its C<revents> bitset (as if its callback was invoked). If the
1464 root 1.74 watcher isn't pending it does nothing and returns C<0>.
1465    
1466 root 1.183 Sometimes it can be useful to "poll" a watcher instead of waiting for its
1467     callback to be invoked, which can be accomplished with this function.
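
Example: a small sketch of such polling, assuming a timer watcher named
C<timer> that may or may not have expired yet:

    int revents = ev_clear_pending (loop, &timer);

    if (revents & EV_TIMER)
      {
        // the timeout has occurred - handle it here instead of
        // waiting for the callback to be invoked
      }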
1468    
1469 root 1.274 =item ev_feed_event (loop, ev_TYPE *watcher, int revents)
1470 root 1.273
1471     Feeds the given event set into the event loop, as if the specified event
1472     had happened for the specified watcher (which must be a pointer to an
1473 root 1.469 initialised but not necessarily started event watcher, though it can be
1474     active). Obviously you must not free the watcher as long as it has pending
1475     events.
1476 root 1.273
1477     Stopping the watcher, letting libev invoke it, or calling
1478     C<ev_clear_pending> will clear the pending event, even if the watcher was
1479     not started in the first place.
1480    
1481     See also C<ev_feed_fd_event> and C<ev_feed_signal_event> for related
1482     functions that do not need a watcher.
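
Example: artificially feed an application-defined event to a watcher
(C<my_watcher> is a made-up name):

    ev_feed_event (loop, &my_watcher, EV_CUSTOM);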
1483    
1484 root 1.36 =back
1485    
1486 root 1.413 See also the L</ASSOCIATING CUSTOM DATA WITH A WATCHER> and L</BUILDING YOUR
1487 root 1.357 OWN COMPOSITE WATCHERS> idioms.
1488 root 1.1
1489 root 1.335 =head2 WATCHER STATES
1490    
1491     There are various watcher states mentioned throughout this manual -
1492     active, pending and so on. In this section these states and the rules to
1493     transition between them will be described in more detail - and while these
1494     rules might look complicated, they usually do "the right thing".
1495    
1496     =over 4
1497    
1498 root 1.422 =item initialised
1499 root 1.335
1500 sf-exg 1.374 Before a watcher can be registered with the event loop it has to be
1501 root 1.335 initialised. This can be done with a call to C<ev_TYPE_init>, or calls to
1502     C<ev_init> followed by the watcher-specific C<ev_TYPE_set> function.
1503    
1504 root 1.361 In this state it is simply some block of memory that is suitable for
1505     use in an event loop. It can be moved around, freed, reused etc. at
1506     will - as long as you either keep the memory contents intact, or call
1507     C<ev_TYPE_init> again.
1508 root 1.335
1509     =item started/running/active
1510    
1511     Once a watcher has been started with a call to C<ev_TYPE_start> it becomes
1512     property of the event loop, and is actively waiting for events. While in
1513 root 1.469 this state it cannot be accessed (except in a few documented ways, such as
1514     stopping it), moved, freed or anything else - the only legal thing is to
1515     keep a pointer to it, and call libev functions on it that are documented
1516     to work on active watchers.
1517    
1518     As a rule of thumb, before accessing a member or calling any function on
1519     a watcher, it should be stopped (or freshly initialised). If that is not
1520     convenient, you can check the documentation for that function or member to
1521     see if it is safe to use on an active watcher.
1522 root 1.335
1523     =item pending
1524    
1525     If a watcher is active and libev determines that an event it is interested
1526 root 1.469 in has occurred (such as a timer expiring), it will become pending. It
1527     will stay in this pending state until either it is explicitly stopped or
1528     its callback is about to be invoked, so it is not normally pending inside
1529     the watcher callback.
1530    
1531     Generally, the watcher might or might not be active while it is pending
1532     (for example, an expired non-repeating timer can be pending but no longer
1533     active). If it is pending but not active, it can be freely accessed (e.g.
1534     by calling C<ev_TYPE_set>), but it is still property of the event loop at
1535     this time, so cannot be moved, freed or reused. And if it is active the
1536     rules described in the previous item still apply.
1537 root 1.335
1538 root 1.469 Explicitly stopping a watcher will also clear the pending state
1539     unconditionally, so it is safe to stop a watcher and then free it.
1540 root 1.335
1541     It is also possible to feed an event on a watcher that is not active (e.g.
1542     via C<ev_feed_event>), in which case it becomes pending without being
1543     active.
1544    
1545     =item stopped
1546    
1547     A watcher can be stopped implicitly by libev (in which case it might still
1548     be pending), or explicitly by calling its C<ev_TYPE_stop> function. The
1549     latter will clear any pending state the watcher might be in, regardless
1550     of whether it was active or not, so stopping a watcher explicitly before
1551     freeing it is often a good idea.
1552    
1553     While stopped (and not pending) the watcher is essentially in the
1554 root 1.361 initialised state, that is, it can be reused, moved, modified in any way
1555     you wish (but when you trash the memory block, you need to C<ev_TYPE_init>
1556     it again).
1557 root 1.335
1558     =back
1559    
1560 root 1.233 =head2 WATCHER PRIORITY MODELS
1561    
1562     Many event loops support I<watcher priorities>, which are usually small
1563     integers that influence the ordering of event callback invocation
1564     between watchers in some way, all else being equal.
1565    
1566 root 1.457 In libev, watcher priorities can be set using C<ev_set_priority>. See its
1567 root 1.233 description for the more technical details such as the actual priority
1568     range.
1569    
1570     There are two common ways in which these priorities are interpreted
1571     by event loops:
1572    
1573     In the more common lock-out model, higher priorities "lock out" invocation
1574     of lower priority watchers, which means as long as higher priority
1575     watchers receive events, lower priority watchers are not being invoked.
1576    
1577     The less common only-for-ordering model uses priorities solely to order
1578     callback invocation within a single event loop iteration: Higher priority
1579     watchers are invoked before lower priority ones, but they all get invoked
1580     before polling for new events.
1581    
1582     Libev uses the second (only-for-ordering) model for all its watchers
1583     except for idle watchers (which use the lock-out model).
1584    
1585     The rationale behind this is that implementing the lock-out model for
1586     watchers is not well supported by most kernel interfaces, and most event
1587     libraries will just poll for the same events again and again as long as
1588     their callbacks have not been executed, which is very inefficient in the
1589     common case of one high-priority watcher locking out a mass of lower
1590     priority ones.
1591    
1592     Static (ordering) priorities are most useful when you have two or more
1593     watchers handling the same resource: a typical usage example is having an
1594     C<ev_io> watcher to receive data, and an associated C<ev_timer> to handle
1595     timeouts. Under load, data might be received while the program handles
1596     other jobs, but since timers normally get invoked first, the timeout
1597     handler will be executed before checking for data. In that case, giving
1598     the timer a lower priority than the I/O watcher ensures that I/O will be
1599     handled first even under adverse conditions (which is usually, but not
1600     always, what you want).
1601    
1602     Since idle watchers use the "lock-out" model, meaning that idle watchers
1603     will only be executed when no same or higher priority watchers have
1604     received events, they can be used to implement the "lock-out" model when
1605     required.
1606    
1607     For example, to emulate how many other event libraries handle priorities,
1608     you can associate an C<ev_idle> watcher to each such watcher, and in
1609     the normal watcher callback, you just start the idle watcher. The real
1610     processing is done in the idle watcher callback. This causes libev to
1611 sf-exg 1.298 continuously poll and process kernel event data for the watcher, but when
1612 root 1.233 the lock-out case is known to be rare (which in turn is rare :), this is
1613     workable.
1614    
1615     Usually, however, the lock-out model implemented that way will perform
1616     miserably under the type of load it was designed to handle. In that case,
1617     it might be preferable to stop the real watcher before starting the
1618     idle watcher, so the kernel will not have to process the event in case
1619     the actual processing will be delayed for considerable time.
1620    
1621     Here is an example of an I/O watcher that should run at a strictly lower
1622     priority than the default, and which should only process data when no
1623     other events are pending:
1624    
1625     ev_idle idle; // actual processing watcher
1626     ev_io io; // actual event watcher
1627    
1628     static void
1629     io_cb (EV_P_ ev_io *w, int revents)
1630     {
1631     // stop the I/O watcher, we received the event, but
1632     // are not yet ready to handle it.
1633     ev_io_stop (EV_A_ w);
1634    
1635 root 1.296 // start the idle watcher to handle the actual event.
1636 root 1.233 // it will not be executed as long as other watchers
1637     // with the default priority are receiving events.
1638     ev_idle_start (EV_A_ &idle);
1639     }
1640    
1641     static void
1642 root 1.242 idle_cb (EV_P_ ev_idle *w, int revents)
1643 root 1.233 {
1644     // actual processing
1645     read (STDIN_FILENO, ...);
1646    
1647     // have to start the I/O watcher again, as
1648     // we have handled the event
1649     ev_io_start (EV_A_ &io);
1650     }
1651    
1652     // initialisation
1653     ev_idle_init (&idle, idle_cb);
1654     ev_io_init (&io, io_cb, STDIN_FILENO, EV_READ);
1655     ev_io_start (EV_DEFAULT_ &io);
1656    
1657     In the "real" world, it might also be beneficial to start a timer, so that
1658     low-priority connections can not be locked out forever under load. This
1659     enables your program to keep a lower latency for important connections
1660     during short periods of high load, while not completely locking out less
1661     important ones.
1662    
1663 root 1.1
1664     =head1 WATCHER TYPES
1665    
1666     This section describes each watcher in detail, but will not repeat
1667 root 1.48 information given in the last section. Any initialisation/set macros,
1668     functions and members specific to the watcher type are explained.
1669    
1670 root 1.459 Most members are additionally marked with either I<[read-only]>, meaning
1671     that, while the watcher is active, you can look at the member and expect
1672     some sensible content, but you must not modify it (you can modify it while
1673     the watcher is stopped to your hearts content), or I<[read-write]>, which
1674 root 1.463 means you can expect it to have some sensible content while the watcher is
1675     active, but you can also modify it (within the same thread as the event
1676     loop, i.e. without creating data races). Modifying it may not do something
1677 root 1.48 sensible or take immediate effect (or do anything at all), but libev will
1678     not crash or malfunction in any way.
1679 root 1.1
1680 root 1.459 In any case, the documentation for each member will explain what the
1681     effects are, and if there are any additional access restrictions.
1682 root 1.34
1683 root 1.42 =head2 C<ev_io> - is this file descriptor readable or writable?
1684 root 1.1
1685 root 1.4 I/O watchers check whether a file descriptor is readable or writable
1686 root 1.42 in each iteration of the event loop, or, more precisely, when reading
1687     would not block the process and writing would at least be able to write
1688     some data. This behaviour is called level-triggering because you keep
1689     receiving events as long as the condition persists. Remember you can stop
1690     the watcher if you don't want to act on the event and don't want to
1691     receive future events.
1692 root 1.1
1693 root 1.23 In general you can register as many read and/or write event watchers per
1694 root 1.8 fd as you want (as long as you don't confuse yourself). Setting all file
1695     descriptors to non-blocking mode is also usually a good idea (but not
1696     required if you know what you are doing).
1697    
1698 root 1.42 Another thing you have to watch out for is that it is quite easy to
1699 root 1.354 receive "spurious" readiness notifications, that is, your callback might
1700 root 1.42 be called with C<EV_READ> but a subsequent C<read>(2) will actually block
1701 root 1.354 because there is no data. It is very easy to get into this situation even
1702     with a relatively standard program structure. Thus it is best to always
1703     use non-blocking I/O: An extra C<read>(2) returning C<EAGAIN> is far
1704     preferable to a program hanging until some data arrives.
1705 root 1.42
1706 root 1.183 If you cannot run the fd in non-blocking mode (for example you should
1707     not play around with an Xlib connection), then you have to separately
1708     re-test whether a file descriptor is really ready with a known-to-be good
1709 root 1.354 interface such as poll (fortunately in the case of Xlib, it already does
1710     this on its own, so its quite safe to use). Some people additionally
1711     this on its own, so it's quite safe to use). Some people additionally
1712     indefinitely.
1713    
1714     But really, best use non-blocking mode.
1715 root 1.42
1716 root 1.81 =head3 The special problem of disappearing file descriptors
1717    
1718 root 1.447 Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing
1719     a file descriptor (either due to calling C<close> explicitly or any other
1720     means, such as C<dup2>). The reason is that you register interest in some
1721     file descriptor, but when it goes away, the operating system will silently
1722     drop this interest. If another file descriptor with the same number then
1723     is registered with libev, there is no efficient way to see that this is,
1724     in fact, a different file descriptor.
1725 root 1.81
1726     To avoid having to explicitly tell libev about such cases, libev follows
1727     the following policy: Each time C<ev_io_set> is being called, libev
1728     will assume that this is potentially a new file descriptor, otherwise
1729     it is assumed that the file descriptor stays the same. That means that
1730     you I<have> to call C<ev_io_set> (or C<ev_io_init>) when you change the
1731     descriptor even if the file descriptor number itself did not change.
1732    
1733     This is how one would do it normally anyway, the important point is that
1734     the libev application should not optimise around libev but should leave
1735     optimisations to libev.
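
Example: a sketch of the common case - the descriptor behind a watcher is
replaced, possibly even by one with the same number (all names are
illustrative):

    ev_io_stop (loop, &watcher);
    close (old_fd);
    // ... obtain new_fd, e.g. by reconnecting ...
    ev_io_set (&watcher, new_fd, EV_READ); // tells libev this is a new fd
    ev_io_start (loop, &watcher);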
1736    
1737 root 1.95 =head3 The special problem of dup'ed file descriptors
1738 root 1.94
1739     Some backends (e.g. epoll), cannot register events for file descriptors,
1740 root 1.103 but only events for the underlying file descriptions. That means when you
1741 root 1.109 have C<dup ()>'ed file descriptors or weirder constellations, and register
1742     events for them, only one file descriptor might actually receive events.
1743 root 1.94
1744 root 1.103 There is no workaround possible except not registering events
1745     for potentially C<dup ()>'ed file descriptors, or to resort to
1746 root 1.94 C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
1747    
1748 root 1.354 =head3 The special problem of files
1749    
1750     Many people try to use C<select> (or libev) on file descriptors
1751     representing files, and expect it to become ready when their program
1752     doesn't block on disk accesses (which can take a long time on their own).
1753    
1754     However, this cannot ever work in the "expected" way - you get a readiness
1755     notification as soon as the kernel knows whether and how much data is
1756     there, and in the case of open files, that's always the case, so you
1757     always get a readiness notification instantly, and your read (or possibly
1758     write) will still block on the disk I/O.
1759    
1760     Another way to view it is that in the case of sockets, pipes, character
1761     devices and so on, there is another party (the sender) that delivers data
1762 sf-exg 1.358 on its own, but in the case of files, there is no such thing: the disk
1763     will not send data on its own, simply because it doesn't know what you
1764 root 1.354 wish to read - you would first have to request some data.
1765    
1766     Since files are typically not-so-well supported by advanced notification
1767     mechanisms, libev tries hard to emulate POSIX behaviour with respect
1768     to files, even though you should not use it. The reason for this is
1769     convenience: sometimes you want to watch STDIN or STDOUT, which is
1770     usually a tty, often a pipe, but also sometimes files or special devices
1771     (for example, C<epoll> on Linux works with F</dev/random> but not with
1772     F</dev/urandom>), and even though the file might better be served with
1773     asynchronous I/O instead of with non-blocking I/O, it is still useful when
1774     it "just works" instead of freezing.
1775    
1776     So avoid file descriptors pointing to files when you know it (e.g. use
1777     libeio), but use them when it is convenient, e.g. for STDIN/STDOUT, or
1778     when you rarely read from a file instead of from a socket, and want to
1779     reuse the same code path.
1780    
1781 root 1.94 =head3 The special problem of fork
1782    
1783 root 1.456 Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()>
1784 root 1.447 at all or exhibit useless behaviour. Libev fully supports fork, but needs
1785     to be told about it in the child if you want to continue to use it in the
1786     child.
1787 root 1.94
1788 root 1.354 To support fork in your child processes, you have to call C<ev_loop_fork
1789     ()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
1790     C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
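
Example: a minimal sketch of the first approach, assuming the child keeps
using the default loop:

    pid_t pid = fork ();

    if (pid == 0)
      {
        // in the child: let libev re-register its kernel state
        ev_loop_fork (EV_DEFAULT);

        // ... continue to use the default loop as usual ...
      }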
1791 root 1.94
1792 root 1.138 =head3 The special problem of SIGPIPE
1793    
1794 root 1.183 While not really specific to libev, it is easy to forget about C<SIGPIPE>:
1795 root 1.174 when writing to a pipe whose other end has been closed, your program gets
1796 root 1.183 sent a SIGPIPE, which, by default, aborts your program. For most programs
1797 root 1.174 this is sensible behaviour, for daemons, this is usually undesirable.
1798 root 1.138
1799     So when you encounter spurious, unexplained daemon exits, make sure you
1800     ignore SIGPIPE (and maybe make sure you log the exit status of your daemon
1801     somewhere, as that would have given you a big clue).
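
Example: the usual way to ignore C<SIGPIPE>, done once early in the program
(C<signal> is declared in C<signal.h>):

    signal (SIGPIPE, SIG_IGN);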
1802    
1803 root 1.284 =head3 The special problem of accept()ing when you can't
1804    
1805     Many implementations of the POSIX C<accept> function (for example,
1806 sf-exg 1.292 found in post-2004 Linux) have the peculiar behaviour of not removing a
1807 root 1.284 connection from the pending queue in all error cases.
1808    
1809     For example, larger servers often run out of file descriptors (because
1810     of resource limits), causing C<accept> to fail with C<ENFILE> but not
1811     rejecting the connection, leading to libev signalling readiness on
1812     the next iteration again (the connection still exists after all), and
1813     typically causing the program to loop at 100% CPU usage.
1814    
1815     Unfortunately, the set of errors that cause this issue differs between
1816     operating systems, there is usually little the app can do to remedy the
1817     situation, and no thread-safe method of removing the connection to
1818     cope with overload is known (to me).
1819    
1820     One of the easiest ways to handle this situation is to just ignore it
1821     - when the program encounters an overload, it will just loop until the
1822     situation is over. While this is a form of busy waiting, no OS offers an
1823     event-based way to handle this situation, so it's the best one can do.
1824    
1825     A better way to handle the situation is to log any errors other than
1826     C<EAGAIN> and C<EWOULDBLOCK>, making sure not to flood the log with such
1827     messages, and continue as usual, which at least gives the user an idea of
1828     what could be wrong ("raise the ulimit!"). For extra points one could stop
1829     the C<ev_io> watcher on the listening fd "for a while", which reduces CPU
1830     usage.
1831    
1832     If your program is single-threaded, then you could also keep a dummy file
1833     descriptor for overload situations (e.g. by opening F</dev/null>), and
1834     when you run into C<ENFILE> or C<EMFILE>, close it, run C<accept>,
1835     close that fd, and create a new dummy fd. This will gracefully refuse
1836     clients under typical overload conditions.
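
Example: a rough, single-threaded sketch of this dummy-fd trick (error
handling and names are purely illustrative; needs C<errno.h> and C<fcntl.h>):

    static int dummy_fd; // opened at program start: open ("/dev/null", O_RDONLY)

    static void
    accept_cb (EV_P_ ev_io *w, int revents)
    {
      int fd = accept (w->fd, 0, 0);

      if (fd < 0 && (errno == EMFILE || errno == ENFILE))
        {
          close (dummy_fd);          // free up one descriptor
          fd = accept (w->fd, 0, 0); // accept the queued connection

          if (fd >= 0)
            close (fd);              // and refuse it gracefully

          dummy_fd = open ("/dev/null", O_RDONLY); // re-arm the dummy fd
        }
      else if (fd >= 0)
        {
          // handle the new connection as usual
        }
    }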
1837    
1838     The last way to handle it is to simply log the error and C<exit>, as
1839     is often done with C<malloc> failures, but this results in an easy
1840     opportunity for a DoS attack.
1841 root 1.81
1842 root 1.82 =head3 Watcher-Specific Functions
1843    
1844 root 1.1 =over 4
1845    
1846     =item ev_io_init (ev_io *, callback, int fd, int events)
1847    
1848     =item ev_io_set (ev_io *, int fd, int events)
1849    
1850 root 1.42 Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
1851 root 1.462 receive events for and C<events> is either C<EV_READ>, C<EV_WRITE>, both
1852     C<EV_READ | EV_WRITE> or C<0>, to express the desire to receive the given
1853     events.
1854    
1855     Note that setting the C<events> to C<0> and starting the watcher is
1856     supported, but not specially optimized - if your program sometimes happens
1857     to generate this combination this is fine, but if it is easy to avoid
1858     starting an io watcher watching for no events you should do so.
1859 root 1.32
1860 root 1.459 =item ev_io_modify (ev_io *, int events)
1861 root 1.48
1862 root 1.464 Similar to C<ev_io_set>, but only changes the requested events. Using this
1863     might be faster with some backends, as libev can assume that the C<fd>
1864     still refers to the same underlying file description, something it cannot
1865     do when using C<ev_io_set>.
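
Example: a sketch of widening an existing watcher's interest from read-only
to read-write on the same underlying file description:

    ev_io_stop (loop, &watcher);
    ev_io_modify (&watcher, EV_READ | EV_WRITE);
    ev_io_start (loop, &watcher);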
1866 root 1.48
1867 root 1.459 =item int fd [no-modify]
1868 root 1.48
1869 root 1.459 The file descriptor being watched. While it can be read at any time, you
1870     must not modify this member even when the watcher is stopped - always use
1871     C<ev_io_set> for that.
1872    
1873     =item int events [no-modify]
1874    
1875 root 1.460 The set of events the fd is being watched for, among other flags. Remember
1876     that this is a bit set - to test for C<EV_READ>, use C<< w->events &
1877     EV_READ >>, and similarly for C<EV_WRITE>.
1878 root 1.459
1879     As with C<fd>, you must not modify this member even when the watcher is
1880     stopped, always use C<ev_io_set> or C<ev_io_modify> for that.
1881 root 1.48
1882 root 1.1 =back
1883    
1884 root 1.111 =head3 Examples
1885    
1886 root 1.54 Example: Call C<stdin_readable_cb> when STDIN_FILENO has become, well
1887 root 1.34 readable, but only once. Since it is likely line-buffered, you could
1888 root 1.54 attempt to read a whole line in the callback.
1889 root 1.34
1890 root 1.164 static void
1891 root 1.198 stdin_readable_cb (struct ev_loop *loop, ev_io *w, int revents)
1892 root 1.164 {
1893     ev_io_stop (loop, w);
1894 root 1.183 .. read from stdin here (or from w->fd) and handle any I/O errors
1895 root 1.164 }
1896    
1897     ...
1898     struct ev_loop *loop = ev_default_init (0);
1899 root 1.198 ev_io stdin_readable;
1900 root 1.164 ev_io_init (&stdin_readable, stdin_readable_cb, STDIN_FILENO, EV_READ);
1901     ev_io_start (loop, &stdin_readable);
1902 root 1.310 ev_run (loop, 0);
1903 root 1.34
1904    
1905 root 1.42 =head2 C<ev_timer> - relative and optionally repeating timeouts
1906 root 1.1
1907     Timer watchers are simple relative timers that generate an event after a
1908     given time, and optionally repeating in regular intervals after that.
1909    
1910     The timers are based on real time, that is, if you register an event that
1911 root 1.161 times out after an hour and you reset your system clock to January last
1912 root 1.183 year, it will still time out after (roughly) one hour. "Roughly" because
1913 root 1.28 detecting time jumps is hard, and some inaccuracies are unavoidable (the
1914 root 1.1 monotonic clock option helps a lot here).
1915    
1916 root 1.183 The callback is guaranteed to be invoked only I<after> its timeout has
1917 root 1.240 passed (not I<at>, so on systems with very low-resolution clocks this
1918 root 1.381 might introduce a small delay, see "the special problem of being too
1919     early", below). If multiple timers become ready during the same loop
1920     iteration then the ones with earlier time-out values are invoked before
1921     ones of the same priority with later time-out values (but this is no
1922     longer true when a callback calls C<ev_run> recursively).
1923 root 1.175
1924 root 1.198 =head3 Be smart about timeouts
1925    
1926 root 1.199 Many real-world problems involve some kind of timeout, usually for error
1927 root 1.198 recovery. A typical example is an HTTP request - if the other side hangs,
1928     you want to raise some error after a while.
1929    
1930 root 1.199 What follows are some ways to handle this problem, from obvious and
1931     inefficient to smart and efficient.
1932 root 1.198
1933 root 1.199 In the following, a 60 second activity timeout is assumed - a timeout that
1934     gets reset to 60 seconds each time there is activity (e.g. each time some
1935     data or other life sign was received).
1936 root 1.198
1937     =over 4
1938    
1939 root 1.199 =item 1. Use a timer and stop, reinitialise and start it on activity.
1940 root 1.198
1941     This is the most obvious, but not the most simple way: In the beginning,
1942     start the watcher:
1943    
1944     ev_timer_init (timer, callback, 60., 0.);
1945     ev_timer_start (loop, timer);
1946    
1947 root 1.199 Then, each time there is some activity, C<ev_timer_stop> it, initialise it
1948     and start it again:
1949 root 1.198
1950     ev_timer_stop (loop, timer);
1951     ev_timer_set (timer, 60., 0.);
1952     ev_timer_start (loop, timer);
1953    
1954 root 1.199 This is relatively simple to implement, but means that each time there is
1955     some activity, libev will first have to remove the timer from its internal
1956     data structure and then add it again. Libev tries to be fast, but it's
1957     still not a constant-time operation.
1958 root 1.198
1959     =item 2. Use a timer and re-start it with C<ev_timer_again> inactivity.
1960    
1961     This is the easiest way, and involves using C<ev_timer_again> instead of
1962     C<ev_timer_start>.
1963    
1964 root 1.199 To implement this, configure an C<ev_timer> with a C<repeat> value
1965     of C<60> and then call C<ev_timer_again> at start and each time you
1966     successfully read or write some data. If you go into an idle state where
1967     you do not expect data to travel on the socket, you can C<ev_timer_stop>
1968     the timer, and C<ev_timer_again> will automatically restart it if need be.
1969    
1970     That means you can ignore both the C<ev_timer_start> function and the
1971     C<after> argument to C<ev_timer_set>, and only ever use the C<repeat>
1972     member and C<ev_timer_again>.
1973 root 1.198
1974     At start:
1975    
1976 root 1.243 ev_init (timer, callback);
1977 root 1.199 timer->repeat = 60.;
1978 root 1.198 ev_timer_again (loop, timer);
1979    
1980 root 1.199 Each time there is some activity:
1981 root 1.198
1982     ev_timer_again (loop, timer);
1983    
1984 root 1.199 It is even possible to change the time-out on the fly, regardless of
1985     whether the watcher is active or not:
1986 root 1.198
1987     timer->repeat = 30.;
1988     ev_timer_again (loop, timer);
1989    
1990     This is slightly more efficient than stopping/starting the timer each time
1991     you want to modify its timeout value, as libev does not have to completely
1992 root 1.199 remove and re-insert the timer from/into its internal data structure.
1993    
1994     It is, however, even simpler than the "obvious" way to do it.
1995 root 1.198
1996     =item 3. Let the timer time out, but then re-arm it as required.
1997    
1998     This method is more tricky, but usually most efficient: Most timeouts are
1999 root 1.199 relatively long compared to the intervals between other activity - in
2000     our example, within 60 seconds, there are usually many I/O events with
2001     associated activity resets.
2002 root 1.198
2003     In this case, it would be more efficient to leave the C<ev_timer> alone,
2004     but remember the time of last activity, and check for a real timeout only
2005     within the callback:
2006    
2007 root 1.387 ev_tstamp timeout = 60.;
2008 root 1.198 ev_tstamp last_activity; // time of last activity
2009 root 1.387 ev_timer timer;
2010 root 1.198
2011     static void
2012     callback (EV_P_ ev_timer *w, int revents)
2013     {
2014 root 1.387 // calculate when the timeout would happen
2015     ev_tstamp after = last_activity - ev_now (EV_A) + timeout;
2016 root 1.198
2017 sf-exg 1.403 // if negative, it means the timeout already occurred
2018 root 1.387 if (after < 0.)
2019 root 1.198 {
2020 sf-exg 1.298 // timeout occurred, take action
2021 root 1.198 }
2022     else
2023     {
2024 root 1.387 // callback was invoked, but there was some recent
2025 root 1.392 // activity. simply restart the timer to time out
2026 root 1.387 // after "after" seconds, which is the earliest time
2027     // the timeout can occur.
2028     ev_timer_set (w, after, 0.);
2029     ev_timer_start (EV_A_ w);
2030 root 1.198 }
2031     }
2032    
2033 root 1.387 To summarise the callback: first calculate in how many seconds the
2034     timeout will occur (by calculating the absolute time when it would occur,
2035     C<last_activity + timeout>, and subtracting the current time, C<ev_now
2036     (EV_A)> from that).
2037    
2038     If this value is negative, then we are already past the timeout, i.e. we
2039     timed out, and need to do whatever is needed in this case.
2040    
2041     Otherwise, we know the earliest time at which the timeout would trigger,
2042     and simply start the timer with this timeout value.
2043    
2044     In other words, each time the callback is invoked it will check whether
2045 sf-exg 1.403 the timeout occurred. If not, it will simply reschedule itself to check
2046 root 1.387 again at the earliest time it could time out. Rinse. Repeat.
2047 root 1.198
2048 root 1.199 This scheme causes more callback invocations (about one every 60 seconds
2049     minus half the average time between activity), but virtually no calls to
2050     libev to change the timeout.
2051    
2052 root 1.387 To start the machinery, simply initialise the watcher and set
2053     C<last_activity> to the current time (meaning there was some activity just
2054     now), then call the callback, which will "do the right thing" and start
2055     the timer:
2056    
2057     last_activity = ev_now (EV_A);
2058     ev_init (&timer, callback);
2059     callback (EV_A_ &timer, 0);
2060 root 1.198
2061 root 1.387 When there is some activity, simply store the current time in
2062 root 1.199 C<last_activity>, no libev calls at all:
2063 root 1.198
2064 root 1.387 if (activity detected)
2065     last_activity = ev_now (EV_A);
2066    
2067     When your timeout value changes, then the timeout can be changed by simply
2068     providing a new value, stopping the timer and calling the callback, which
2069 sf-exg 1.403 will again do the right thing (for example, time out immediately :).
2070 root 1.387
2071     timeout = new_value;
2072     ev_timer_stop (EV_A_ &timer);
2073     callback (EV_A_ &timer, 0);
2074 root 1.198
2075     This technique is slightly more complex, but in most cases where the
2076     time-out is unlikely to be triggered, much more efficient.
2077    
2078 root 1.200 =item 4. Wee, just use a double-linked list for your timeouts.
2079 root 1.199
2080 root 1.200 If there is not one request, but many thousands (millions...), all
2081     employing some kind of timeout with the same timeout value, then one can
2082     do even better:
2083 root 1.199
2084     When starting the timeout, calculate the timeout value and put the timeout
2085     at the I<end> of the list.
2086    
2087     Then use an C<ev_timer> to fire when the timeout at the I<beginning> of
2088     the list is expected to fire (for example, using the technique #3).
2089    
2090     When there is some activity, remove the timer from the list, recalculate
2091     the timeout, append it to the end of the list again, and make sure to
2092     update the C<ev_timer> if it was taken from the beginning of the list.
2093    
2094     This way, one can manage an unlimited number of timeouts in O(1) time for
2095     starting, stopping and updating the timers, at the expense of a major
2096     complication, and having to use a constant timeout. The constant timeout
2097     ensures that the list stays sorted.
2098    
2099 root 1.198 =back
2100    
2101 root 1.200 So which method is the best?
2102 root 1.199
2103 root 1.200 Method #2 is a simple no-brain-required solution that is adequate in most
2104     situations. Method #3 requires a bit more thinking, but handles many cases
2105     better, and isn't very complicated either. In most cases, choosing either
2106     one is fine, with #3 being better in typical situations.
2107 root 1.199
2108     Method #1 is almost always a bad idea, and buys you nothing. Method #4 is
2109     rather complicated, but extremely efficient, something that really pays
2110 root 1.200 off after the first million or so of active timers, i.e. it's usually
2111 root 1.199 overkill :)
2112    
2113 root 1.381 =head3 The special problem of being too early
2114    
2115     If you ask a timer to call your callback after three seconds, then
2116     you expect it to be invoked after three seconds - but of course, this
2117     cannot be guaranteed to infinite precision. Less obviously, it cannot be
2118     guaranteed to any precision by libev - imagine somebody suspending the
2119 root 1.386 process with a STOP signal for a few hours for example.
2120 root 1.381
2121     So, libev tries to invoke your callback as soon as possible I<after> the
2122 sf-exg 1.382 delay has occurred, but cannot guarantee this.
2123 root 1.381
2124     A less obvious failure mode is calling your callback too early: many event
2125     loops compare timestamps with an "elapsed delay >= requested delay", but
2126     this can cause your callback to be invoked much earlier than you would
2127     expect.
2128    
2129     To see why, imagine a system with a clock that only offers full second
2130     resolution (think windows if you can't come up with a broken enough OS
2131     yourself). If you schedule a one-second timer at the time 500.9, then the
2132     event loop will schedule your timeout to elapse at a system time of 500
2133     (500.9 truncated to the resolution) + 1, or 501.
2134    
2135     If an event library looks at the timeout 0.1s later, it will see "501 >=
2136     501" and invoke the callback 0.1s after it was started, even though a
2137     one-second delay was requested - this is being "too early", despite best
2138     intentions.
2139    
2140     This is the reason why libev will never invoke the callback if the elapsed
2141     delay equals the requested delay, but only when the elapsed delay is
2142     larger than the requested delay. In the example above, libev would only invoke
2143     the callback at system time 502, or 1.1s after the timer was started.
2144    
2145     So, while libev cannot guarantee that your callback will be invoked
2146     exactly when requested, it I<can> and I<does> guarantee that the requested
2147     delay has actually elapsed, or in other words, it always errs on the "too
2148     late" side of things.
2149    
2150 root 1.175 =head3 The special problem of time updates
2151    
2152 root 1.383 Establishing the current time is a costly operation (it usually takes
2153     at least one system call): EV therefore updates its idea of the current
2154 root 1.310 time only before and after C<ev_run> collects new events, which causes a
2155 root 1.183 growing difference between C<ev_now ()> and C<ev_time ()> when handling
2156     lots of events in one iteration.
2157 root 1.175
2158 root 1.9 The relative timeouts are calculated relative to the C<ev_now ()>
2159     time. This is usually the right thing as this timestamp refers to the time
2160 root 1.28 of the event triggering whatever timeout you are modifying/starting. If
2161 root 1.175 you suspect event processing to be delayed and you I<need> to base the
2162 root 1.434 timeout on the current time, use something like the following to adjust
2163     for it:
2164 root 1.9
2165 root 1.434 ev_timer_set (&timer, after + (ev_time () - ev_now ()), 0.);
2166 root 1.9
2167 root 1.177 If the event loop is suspended for a long time, you can also force an
2168 root 1.176 update of the time returned by C<ev_now ()> by calling C<ev_now_update
2169 root 1.434 ()>, although that will push the event time of all outstanding events
2170     further into the future.
2171 root 1.176
2172 root 1.383 =head3 The special problem of unsynchronised clocks
2173 root 1.381
2174     Modern systems have a variety of clocks - libev itself uses the normal
2175     "wall clock" clock and, if available, the monotonic clock (to avoid time
2176     jumps).
2177    
2178     These clocks are not synchronised with each other or with any other clock
2179     on the system, so C<ev_time ()> might return a considerably different time
2180     than C<gettimeofday ()> or C<time ()>. On a GNU/Linux system, for example,
2181     a call to C<gettimeofday> might return a second count that is one higher
2182     than a directly following call to C<time>.
2183    
2184     The moral of this is to only compare libev-related timestamps with
2185     C<ev_time ()> and C<ev_now ()>, at least if you want better precision than
2186 sf-exg 1.382 a second or so.
2187 root 1.381
2188     One more problem arises due to this lack of synchronisation: if libev uses
2189     the system monotonic clock and you compare timestamps from C<ev_time>
2190     or C<ev_now> from when you started your timer and when your callback is
2191     invoked, you will find that sometimes the callback is a bit "early".
2192    
2193     This is because C<ev_timer>s work in real time, not wall clock time, so
2194     libev makes sure your callback is not invoked before the delay happened,
2195     I<measured according to the real time>, not the system clock.
2196    
2197     If your timeouts are based on a physical timescale (e.g. "time out this
2198     connection after 100 seconds") then this shouldn't bother you as it is
2199     exactly the right behaviour.
2200    
2201     If you want to compare wall clock/system timestamps to your timers, then
2202     you need to use C<ev_periodic>s, as these are based on the wall clock
2203     time, where your comparisons will always generate correct results.
2204    
2205 root 1.257 =head3 The special problems of suspended animation
2206    
2207     When you leave the server world it is quite customary to hit machines that
2208     can suspend/hibernate - what happens to the clocks during such a suspend?
2209    
2210     Some quick tests made with a Linux 2.6.28 indicate that a suspend freezes
2211     all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>) continue
2212     to run until the system is suspended, but they will not advance while the
2213     system is suspended. That means, on resume, it will be as if the program
2214     was frozen for a few seconds, but the suspend time will not be counted
2215     towards C<ev_timer> when a monotonic clock source is used. The real time
2216     clock advanced as expected, but if it is used as sole clocksource, then a
2217     long suspend would be detected as a time jump by libev, and timers would
2218     be adjusted accordingly.
2219    
2220     I would not be surprised to see different behaviour between different
2221     operating systems, OS versions or even different hardware.
2222    
2223     The other form of suspend (job control, or sending a SIGSTOP) will see a
2224     time jump in the monotonic clocks and the realtime clock. If the program
2225     is suspended for a very long time, and monotonic clock sources are in use,
2226     then you can expect C<ev_timer>s to expire as the full suspension time
2227     will be counted towards the timers. When no monotonic clock source is in
2228     use, then libev will again assume a timejump and adjust accordingly.
2229    
2230     It might be beneficial for this latter case to call C<ev_suspend>
2231     and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
2232     deterministic behaviour in this case (you can do nothing against
2233     C<SIGSTOP>).
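
A possible sketch of such a C<SIGTSTP> handler (untested; the actual stop is
done by re-raising C<SIGSTOP>):

   static void
   sigtstp_cb (EV_P_ ev_signal *w, int revents)
   {
     ev_suspend (EV_A); /* stop the loop's clock before we get stopped */
     raise (SIGSTOP);   /* actually suspend ourselves */
     /* execution continues here after SIGCONT */
     ev_resume (EV_A);  /* the suspended period will not count towards timers */
   }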
2234    
2235 root 1.82 =head3 Watcher-Specific Functions and Data Members
2236    
2237 root 1.1 =over 4
2238    
2239     =item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)
2240    
2241     =item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)
2242    
2243 root 1.443 Configure the timer to trigger after C<after> seconds (fractional and
2244     negative values are supported). If C<repeat> is C<0.>, then it will
2245     automatically be stopped once the timeout is reached. If it is positive,
2246     then the timer will automatically be configured to trigger again C<repeat>
2247     seconds later, again, and again, until stopped manually.
2248 root 1.157
2249     The timer itself will make a best effort to avoid drift, that is, if
2250     you configure a timer to trigger every 10 seconds, then it will normally
2251     trigger at exactly 10 second intervals. If, however, your program cannot
2252     keep up with the timer (because it takes longer than those 10 seconds to
2253     do stuff) the timer will not fire more than once per event loop iteration.
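
For example, a repeating timer that ticks every 10 seconds without
accumulating drift could be set up like this (a sketch; C<tick_cb> and
C<ticker> are made-up names):

   static void
   tick_cb (EV_P_ ev_timer *w, int revents)
   {
     puts ("tick"); /* called (roughly) every 10 seconds */
   }

   ev_timer ticker;
   ev_timer_init (&ticker, tick_cb, 10., 10.);
   ev_timer_start (loop, &ticker);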
2254 root 1.1
2255 root 1.132 =item ev_timer_again (loop, ev_timer *)
2256 root 1.1
2257 root 1.394 This will act as if the timer timed out, and restart it again if it is
2258     repeating. It basically works like calling C<ev_timer_stop>, updating the
2259     timeout to the C<repeat> value and calling C<ev_timer_start>.
2260 root 1.1
2261 root 1.395 The exact semantics are as in the following rules, all of which will be
2262 root 1.394 applied to the watcher:
2263 root 1.1
2264 root 1.394 =over 4
2265    
2266     =item If the timer is pending, the pending status is always cleared.
2267    
2268     =item If the timer is started but non-repeating, stop it (as if it timed
2269     out, without invoking it).
2270 root 1.61
2271 root 1.394 =item If the timer is repeating, make the C<repeat> value the new timeout
2272     and start the timer, if necessary.
2273    
2274     =back
2275 root 1.1
2276 root 1.413 This sounds a bit complicated, see L</Be smart about timeouts>, above, for a
2277 root 1.198 usage example.
2278 root 1.183
2279 root 1.275 =item ev_tstamp ev_timer_remaining (loop, ev_timer *)
2280 root 1.258
2281     Returns the remaining time until a timer fires. If the timer is active,
2282     then this time is relative to the current event loop time, otherwise it's
2283     the timeout value currently configured.
2284    
2285     That is, after an C<ev_timer_set (w, 5, 7)>, C<ev_timer_remaining> returns
2286 sf-exg 1.280 C<5>. When the timer is started and one second passes, C<ev_timer_remaining>
2287 root 1.258 will return C<4>. When the timer expires and is restarted, it will return
2288     roughly C<7> (likely slightly less as callback invocation takes some time,
2289     too), and so on.
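
In code, the sequence above might look like this (a sketch; C<cb> is a
placeholder callback):

   ev_timer w;
   ev_timer_init (&w, cb, 5., 7.);
   ev_timer_remaining (loop, &w); /* returns 5. - not started yet */

   ev_timer_start (loop, &w);
   /* ...one second of event processing later... */
   ev_timer_remaining (loop, &w); /* returns roughly 4. */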
2290    
2291 root 1.48 =item ev_tstamp repeat [read-write]
2292    
2293     The current C<repeat> value. Will be used each time the watcher times out
2294 root 1.183 or C<ev_timer_again> is called, and determines the next timeout (if any),
2295 root 1.48 which is also when any modifications are taken into account.
2296 root 1.1
2297     =back
2298    
2299 root 1.111 =head3 Examples
2300    
2301 root 1.54 Example: Create a timer that fires after 60 seconds.
2302 root 1.34
2303 root 1.164 static void
2304 root 1.198 one_minute_cb (struct ev_loop *loop, ev_timer *w, int revents)
2305 root 1.164 {
2306     .. one minute over, w is actually stopped right here
2307     }
2308    
2309 root 1.198 ev_timer mytimer;
2310 root 1.164 ev_timer_init (&mytimer, one_minute_cb, 60., 0.);
2311     ev_timer_start (loop, &mytimer);
2312 root 1.34
2313 root 1.54 Example: Create a timeout timer that times out after 10 seconds of
2314 root 1.34 inactivity.
2315    
2316 root 1.164 static void
2317 root 1.198 timeout_cb (struct ev_loop *loop, ev_timer *w, int revents)
2318 root 1.164 {
2319     .. ten seconds without any activity
2320     }
2321    
2322 root 1.198 ev_timer mytimer;
2323 root 1.164 ev_timer_init (&mytimer, timeout_cb, 0., 10.); /* note, only repeat used */
2324     ev_timer_again (&mytimer); /* start timer */
2325 root 1.310 ev_run (loop, 0);
2326 root 1.164
2327     // and in some piece of code that gets executed on any "activity":
2328     // reset the timeout to start ticking again at 10 seconds
2329     ev_timer_again (loop, &mytimer);
2330 root 1.34
2331    
2332 root 1.42 =head2 C<ev_periodic> - to cron or not to cron?
2333 root 1.1
2334     Periodic watchers are also timers of a kind, but they are very versatile
2335     (and unfortunately a bit complex).
2336    
2337 root 1.227 Unlike C<ev_timer>, periodic watchers are not based on real time (or
2338     relative time, the physical time that passes) but on wall clock time
2339 root 1.438 (absolute time, the thing you can read on your calendar or clock). The
2340 root 1.227 difference is that wall clock time can run faster or slower than real
2341     time, and time jumps are not uncommon (e.g. when you adjust your
2342     wrist-watch).
2343    
2344     You can tell a periodic watcher to trigger after some specific point
2345     in time: for example, if you tell a periodic watcher to trigger "in 10
2346     seconds" (by specifying e.g. C<ev_now () + 10.>, that is, an absolute time
2347     not a delay) and then reset your system clock to January of the previous
2348     year, then it will take a year or more to trigger the event (unlike an
2349     C<ev_timer>, which would still trigger roughly 10 seconds after starting
2350     it, as it uses a relative timeout).
2351    
2352     C<ev_periodic> watchers can also be used to implement vastly more complex
2353     timers, such as triggering an event on each "midnight, local time", or
2354 root 1.444 other complicated rules. This cannot easily be done with C<ev_timer>
2355     watchers, as those cannot react to time jumps.
2356 root 1.1
2357 root 1.161 As with timers, the callback is guaranteed to be invoked only when the
2358 root 1.230 point in time where it is supposed to trigger has passed. If multiple
2359     timers become ready during the same loop iteration then the ones with
2360     earlier time-out values are invoked before ones with later time-out values
2361 root 1.310 (but this is no longer true when a callback calls C<ev_run> recursively).
2362 root 1.28
2363 root 1.82 =head3 Watcher-Specific Functions and Data Members
2364    
2365 root 1.1 =over 4
2366    
2367 root 1.227 =item ev_periodic_init (ev_periodic *, callback, ev_tstamp offset, ev_tstamp interval, reschedule_cb)
2368 root 1.1
2369 root 1.227 =item ev_periodic_set (ev_periodic *, ev_tstamp offset, ev_tstamp interval, reschedule_cb)
2370 root 1.1
2371 root 1.227 Lots of arguments, let's sort it out... There are basically three modes of
2372 root 1.183 operation, and we will explain them from simplest to most complex:
2373 root 1.1
2374     =over 4
2375    
2376 root 1.227 =item * absolute timer (offset = absolute time, interval = 0, reschedule_cb = 0)
2377 root 1.1
2378 root 1.161 In this configuration the watcher triggers an event after the wall clock
2379 root 1.227 time C<offset> has passed. It will not repeat and will not adjust when a
2380     time jump occurs, that is, if it is to be run at January 1st 2011 then it
2381     will be stopped and invoked when the system clock reaches or surpasses
2382     this point in time.
2383 root 1.1
2384 root 1.227 =item * repeating interval timer (offset = offset within interval, interval > 0, reschedule_cb = 0)
2385 root 1.1
2386     In this mode the watcher will always be scheduled to time out at the next
2387 root 1.227 C<offset + N * interval> time (for some integer N, which can also be
2388     negative) and then repeat, regardless of any time jumps. The C<offset>
2389     argument is merely an offset into the C<interval> periods.
2390 root 1.1
2391 root 1.183 This can be used to create timers that do not drift with respect to the
2392 root 1.227 system clock, for example, here is an C<ev_periodic> that triggers each
2393     hour, on the hour (with respect to UTC):
2394 root 1.1
2395     ev_periodic_set (&periodic, 0., 3600., 0);
2396    
2397     This doesn't mean there will always be 3600 seconds in between triggers,
2398 root 1.161 but only that the callback will be called when the system time shows a
2399 root 1.12 full hour (UTC), or more correctly, when the system time is evenly divisible
2400 root 1.1 by 3600.
2401    
2402     Another way to think about it (for the mathematically inclined) is that
2403 root 1.10 C<ev_periodic> will try to run the callback in this mode at the next possible
2404 root 1.227 time where C<time = offset (mod interval)>, regardless of any time jumps.
2405 root 1.1
2406 root 1.367 The C<interval> I<MUST> be positive, and for numerical stability, the
2407     interval value should be higher than C<1/8192> (which is around 100
2408     microseconds) and C<offset> should be higher than C<0> and should have
2409     at most a similar magnitude as the current time (say, within a factor of
2410     ten). Typical values for offset are, in fact, C<0> or something between
2411     C<0> and C<interval>, which is also the recommended range.
2412 root 1.78
2413 root 1.161 Note also that there is an upper limit to how often a timer can fire (CPU
2414 root 1.158 speed for example), so if C<interval> is very small then timing stability
2415 root 1.161 will of course deteriorate. Libev itself tries to be exact to within about one
2416 root 1.158 millisecond (if the OS supports it and the machine is fast enough).
2417    
2418 root 1.227 =item * manual reschedule mode (offset ignored, interval ignored, reschedule_cb = callback)
2419 root 1.1
2420 root 1.227 In this mode the values for C<interval> and C<offset> are both being
2421 root 1.1 ignored. Instead, each time the periodic watcher gets scheduled, the
2422     reschedule callback will be called with the watcher as first, and the
2423     current time as second argument.
2424    
2425 root 1.227 NOTE: I<This callback MUST NOT stop or destroy any periodic watcher, ever,
2426     or make ANY other event loop modifications whatsoever, unless explicitly
2427     allowed by documentation here>.
2428 root 1.1
2429 root 1.157 If you need to stop it, return C<now + 1e30> (or so, fudge fudge) and stop
2430     it afterwards (e.g. by starting an C<ev_prepare> watcher, which is the
2431     only event loop modification you are allowed to do).
2432    
2433 root 1.198 The callback prototype is C<ev_tstamp (*reschedule_cb)(ev_periodic
2434 root 1.157 *w, ev_tstamp now)>, e.g.:
2435 root 1.1
2436 root 1.198 static ev_tstamp
2437     my_rescheduler (ev_periodic *w, ev_tstamp now)
2438 root 1.1 {
2439     return now + 60.;
2440     }
2441    
2442     It must return the next time to trigger, based on the passed time value
2443     (that is, the lowest time value larger than or equal to the second argument). It
2444     will usually be called just before the callback will be triggered, but
2445     might be called at other times, too.
2446    
2447 root 1.157 NOTE: I<< This callback must always return a time that is higher than or
2448     equal to the passed C<now> value >>.
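
As a rough sketch (untested; C<my_periodic>, C<stop_watcher> and C<done> are
made-up names, and the default loop is assumed), deferring the stop to an
C<ev_prepare> watcher as described above could look like this:

   static ev_periodic my_periodic;
   static ev_prepare stop_watcher; /* ev_prepare_init'ed with stop_cb elsewhere */
   static int done;                /* set by the application when it is time to stop */

   static void
   stop_cb (EV_P_ ev_prepare *w, int revents)
   {
     ev_periodic_stop (EV_A_ &my_periodic); /* now it may be stopped safely */
     ev_prepare_stop (EV_A_ w);
   }

   static ev_tstamp
   my_rescheduler (ev_periodic *w, ev_tstamp now)
   {
     if (done)
       {
         ev_prepare_start (EV_DEFAULT_ &stop_watcher); /* the only allowed modification */
         return now + 1e30; /* far, far in the future */
       }

     return now + 60.;
   }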
2449 root 1.18
2450 root 1.1 This can be used to create very complex timers, such as a timer that
2451 root 1.444 triggers on "next midnight, local time". To do this, you would calculate
2452     the next midnight after C<now> and return the timestamp value for
2453     this. Here is a (completely untested, no error checking) example on how to
2454     do this:
2455    
2456     #include <time.h>
2457    
2458     static ev_tstamp
2459     my_rescheduler (ev_periodic *w, ev_tstamp now)
2460     {
2461     time_t tnow = (time_t)now;
2462     struct tm tm;
2463     localtime_r (&tnow, &tm);
2464    
2465     tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day
2466     ++tm.tm_mday; // midnight next day
2467    
2468     return mktime (&tm);
2469     }
2470    
2471     Note: this code might run into trouble on days that have more than two
2472     midnights (beginning and end).
2473 root 1.1
2474     =back
2475    
2476     =item ev_periodic_again (loop, ev_periodic *)
2477    
2478     Simply stops and restarts the periodic watcher again. This is only useful
2479     when you changed some parameters or the reschedule callback would return
2480     a different time than the last time it was called (e.g. in a crond-like
2481     program when the crontabs have changed).
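
For example (a sketch, assuming C<w> points to a started interval-mode
C<ev_periodic>), changing the interval of a running watcher might look like
this:

   w->interval = 60.;           /* switch to a one-minute interval... */
   ev_periodic_again (loop, w); /* ...and make the change effective */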
2482    
2483 root 1.149 =item ev_tstamp ev_periodic_at (ev_periodic *)
2484    
2485 root 1.227 When active, returns the absolute time that the watcher is supposed
2486     to trigger next. This is not the same as the C<offset> argument to
2487     C<ev_periodic_set>, but indeed works even in interval and manual
2488     rescheduling modes.
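
For example (a sketch, assuming C<w> points to an active C<ev_periodic>),
one could log when the watcher will trigger next:

   printf ("next trigger at %f, i.e. in %f seconds\n",
           ev_periodic_at (w),
           ev_periodic_at (w) - ev_now (loop));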
2489 root 1.149
2490 root 1.78 =item ev_tstamp offset [read-write]
2491    
2492     When repeating, this contains the offset value, otherwise this is the
2493 root 1.227 absolute point in time (the C<offset> value passed to C<ev_periodic_set>,
2494     although libev might modify this value for better numerical stability).
2495 root 1.78
2496     Can be modified any time, but changes only take effect when the periodic
2497     timer fires or C<ev_periodic_again> is being called.
2498    
2499 root 1.48 =item ev_tstamp interval [read-write]
2500    
2501     The current interval value. Can be modified any time, but changes only
2502     take effect when the periodic timer fires or C<ev_periodic_again> is being
2503     called.
2504    
2505 root 1.198 =item ev_tstamp (*reschedule_cb)(ev_periodic *w, ev_tstamp now) [read-write]
2506 root 1.48
2507     The current reschedule callback, or C<0>, if this functionality is
2508     switched off. Can be changed any time, but changes only take effect when
2509     the periodic timer fires or C<ev_periodic_again> is being called.
2510    
2511 root 1.1 =back
2512    
2513 root 1.111 =head3 Examples
2514    
2515 root 1.54 Example: Call a callback every hour, or, more precisely, whenever the
2516 root 1.183 system time is divisible by 3600. The callback invocation times have
2517 root 1.161 potentially a lot of jitter, but good long-term stability.
2518 root 1.34
2519 root 1.164 static void
2520 root 1.301 clock_cb (struct ev_loop *loop, ev_periodic *w, int revents)
2521 root 1.164 {
2522     ... it's now a full hour (UTC, or TAI or whatever your clock follows)
2523     }
2524    
2525 root 1.198 ev_periodic hourly_tick;
2526 root 1.164 ev_periodic_init (&hourly_tick, clock_cb, 0., 3600., 0);
2527     ev_periodic_start (loop, &hourly_tick);
2528 root 1.34
2529 root 1.54 Example: The same as above, but use a reschedule callback to do it:
2530 root 1.34
2531 root 1.164 #include <math.h>
2532 root 1.34
2533 root 1.164 static ev_tstamp
2534 root 1.198 my_scheduler_cb (ev_periodic *w, ev_tstamp now)
2535 root 1.164 {
2536 root 1.183 return now + (3600. - fmod (now, 3600.));
2537 root 1.164 }
2538 root 1.34
2539 root 1.164 ev_periodic_init (&hourly_tick, clock_cb, 0., 0., my_scheduler_cb);
2540 root 1.34
2541 root 1.54 Example: Call a callback every hour, starting now:
2542 root 1.34
2543 root 1.198 ev_periodic hourly_tick;
2544 root 1.164 ev_periodic_init (&hourly_tick, clock_cb,
2545     fmod (ev_now (loop), 3600.), 3600., 0);
2546     ev_periodic_start (loop, &hourly_tick);
2547 root 1.432
2548 root 1.34
2549 root 1.42 =head2 C<ev_signal> - signal me when a signal gets signalled!
2550 root 1.1
2551     Signal watchers will trigger an event when the process receives a specific
2552     signal one or more times. Even though signals are very asynchronous, libev
2553 root 1.340 will try its best to deliver signals synchronously, i.e. as part of the
2554 root 1.1 normal event processing, like any other event.
2555    
2556 root 1.260 If you want signals to be delivered truly asynchronously, just use
2557     C<sigaction> as you would do without libev and forget about sharing
2558     the signal. You can even use C<ev_async> from a signal handler to
2559     synchronously wake up an event loop.
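
A minimal sketch of that technique (not from the manual; C<async_watcher> is
assumed to have been initialised and started on the default loop elsewhere):

   static ev_async async_watcher;

   static void
   sigusr1_handler (int signum)
   {
     ev_async_send (EV_DEFAULT_ &async_watcher); /* safe from signal context */
   }

   ...
   struct sigaction sa;
   sa.sa_handler = sigusr1_handler;
   sigemptyset (&sa.sa_mask);
   sa.sa_flags = SA_RESTART;
   sigaction (SIGUSR1, &sa, 0); /* libev is not involved with this signal */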
2560    
2561     You can configure as many watchers as you like for the same signal, but
2562     only within the same loop, i.e. you can watch for C<SIGINT> in your
2563     default loop and for C<SIGIO> in another loop, but you cannot watch for
2564     C<SIGINT> in both the default loop and another loop at the same time. At
2565     the moment, C<SIGCHLD> is permanently tied to the default loop.
2566    
2567 root 1.431 Only after the first watcher for a signal is started will libev actually
2568     register something with the kernel. It thus coexists with your own signal
2569     handlers as long as you don't register any with libev for the same signal.
2570 root 1.259
2571 root 1.135 If possible and supported, libev will install its handlers with
2572 root 1.259 C<SA_RESTART> (or equivalent) behaviour enabled, so system calls should
2573     not be unduly interrupted. If you have a problem with system calls getting
2574     interrupted by signals you can block all signals in an C<ev_check> watcher
2575     and unblock them in an C<ev_prepare> watcher.
2576 root 1.135
2577 root 1.277 =head3 The special problem of inheritance over fork/execve/pthread_create
2578 root 1.265
2579     Both the signal mask (C<sigprocmask>) and the signal disposition
2580     (C<sigaction>) are unspecified after starting a signal watcher (and after
2581     stopping it again), that is, libev might or might not block the signal,
2582 root 1.360 and might or might not set or restore the installed signal handler (but
2583     see C<EVFLAG_NOSIGMASK>).
2584 root 1.265
2585     While this does not matter for the signal disposition (libev never
2586     sets signals to C<SIG_IGN>, so handlers will be reset to C<SIG_DFL> on
2587     C<execve>), this matters for the signal mask: many programs do not expect
2588 root 1.266 certain signals to be blocked.
2589 root 1.265
2590     This means that before calling C<exec> (from the child) you should reset
2591     the signal mask to whatever "default" you expect (all clear is a good
2592     choice usually).
2593    
2594 root 1.267 The simplest way to ensure that the signal mask is reset in the child is
2595     to install a fork handler with C<pthread_atfork> that resets it. That will
2596     catch fork calls done by libraries (such as the libc) as well.
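
A sketch of such a handler (untested) could look like this:

   static void
   reset_sigmask_in_child (void)
   {
     sigset_t empty;

     sigemptyset (&empty);
     sigprocmask (SIG_SETMASK, &empty, 0);
   }

   ...
   pthread_atfork (0, 0, reset_sigmask_in_child); /* runs in the child after fork */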
2597    
2598 root 1.277 In current versions of libev, the signal will not be blocked indefinitely
2599     unless you use the C<signalfd> API (C<EV_SIGNALFD>). While this reduces
2600     the window of opportunity for problems, it will not go away, as libev
2601     I<has> to modify the signal mask, at least temporarily.
2602    
2603 root 1.278 So I can't stress this enough: I<If you do not reset your signal mask when
2604     you expect it to be empty, you have a race condition in your code>. This
2605     is not a libev-specific thing, this is true for most event libraries.
2606 root 1.266
2607 root 1.349 =head3 The special problem of threads signal handling
2608    
2609     POSIX threads have problematic signal handling semantics; specifically,
2610     a lot of functionality (signalfd, sigwait etc.) only really works if all
2611     threads in a process block signals, which is hard to achieve.
2612    
2613     When you want to use sigwait (or mix libev signal handling with your own
2614     for the same signals), you can tackle this problem by globally blocking
2615     all signals before creating any threads (or creating them with a fully set
2616     sigprocmask) and also specifying the C<EVFLAG_NOSIGMASK> when creating
2617     loops. Then designate one thread as "signal receiver thread" which handles
2618     these signals. You can pass on any signals that libev might be interested
2619     in by calling C<ev_feed_signal>.
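
The signal receiver thread could be implemented along these lines (a sketch
only, error handling omitted; the thread is assumed to be started with
C<pthread_create> after all signals have been blocked):

   static void *
   signal_thread (void *arg)
   {
     sigset_t all;

     sigfillset (&all);

     for (;;)
       {
         int signum;

         if (sigwait (&all, &signum) == 0)
           ev_feed_signal (signum); /* hand the signal over to libev */
       }

     return 0;
   }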
2620    
2621 root 1.82 =head3 Watcher-Specific Functions and Data Members
2622    
2623 root 1.1 =over 4
2624    
2625     =item ev_signal_init (ev_signal *, callback, int signum)
2626    
2627     =item ev_signal_set (ev_signal *, int signum)
2628    
2629     Configures the watcher to trigger on the given signal number (usually one
2630     of the C<SIGxxx> constants).
2631    
2632 root 1.48 =item int signum [read-only]
2633    
2634     The signal the watcher watches out for.
2635    
2636 root 1.1 =back
2637    
2638 root 1.132 =head3 Examples
2639    
2640 root 1.188 Example: Try to exit cleanly on SIGINT.
2641 root 1.132
2642 root 1.164 static void
2643 root 1.198 sigint_cb (struct ev_loop *loop, ev_signal *w, int revents)
2644 root 1.164 {
2645 root 1.310 ev_break (loop, EVBREAK_ALL);
2646 root 1.164 }
2647    
2648 root 1.198 ev_signal signal_watcher;
2649 root 1.164 ev_signal_init (&signal_watcher, sigint_cb, SIGINT);
2650 root 1.188 ev_signal_start (loop, &signal_watcher);
2651 root 1.132
2652 root 1.35
2653 root 1.42 =head2 C<ev_child> - watch out for process status changes
2654 root 1.1
2655     Child watchers trigger when your process receives a SIGCHLD in response to
2656 root 1.183 some child status changes (most typically when a child of yours dies or
2657     exits). It is permissible to install a child watcher I<after> the child
2658     has been forked (which implies it might have already exited), as long
2659     as the event loop isn't entered (or is continued from a watcher), i.e.,
2660     forking and then immediately registering a watcher for the child is fine,
2661 root 1.244 but forking and registering a watcher a few event loop iterations later or
2662     in the next callback invocation is not.
2663 root 1.134
2664     Only the default event loop is capable of handling signals, and therefore
2665 root 1.161 you can only register child watchers in the default event loop.
2666 root 1.134
2667 root 1.248 Due to some design glitches inside libev, child watchers will always be
2668 root 1.249 handled at maximum priority (their priority is set to C<EV_MAXPRI> by
2669     libev).
2670 root 1.248
2671 root 1.134 =head3 Process Interaction
2672    
2673     Libev grabs C<SIGCHLD> as soon as the default event loop is
2674 root 1.259 initialised. This is necessary to guarantee proper behaviour even if the
2675     first child watcher is started after the child exits. The occurrence
2676 root 1.134 of C<SIGCHLD> is recorded asynchronously, but child reaping is done
2677     synchronously as part of the event loop processing. Libev always reaps all
2678     children, even ones not watched.
2679    
2680     =head3 Overriding the Built-In Processing
2681    
2682     Libev offers no special support for overriding the built-in child
2683     processing, but if your application collides with libev's default child
2684     handler, you can override it easily by installing your own handler for
2685     C<SIGCHLD> after initialising the default loop, and making sure the
2686     default loop never gets destroyed. You are encouraged, however, to use an
2687     event-based approach to child reaping and thus use libev's support for
2688     that, so other libev users can use C<ev_child> watchers freely.
2689 root 1.1
2690 root 1.173 =head3 Stopping the Child Watcher
2691    
2692     Currently, the child watcher never gets stopped, even when the
2693     child terminates, so normally one needs to stop the watcher in the
2694     callback. Future versions of libev might stop the watcher automatically
2695 root 1.259 when a child exit is detected (calling C<ev_child_stop> twice is not a
2696     problem).
2697 root 1.173
2698 root 1.82 =head3 Watcher-Specific Functions and Data Members
2699    
2700 root 1.1 =over 4
2701    
2702 root 1.120 =item ev_child_init (ev_child *, callback, int pid, int trace)
2703 root 1.1
2704 root 1.120 =item ev_child_set (ev_child *, int pid, int trace)
2705 root 1.1
2706     Configures the watcher to wait for status changes of process C<pid> (or
2707     I<any> process if C<pid> is specified as C<0>). The callback can look
2708     at the C<rstatus> member of the C<ev_child> watcher structure to see
2709 root 1.14 the status word (use the macros from C<sys/wait.h> and see your systems
2710     C<waitpid> documentation). The C<rpid> member contains the pid of the
2711 root 1.120 process causing the status change. C<trace> must be either C<0> (only
2712     activate the watcher when the process terminates) or C<1> (additionally
2713     activate the watcher when the process is stopped or continued).
2714 root 1.1
2715 root 1.48 =item int pid [read-only]
2716    
2717     The process id this watcher watches out for, or C<0>, meaning any process id.
2718    
2719     =item int rpid [read-write]
2720    
2721     The process id that detected a status change.
2722    
2723     =item int rstatus [read-write]
2724    
2725     The process exit/trace status caused by C<rpid> (see your systems
2726     C<waitpid> and C<sys/wait.h> documentation for details).
2727    
2728 root 1.1 =back
2729    
2730 root 1.134 =head3 Examples
2731    
2732     Example: C<fork()> a new process and install a child handler to wait for
2733     its completion.
2734    
2735 root 1.164 ev_child cw;
2736    
2737     static void
2738 root 1.198 child_cb (EV_P_ ev_child *w, int revents)
2739 root 1.164 {
2740     ev_child_stop (EV_A_ w);
2741     printf ("process %d exited with status %x\n", w->rpid, w->rstatus);
2742     }
2743    
2744     pid_t pid = fork ();
2745 root 1.134
2746 root 1.164 if (pid < 0)
2747     // error
2748     else if (pid == 0)
2749     {
2750     // the forked child executes here
2751     exit (1);
2752     }
2753     else
2754     {
2755     ev_child_init (&cw, child_cb, pid, 0);
2756     ev_child_start (EV_DEFAULT_ &cw);
2757     }
2758 root 1.134
2759 root 1.34
2760 root 1.48 =head2 C<ev_stat> - did the file attributes just change?
2761    
2762 root 1.161 This watches a file system path for attribute changes. That is, it calls
2763 root 1.207 C<stat> on that path in regular intervals (or when the OS says it changed)
2764 root 1.425 and sees if it changed compared to the last time, invoking the callback
2765     if it did. Starting the watcher C<stat>'s the file, so only changes that
2766     happen after the watcher has been started will be reported.
2767 root 1.48
2768     The path does not need to exist: changing from "path exists" to "path does
2769 root 1.211 not exist" is a status change like any other. The condition "path does not
2770     exist" (or more correctly "path cannot be stat'ed") is signified by the
2771     C<st_nlink> field being zero (which is otherwise always forced to be at
2772     least one) and all the other fields of the stat buffer having unspecified
2773     contents.
2774 root 1.48
2775 root 1.207 The path I<must not> end in a slash or contain special components such as
2776     C<.> or C<..>. The path I<should> be absolute: If it is relative and
2777     your working directory changes, then the behaviour is undefined.
2778    
2779     Since there is no portable change notification interface available, the
2780     portable implementation simply calls C<stat(2)> regularly on the path
2781     to see if it changed somehow. You can specify a recommended polling
2782     interval for this case. If you specify a polling interval of C<0> (highly
2783     recommended!) then a I<suitable, unspecified default> value will be used
2784     (which you can expect to be around five seconds, although this might
2785     change dynamically). Libev will also impose a minimum interval which is
2786 root 1.208 currently around C<0.1>, but that's usually overkill.
2787 root 1.48
2788     This watcher type is not meant for massive numbers of stat watchers,
2789     as even with OS-supported change notifications, this can be
2790     resource-intensive.
2791    
2792 root 1.183 At the time of this writing, the only OS-specific interface implemented
2793 root 1.211 is the Linux inotify interface (implementing kqueue support is left as an
2794     exercise for the reader. Note, however, that the author sees no way of
2795     implementing C<ev_stat> semantics with kqueue, except as a hint).
2796 root 1.48
2797 root 1.137 =head3 ABI Issues (Largefile Support)
2798    
2799     Libev by default (unless the user overrides this) uses the default
2800 root 1.169 compilation environment, which means that on systems with large file
2801     support disabled by default, you get the 32 bit version of the stat
2802 root 1.137 structure. When using the library from programs that change the ABI to
2803     use 64 bit file offsets the programs will fail. In that case you have to
2804     compile libev with the same flags to get binary compatibility. This is
2805     obviously the case with any flags that change the ABI, but the problem is
2806 root 1.207 most noticeably displayed with ev_stat and large file support.
2807 root 1.169
2808     The solution for this is to lobby your distribution maker to make large
2809     file interfaces available by default (as e.g. FreeBSD does) and not
2810     optional. Libev cannot simply switch on large file support because it has
2811     to exchange stat structures with application programs compiled using the
2812     default compilation environment.
2813 root 1.137
2814 root 1.183 =head3 Inotify and Kqueue
2815 root 1.108
2816 root 1.211 When C<inotify (7)> support has been compiled into libev and present at
2817     runtime, it will be used to speed up change detection where possible. The
2818     inotify descriptor will be created lazily when the first C<ev_stat>
2819     watcher is being started.
2820 root 1.108
2821 root 1.147 Inotify presence does not change the semantics of C<ev_stat> watchers
2822 root 1.108 except that changes might be detected earlier, and in some cases, to avoid
2823 root 1.147 making regular C<stat> calls. Even in the presence of inotify support
2824 root 1.183 there are many cases where libev has to resort to regular C<stat> polling,
2825 root 1.211 but as long as kernel 2.6.25 or newer is used (2.6.24 and older have too
2826     many bugs), the path exists (i.e. stat succeeds), and the path resides on
2827     a local filesystem (libev currently assumes only ext2/3, jfs, reiserfs and
2828     xfs are fully working) libev usually gets away without polling.
2829 root 1.108
2830 root 1.183 There is no support for kqueue, as apparently it cannot be used to
2831 root 1.108 implement this functionality, due to the requirement of having a file
2832 root 1.183 descriptor open on the object at all times, and detecting renames, unlinks
2833     etc. is difficult.
2834 root 1.108
2835 root 1.212 =head3 C<stat ()> is a synchronous operation
2836    
2837     Libev doesn't normally do any kind of I/O itself, and so is not blocking
2838     the process. The exception are C<ev_stat> watchers - those call C<stat
2839     ()>, which is a synchronous operation.
2840    
2841     For local paths, this usually doesn't matter: unless the system is very
2842     busy or the intervals between stat's are large, a stat call will be fast,
2843 root 1.222 as the path data is usually in memory already (except when starting the
2844 root 1.212 watcher).
2845    
2846     For networked file systems, calling C<stat ()> can block an indefinite
2847     time due to network issues, and even under good conditions, a stat call
2848     often takes multiple milliseconds.
2849    
2850     Therefore, it is best to avoid using C<ev_stat> watchers on networked
2851     paths, although this is fully supported by libev.
2852    
2853 root 1.107 =head3 The special problem of stat time resolution
2854    
2855 root 1.207 The C<stat ()> system call only supports full-second resolution portably,
2856     and even on systems where the resolution is higher, most file systems
2857     still only support whole seconds.
2858 root 1.107
2859 root 1.150 That means that, if the time is the only thing that changes, you can
2860     easily miss updates: on the first update, C<ev_stat> detects a change and
2861     calls your callback, which does something. When there is another update
2862 root 1.183 within the same second, C<ev_stat> will be unable to detect it unless the
2863     stat data does change in other ways (e.g. file size).
2864 root 1.150
2865     The solution to this is to delay acting on a change for slightly more
2866 root 1.155 than a second (or till slightly after the next full second boundary), using
2867 root 1.150 a roughly one-second-delay C<ev_timer> (e.g. C<ev_timer_set (w, 0., 1.02);
2868     ev_timer_again (loop, w)>).
2869    
2870     The C<.02> offset is added to work around small timing inconsistencies
2871     of some operating systems (where the second counter of the current time
2872     might be delayed. One such system is the Linux kernel, where a call to
2873     C<gettimeofday> might return a timestamp a full second later than
2874     a subsequent C<time> call - if the equivalent of C<time ()> is used to
2875     update file times then there will be a small window where the kernel uses
2876     the previous second to update file times but libev might already execute
2877     the timer callback).
2878 root 1.107
2879 root 1.82 =head3 Watcher-Specific Functions and Data Members
2880    
2881 root 1.48 =over 4
2882    
2883     =item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)
2884    
2885     =item ev_stat_set (ev_stat *, const char *path, ev_tstamp interval)
2886    
2887     Configures the watcher to wait for status changes of the given
2888     C<path>. The C<interval> is a hint on how quickly a change is expected to
2889     be detected and should normally be specified as C<0> to let libev choose
2890     a suitable value. The memory pointed to by C<path> must point to the same
2891     path for as long as the watcher is active.
2892    
2893 root 1.183 The callback will receive an C<EV_STAT> event when a change was detected,
2894     relative to the attributes at the time the watcher was started (or the
2895     last change was detected).
2896 root 1.48
2897 root 1.132 =item ev_stat_stat (loop, ev_stat *)
2898 root 1.48
2899     Updates the stat buffer immediately with new values. If you change the
2900 root 1.150 watched path in your callback, you could call this function to avoid
2901     detecting this change (while introducing a race condition if you are not
2902     the only one changing the path). Can also be useful simply to find out the
2903     new values.
2904 root 1.48
2905     =item ev_statdata attr [read-only]
2906    
2907 root 1.150 The most-recently detected attributes of the file. Although the type is
2908 root 1.48 C<ev_statdata>, this is usually the (or one of the) C<struct stat> types
2909 root 1.150 suitable for your system, but you can only rely on the POSIX-standardised
2910     members to be present. If the C<st_nlink> member is C<0>, then there was
2911     some error while C<stat>ing the file.
2912 root 1.48
2913     =item ev_statdata prev [read-only]
2914    
2915     The previous attributes of the file. The callback gets invoked whenever
2916 root 1.150 C<prev> != C<attr>, or, more precisely, one or more of these members
2917     differ: C<st_dev>, C<st_ino>, C<st_mode>, C<st_nlink>, C<st_uid>,
2918     C<st_gid>, C<st_rdev>, C<st_size>, C<st_atime>, C<st_mtime>, C<st_ctime>.
2919 root 1.48
2920     =item ev_tstamp interval [read-only]
2921    
2922     The specified interval.
2923    
2924     =item const char *path [read-only]
2925    
2926 root 1.161 The file system path that is being watched.
2927 root 1.48
2928     =back
2929    
2930 root 1.108 =head3 Examples
2931    
2932 root 1.48 Example: Watch C</etc/passwd> for attribute changes.
2933    
2934 root 1.164 static void
2935     passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)
2936     {
2937     /* /etc/passwd changed in some way */
2938     if (w->attr.st_nlink)
2939     {
2940     printf ("passwd current size %ld\n", (long)w->attr.st_size);
2941     printf ("passwd current atime %ld\n", (long)w->attr.st_atime);
2942     printf ("passwd current mtime %ld\n", (long)w->attr.st_mtime);
2943     }
2944     else
2945     /* you shalt not abuse printf for puts */
2946     puts ("wow, /etc/passwd is not there, expect problems. "
2947     "if this is windows, they already arrived\n");
2948     }
2949 root 1.48
2950 root 1.164 ...
2951     ev_stat passwd;
2952 root 1.48
2953 root 1.164 ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
2954     ev_stat_start (loop, &passwd);
2955 root 1.107
2956     Example: Like above, but additionally use a one-second delay so we do not
2957     miss updates (however, frequent updates will delay processing, too, so
2958     one might do the work both on C<ev_stat> callback invocation I<and> on
2959     C<ev_timer> callback invocation).
2960    
2961 root 1.164 static ev_stat passwd;
2962     static ev_timer timer;
2963 root 1.107
2964 root 1.164 static void
2965     timer_cb (EV_P_ ev_timer *w, int revents)
2966     {
2967     ev_timer_stop (EV_A_ w);
2968    
2969     /* now it's one second after the most recent passwd change */
2970     }
2971    
2972     static void
2973     stat_cb (EV_P_ ev_stat *w, int revents)
2974     {
2975     /* reset the one-second timer */
2976     ev_timer_again (EV_A_ &timer);
2977     }
2978    
2979     ...
2980     ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
2981     ev_stat_start (loop, &passwd);
2982     ev_timer_init (&timer, timer_cb, 0., 1.02);
2983 root 1.48
2984    
2985 root 1.42 =head2 C<ev_idle> - when you've got nothing better to do...
2986 root 1.1
2987 root 1.67 Idle watchers trigger events when no other events of the same or higher
2988 root 1.183 priority are pending (prepare, check and other idle watchers do not count
2989     as receiving "events").
2990 root 1.67
2991     That is, as long as your process is busy handling sockets or timeouts
2992     (or even signals, imagine) of the same or higher priority it will not be
2993     triggered. But when your process is idle (or only lower-priority watchers
2994     are pending), the idle watchers are being called once per event loop
2995     iteration - until stopped, that is, or your process receives more events
2996     and becomes busy again with higher priority stuff.
2997 root 1.1
2998     The most noteworthy effect is that as long as any idle watchers are
2999     active, the process will not block when waiting for new events.
3000    
3001     Apart from keeping your process non-blocking (which is a useful
3002     effect on its own sometimes), idle watchers are a good place to do
3003     "pseudo-background processing", or delay processing stuff to after the
3004     event loop has handled all outstanding events.
3005    
3006 root 1.406 =head3 Abusing an C<ev_idle> watcher for its side-effect
3007    
3008     As long as there is at least one active idle watcher, libev will never
3009     sleep unnecessarily. Or in other words, it will loop as fast as possible.
3010     For this to work, the idle watcher doesn't need to be invoked at all - the
3011     lowest priority will do.
3012    
3013     This mode of operation can be useful together with an C<ev_check> watcher,
3014     to do something on each event loop iteration - for example to balance load
3015     between different connections.
3016    
3017 root 1.415 See L</Abusing an ev_check watcher for its side-effect> for a longer
3018 root 1.406 example.
3019    
3020 root 1.82 =head3 Watcher-Specific Functions and Data Members
3021    
3022 root 1.1 =over 4
3023    
3024 root 1.226 =item ev_idle_init (ev_idle *, callback)
3025 root 1.1
3026     Initialises and configures the idle watcher - it has no parameters of any
3027     kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
3028     believe me.
3029    
3030     =back
3031    
3032 root 1.111 =head3 Examples
3033    
3034 root 1.54 Example: Dynamically allocate an C<ev_idle> watcher, start it, and in the
3035     callback, free it. Also, use no error checking, as usual.
3036 root 1.34
3037 root 1.164 static void
3038 root 1.198 idle_cb (struct ev_loop *loop, ev_idle *w, int revents)
3039 root 1.164 {
3040 root 1.407 // stop the watcher
3041     ev_idle_stop (loop, w);
3042    
3043     // now we can free it
3044 root 1.164 free (w);
3045 root 1.407
3046 root 1.164 // now do something you wanted to do when the program has
3047     // no longer anything immediate to do.
3048     }
3049    
3050 root 1.198 ev_idle *idle_watcher = malloc (sizeof (ev_idle));
3051 root 1.164 ev_idle_init (idle_watcher, idle_cb);
3052 root 1.242 ev_idle_start (loop, idle_watcher);
3053 root 1.34
3054    
3055 root 1.42 =head2 C<ev_prepare> and C<ev_check> - customise your event loop!
3056 root 1.1
3057 root 1.406 Prepare and check watchers are often (but not always) used in pairs:
3058 root 1.20 prepare watchers get invoked before the process blocks and check watchers
3059 root 1.14 afterwards.
3060 root 1.1
3061 root 1.433 You I<must not> call C<ev_run> (or similar functions that enter the
3062     current event loop) or C<ev_loop_fork> from either C<ev_prepare> or
3063     C<ev_check> watchers. Other loops than the current one are fine,
3064     however. The rationale behind this is that you do not need to check
3065     for recursion in those watchers, i.e. the sequence will always be
3066     C<ev_prepare>, blocking, C<ev_check> so if you have one watcher of each
3067     kind they will always be called in pairs bracketing the blocking call.
3068 root 1.45
3069 root 1.35 Their main purpose is to integrate other event mechanisms into libev and
3070 root 1.183 their use is somewhat advanced. They could be used, for example, to track
3071 root 1.35 variable changes, implement your own watchers, integrate net-snmp or a
3072 root 1.45 coroutine library and lots more. They are also occasionally useful if
3073     you cache some data and want to flush it before blocking (for example,
3074     in X programs you might want to do an C<XFlush ()> in an C<ev_prepare>
3075     watcher).
3076 root 1.1
3077 root 1.183 This is done by examining in each prepare call which file descriptors
3078     need to be watched by the other library, registering C<ev_io> watchers
3079     for them and starting an C<ev_timer> watcher for any timeouts (many
3080     libraries provide exactly this functionality). Then, in the check watcher,
3081     you check for any events that occurred (by checking the pending status
3082     of all watchers and stopping them) and call back into the library. The
3083     I/O and timer callbacks will never actually be called (but must be valid
3084     nevertheless, because you never know, you know?).
3085 root 1.1
3086 root 1.14 As another example, the Perl Coro module uses these hooks to integrate
3087 root 1.1 coroutines into libev programs, by yielding to other active coroutines
3088     during each prepare and only letting the process block if no coroutines
3089 root 1.20 are ready to run (it's actually more complicated: it only runs coroutines
3090     with priority higher than or equal to the event loop and one coroutine
3091     of lower priority, but only once, using idle watchers to keep the event
3092     loop from blocking if lower-priority coroutines are active, thus mapping
3093     low-priority coroutines to idle/background tasks).
3094 root 1.1
3095 root 1.406 When used for this purpose, it is recommended to give C<ev_check> watchers
3096     highest (C<EV_MAXPRI>) priority, to ensure that they are being run before
3097     any other watchers after the poll (this doesn't matter for C<ev_prepare>
3098     watchers).
3099 root 1.183
3100     Also, C<ev_check> watchers (and C<ev_prepare> watchers, too) should not
3101     activate ("feed") events into libev. While libev fully supports this, they
3102     might get executed before other C<ev_check> watchers did their job. As
3103     C<ev_check> watchers are often used to embed other (non-libev) event
3104     loops those other event loops might be in an unusable state until their
3105     C<ev_check> watcher ran (always remind yourself to coexist peacefully with
3106     others).
3107 root 1.77
3108 root 1.406 =head3 Abusing an C<ev_check> watcher for its side-effect
3109    
3110     C<ev_check> (and less often also C<ev_prepare>) watchers can also be
3111     useful because they are called once per event loop iteration. For
3112     example, if you want to handle a large number of connections fairly, you
3113     normally only do a bit of work for each active connection, and if there
3114     is more work to do, you wait for the next event loop iteration, so other
3115     connections have a chance of making progress.
3116    
3117     Using an C<ev_check> watcher is almost enough: it will be called on the
3118     next event loop iteration. However, that isn't as soon as possible -
3119     without external events, your C<ev_check> watcher will not be invoked.
3120    
3121     This is where C<ev_idle> watchers come in handy - all you need is a
3122     single global idle watcher that is active as long as you have one active
3123     C<ev_check> watcher. The C<ev_idle> watcher makes sure the event loop
3124     will not sleep, and the C<ev_check> watcher makes sure a callback gets
3125     invoked. Neither watcher alone can do that.
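
As a rough sketch (C<work_pending> and C<process_some_work> are made-up
application helpers), the combination could look like this:

   static ev_idle keepalive_idle;
   static ev_check work_check;

   static void
   keepalive_idle_cb (EV_P_ ev_idle *w, int revents)
   {
     /* do nothing - this watcher only keeps the loop from blocking */
   }

   static void
   work_check_cb (EV_P_ ev_check *w, int revents)
   {
     process_some_work (); /* do a bounded amount of work per iteration */

     if (!work_pending ())
       {
         /* no more work - let the loop block again */
         ev_idle_stop  (EV_A_ &keepalive_idle);
         ev_check_stop (EV_A_ &work_check);
       }
   }

   // when work arrives, start both watchers:
   ev_idle_init (&keepalive_idle, keepalive_idle_cb);
   ev_set_priority (&keepalive_idle, EV_MINPRI);
   ev_idle_start (loop, &keepalive_idle);

   ev_check_init (&work_check, work_check_cb);
   ev_check_start (loop, &work_check);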
3126    
3127 root 1.82 =head3 Watcher-Specific Functions and Data Members
3128    
3129 root 1.1 =over 4
3130    
3131     =item ev_prepare_init (ev_prepare *, callback)
3132    
3133     =item ev_check_init (ev_check *, callback)
3134    
3135     Initialises and configures the prepare or check watcher - they have no
3136     parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
3137 root 1.183 macros, but using them is utterly, utterly, utterly and completely
3138     pointless.
3139 root 1.1
3140     =back
3141    
3142 root 1.111 =head3 Examples
3143    
3144 root 1.76 There are a number of principal ways to embed other event loops or modules
3145     into libev. Here are some ideas on how to include libadns into libev
3146     (there is a Perl module named C<EV::ADNS> that does this, which you could
3147 root 1.150 use as a working example. Another Perl module named C<EV::Glib> embeds a
3148     Glib main context into libev, and finally, C<Glib::EV> embeds EV into the
3149     Glib event loop).
3150 root 1.76
3151     Method 1: Add IO watchers and a timeout watcher in a prepare handler,
3152     and in a check watcher, destroy them and call into libadns. What follows
3153     is pseudo-code only of course. This requires you to either use a low
3154     priority for the check watcher or use C<ev_clear_pending> explicitly, as
3155     the callbacks for the IO/timeout watchers might not have been called yet.
3156 root 1.45
3157 root 1.164 static ev_io iow [nfd];
3158     static ev_timer tw;
3159 root 1.45
3160 root 1.164 static void
3161 root 1.198 io_cb (struct ev_loop *loop, ev_io *w, int revents)
3162 root 1.164 {
3163     }
3164 root 1.45
3165 root 1.164 // create io watchers for each fd and a timer before blocking
3166     static void
3167 root 1.198 adns_prepare_cb (struct ev_loop *loop, ev_prepare *w, int revents)
3168 root 1.164 {
3169     int timeout = 3600000;
3170     struct pollfd fds [nfd];
3171     // actual code will need to loop here and realloc etc.
3172     adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ()));
3173    
3174     /* the callback is illegal, but won't be called as we stop during check */
3175 root 1.243 ev_timer_init (&tw, 0, timeout * 1e-3, 0.);
3176 root 1.164 ev_timer_start (loop, &tw);
3177    
3178     // create one ev_io per pollfd
3179     for (int i = 0; i < nfd; ++i)
3180     {
3181     ev_io_init (iow + i, io_cb, fds [i].fd,
3182     ((fds [i].events & POLLIN ? EV_READ : 0)
3183     | (fds [i].events & POLLOUT ? EV_WRITE : 0)));
3184    
3185     fds [i].revents = 0;
3186     ev_io_start (loop, iow + i);
3187     }
3188     }
3189    
3190     // stop all watchers after blocking
3191     static void
3192 root 1.198 adns_check_cb (struct ev_loop *loop, ev_check *w, int revents)
3193 root 1.164 {
3194     ev_timer_stop (loop, &tw);
3195    
3196     for (int i = 0; i < nfd; ++i)
3197     {
3198     // set the relevant poll flags
3199     // could also call adns_processreadable etc. here
3200     struct pollfd *fd = fds + i;
3201     int revents = ev_clear_pending (iow + i);
3202     if (revents & EV_READ ) fd->revents |= fd->events & POLLIN;
3203     if (revents & EV_WRITE) fd->revents |= fd->events & POLLOUT;
3204    
3205     // now stop the watcher
3206     ev_io_stop (loop, iow + i);
3207     }
3208    
3209     adns_afterpoll (adns, fds, nfd, timeval_from (ev_now (loop)));
3210     }
3211 root 1.34
3212 root 1.76 Method 2: This would be just like method 1, but you run C<adns_afterpoll>
3213     in the prepare watcher and would dispose of the check watcher.
3214    
3215     Method 3: If the module to be embedded supports explicit event
3216 root 1.161 notification (libadns does), you can also make use of the actual watcher
3217 root 1.76 callbacks, and only destroy/create the watchers in the prepare watcher.
3218    
3219 root 1.164 static void
3220     timer_cb (EV_P_ ev_timer *w, int revents)
3221     {
3222     adns_state ads = (adns_state)w->data;
3223     update_now (EV_A);
3224    
3225     adns_processtimeouts (ads, &tv_now);
3226     }
3227    
3228     static void
3229     io_cb (EV_P_ ev_io *w, int revents)
3230     {
3231     adns_state ads = (adns_state)w->data;
3232     update_now (EV_A);
3233    
3234     if (revents & EV_READ ) adns_processreadable (ads, w->fd, &tv_now);
3235     if (revents & EV_WRITE) adns_processwriteable (ads, w->fd, &tv_now);
3236     }
3237 root 1.76
3238 root 1.164 // do not ever call adns_afterpoll
3239 root 1.76
3240     Method 4: Do not use a prepare or check watcher because the module you
3241 root 1.183 want to embed is not flexible enough to support it. Instead, you can
3242     override their poll function. The drawback with this solution is that the
3243     main loop is now no longer controllable by EV. The C<Glib::EV> module uses
3244     this approach, effectively embedding EV as a client into the horrible
3245     libglib event loop.
3246 root 1.76
3247 root 1.164 static gint
3248     event_poll_func (GPollFD *fds, guint nfds, gint timeout)
3249     {
3250     int got_events = 0;
3251    
3252     for (n = 0; n < nfds; ++n)
3253     // create/start io watcher that sets the re