/cvs/libev/ev.pod
Revision: 1.335
Committed: Mon Oct 25 10:32:05 2010 UTC (13 years, 10 months ago) by root
Branch: MAIN
Changes since 1.334: +59 -60 lines

=head1 NAME

libev - a high performance full-featured event loop written in C

=head1 SYNOPSIS

   #include <ev.h>

=head2 EXAMPLE PROGRAM

   // a single header file is required
   #include <ev.h>

   #include <stdio.h> // for puts

   // every watcher type has its own typedef'd struct
   // with the name ev_TYPE
   ev_io stdin_watcher;
   ev_timer timeout_watcher;

   // all watcher callbacks have a similar signature
   // this callback is called when data is readable on stdin
   static void
   stdin_cb (EV_P_ ev_io *w, int revents)
   {
     puts ("stdin ready");
     // for one-shot events, one must manually stop the watcher
     // with its corresponding stop function.
     ev_io_stop (EV_A_ w);

     // this causes all nested ev_run's to stop iterating
     ev_break (EV_A_ EVBREAK_ALL);
   }

   // another callback, this time for a time-out
   static void
   timeout_cb (EV_P_ ev_timer *w, int revents)
   {
     puts ("timeout");
     // this causes the innermost ev_run to stop iterating
     ev_break (EV_A_ EVBREAK_ONE);
   }

   int
   main (void)
   {
     // use the default event loop unless you have special needs
     struct ev_loop *loop = EV_DEFAULT;

     // initialise an io watcher, then start it
     // this one will watch for stdin to become readable
     ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ);
     ev_io_start (loop, &stdin_watcher);

     // initialise a timer watcher, then start it
     // simple non-repeating 5.5 second timeout
     ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
     ev_timer_start (loop, &timeout_watcher);

     // now wait for events to arrive
     ev_run (loop, 0);

     // break was called, so exit
     return 0;
   }
=head1 ABOUT THIS DOCUMENT

This document documents the libev software package.

The newest version of this document is also available as an html-formatted
web page you might find easier to navigate when reading it for the first
time: L<http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod>.

While this document tries to be as complete as possible in documenting
libev, its usage and the rationale behind its design, it is not a tutorial
on event-based programming, nor will it introduce event-based programming
with libev.

Familiarity with event-based programming techniques in general is assumed
throughout this document.

=head1 WHAT TO READ WHEN IN A HURRY

This manual tries to be very detailed, but unfortunately, this also makes
it very long. If you just want to know the basics of libev, I suggest
reading L<ANATOMY OF A WATCHER>, then the L<EXAMPLE PROGRAM> above, and
looking up the missing functions in L<GLOBAL FUNCTIONS> and the C<ev_io>
and C<ev_timer> sections in L<WATCHER TYPES>.
=head1 ABOUT LIBEV

Libev is an event loop: you register interest in certain events (such as a
file descriptor being readable or a timeout occurring), and it will manage
these event sources and provide your program with events.

To do this, it must take more or less complete control over your process
(or thread) by executing the I<event loop> handler, and will then
communicate events via a callback mechanism.

You register interest in certain events by registering so-called I<event
watchers>, which are relatively small C structures you initialise with the
details of the event, and then hand over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
for file descriptor events (C<ev_io>), the Linux C<inotify> interface
(for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)),
relative timers (C<ev_timer>), absolute timers with customised
rescheduling (C<ev_periodic>), synchronous signals (C<ev_signal>),
process status change events (C<ev_child>), and event watchers dealing
with the event loop mechanism itself (C<ev_idle>, C<ev_embed>,
C<ev_prepare> and C<ev_check> watchers) as well as file watchers
(C<ev_stat>) and even limited support for fork events (C<ev_fork>).

It also is quite fast (see this
L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent
for example).

=head2 CONVENTIONS

Libev is very configurable. In this manual the default (and most common)
configuration will be described, which supports multiple event loops. For
more info about various configuration options please have a look at
the B<EMBED> section in this manual. If libev was configured without
support for multiple event loops, then all functions taking an initial
argument of name C<loop> (which is always of type C<struct ev_loop *>)
will not have this argument.

=head2 TIME REPRESENTATION

Libev represents time as a single floating point number, representing
the (fractional) number of seconds since the (POSIX) epoch (in practice
somewhere near the beginning of 1970, details are complicated, don't
ask). This type is called C<ev_tstamp>, which is what you should use
too. It usually aliases to the C<double> type in C. When you need to do
any calculations on it, you should treat it as some floating point value.

Contrary to what the name component C<stamp> might indicate, it is also
used for time differences (e.g. delays) throughout libev.
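
Since C<ev_tstamp> usually aliases C<double>, plain floating point
arithmetic is all that is needed. A minimal libev-free sketch (the
C<my_tstamp> alias, the function name and the values are invented for
illustration):

```c
/* stand-in for ev_tstamp, which usually aliases double */
typedef double my_tstamp;

/* absolute times and time differences share the same type: subtracting
   two timestamps yields a delay, adding a delay yields a timestamp */
static my_tstamp
time_until (my_tstamp now, my_tstamp deadline)
{
  return deadline - now;
}
```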

=head1 ERROR HANDLING

Libev knows three classes of errors: operating system errors, usage errors
and internal errors (bugs).

When libev catches an operating system error it cannot handle (for example
a system call indicating a condition libev cannot fix), it calls the callback
set via C<ev_set_syserr_cb>, which is supposed to fix the problem or
abort. The default is to print a diagnostic message and to call C<abort
()>.

When libev detects a usage error such as a negative timer interval, then
it will print a diagnostic message and abort (via the C<assert> mechanism,
so C<NDEBUG> will disable this checking): these are programming errors in
the libev caller and need to be fixed there.
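
The comma-operator trick used by libev's own examples later in this manual
makes such C<assert>s self-describing. A small sketch (the function name
and message are invented):

```c
#include <assert.h>

/* the ("message", condition) form uses the C comma operator, so the
   message appears in the diagnostic if the assertion fires; compiling
   with -DNDEBUG removes the check entirely */
static double
checked_interval (double interval)
{
  assert (("negative intervals are a usage error",
           interval >= 0.));

  return interval;
}
```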

Libev also has a few internal error-checking C<assert>ions, and also has
extensive consistency checking code. These do not trigger under normal
circumstances, as they indicate either a bug in libev or worse.

=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the
library in any way.

=over 4

=item ev_tstamp ev_time ()

Returns the current time as libev would use it. Please note that the
C<ev_now> function is usually faster and also often returns the timestamp
you actually want to know. Also interesting is the combination of
C<ev_now_update> and C<ev_now>.

=item ev_sleep (ev_tstamp interval)

Sleep for the given interval: The current thread will be blocked until
either it is interrupted or the given time interval has passed. Basically
this is a sub-second-resolution C<sleep ()>.

=item int ev_version_major ()

=item int ev_version_minor ()

You can find out the major and minor ABI version numbers of the library
you linked against by calling the functions C<ev_version_major> and
C<ev_version_minor>. If you want, you can compare against the global
symbols C<EV_VERSION_MAJOR> and C<EV_VERSION_MINOR>, which specify the
version of the library your program was compiled against.

These version numbers refer to the ABI version of the library, not the
release version.

Usually, it's a good idea to terminate if the major versions mismatch,
as this indicates an incompatible change. Minor versions are usually
compatible with older versions, so a larger minor version alone is usually
not a problem.

Example: Make sure we haven't accidentally been linked against the wrong
version (note, however, that this will not detect other ABI mismatches,
such as LFS or reentrancy).

   assert (("libev version mismatch",
            ev_version_major () == EV_VERSION_MAJOR
            && ev_version_minor () >= EV_VERSION_MINOR));

=item unsigned int ev_supported_backends ()

Return the set of all backends (i.e. their corresponding C<EV_BACKEND_*>
value) compiled into this binary of libev (independent of their
availability on the system you are running on). See C<ev_default_loop> for
a description of the set values.

Example: make sure we have the epoll method, because yeah this is cool and
a must have and can we have a torrent of it please!!!11

   assert (("sorry, no epoll, no sex",
            ev_supported_backends () & EVBACKEND_EPOLL));

=item unsigned int ev_recommended_backends ()

Return the set of all backends compiled into this binary of libev and
also recommended for this platform, meaning it will work for most file
descriptor types. This set is often smaller than the one returned by
C<ev_supported_backends>, as for example kqueue is broken on most BSDs
and will not be auto-detected unless you explicitly request it (assuming
you know what you are doing). This is the set of backends that libev will
probe for if you specify no backends explicitly.

=item unsigned int ev_embeddable_backends ()

Returns the set of backends that are embeddable in other event loops. This
value is platform-specific but can include backends not available on the
current system. To find which embeddable backends might be supported on
the current system, you would need to look at C<ev_embeddable_backends ()
& ev_supported_backends ()>, likewise for recommended ones.

See the description of C<ev_embed> watchers for more info.
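
Backend sets are plain bit masks, so the intersection above is a simple
C<&>. A libev-free sketch using the C<EVBACKEND_*> values documented later
in this manual (constants and the helper are renamed/invented here to
avoid clashing with the real headers):

```c
/* the backend values given in this manual */
enum
{
  MY_BACKEND_SELECT  =  1,
  MY_BACKEND_POLL    =  2,
  MY_BACKEND_EPOLL   =  4,
  MY_BACKEND_KQUEUE  =  8,
  MY_BACKEND_DEVPOLL = 16,
  MY_BACKEND_PORT    = 32
};

/* embeddable backends actually usable here: the intersection of what is
   embeddable at all and what this system supports */
static unsigned int
usable_embeddable (unsigned int embeddable, unsigned int supported)
{
  return embeddable & supported;
}
```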

=item ev_set_allocator (void *(*cb)(void *ptr, long size)) [NOT REENTRANT]

Sets the allocation function to use (the prototype is similar - the
semantics are identical to the C<realloc> C89/SuS/POSIX function). It is
used to allocate and free memory (no surprises here). If it returns zero
when memory needs to be allocated (C<size != 0>), the library might abort
or take some potentially destructive action.

Since some systems (at least OpenBSD and Darwin) fail to implement
correct C<realloc> semantics, libev will use a wrapper around the system
C<realloc> and C<free> functions by default.
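
Such a wrapper is straightforward; a sketch of the emulation (the function
name is invented), giving C89 behaviour on top of any C<realloc>/C<free>
pair:

```c
#include <stdlib.h>

/* C89 realloc semantics: size 0 frees, anything else (re)allocates */
static void *
emulated_realloc (void *ptr, long size)
{
  if (size)
    return realloc (ptr, size);

  free (ptr);
  return 0;
}
```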

You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.

Example: Replace the libev allocator with one that waits a bit and then
retries (example requires a standards-compliant C<realloc>).

   static void *
   persistent_realloc (void *ptr, size_t size)
   {
     for (;;)
       {
         void *newptr = realloc (ptr, size);

         if (newptr)
           return newptr;

         sleep (60);
       }
   }

   ...
   ev_set_allocator (persistent_realloc);

=item ev_set_syserr_cb (void (*cb)(const char *msg)); [NOT REENTRANT]

Set the callback function to call on a retryable system call error (such
as failed select, poll, epoll_wait). The message is a printable string
indicating the system call or subsystem causing the problem. If this
callback is set, then libev will expect it to remedy the situation, no
matter what, when it returns. That is, libev will generally retry the
requested operation, or, if the condition doesn't go away, do bad stuff
(such as abort).

Example: This is basically the same thing that libev does internally, too.

   static void
   fatal_error (const char *msg)
   {
     perror (msg);
     abort ();
   }

   ...
   ev_set_syserr_cb (fatal_error);

=back

=head1 FUNCTIONS CONTROLLING EVENT LOOPS

An event loop is described by a C<struct ev_loop *> (the C<struct> is
I<not> optional in this case unless libev 3 compatibility is disabled, as
libev 3 had an C<ev_loop> function colliding with the struct name).

The library knows two types of such loops, the I<default> loop, which
supports child process events, and dynamically created event loops which
do not.

=over 4

=item struct ev_loop *ev_default_loop (unsigned int flags)

This returns the "default" event loop object, which is what you should
normally use when you just need "the event loop". Event loop objects and
the C<flags> parameter are described in more detail in the entry for
C<ev_loop_new>.

If the default loop is already initialised then this function simply
returns it (and ignores the flags. If that is troubling you, check
C<ev_backend ()> afterwards). Otherwise it will create it with the given
flags, which should almost always be C<0>, unless the caller is also the
one calling C<ev_run> or otherwise qualifies as "the main program".

If you don't know what event loop to use, use the one returned from this
function (or via the C<EV_DEFAULT> macro).

Note that this function is I<not> thread-safe, so if you want to use it
from multiple threads, you have to employ some kind of mutex (note also
that this case is unlikely, as loops cannot be shared easily between
threads anyway).

The default loop is the only loop that can handle C<ev_child> watchers,
and to do this, it always registers a handler for C<SIGCHLD>. If this is
a problem for your application you can either create a dynamic loop with
C<ev_loop_new> which doesn't do that, or you can simply overwrite the
C<SIGCHLD> signal handler I<after> calling C<ev_default_init>.

Example: This is the most typical usage.

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

Example: Restrict libev to the select and poll backends, and do not allow
environment settings to be taken into account:

   ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);

=item struct ev_loop *ev_loop_new (unsigned int flags)

This will create and initialise a new event loop object. If the loop
could not be initialised, returns false.

Note that this function I<is> thread-safe, and one common way to use
libev with threads is indeed to create one loop per thread, and using the
default loop in the "main" or "initial" thread.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

The following flags are supported:

=over 4

=item C<EVFLAG_AUTO>

The default flags value. Use this if you have no clue (it's the right
thing, believe me).

=item C<EVFLAG_NOENV>

If this flag bit is or'ed into the flag value (or the program runs setuid
or setgid) then libev will I<not> look at the environment variable
C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.

=item C<EVFLAG_FORKCHECK>

Instead of calling C<ev_loop_fork> manually after a fork, you can also
make libev check for a fork in each iteration by enabling this flag.

This works by calling C<getpid ()> on every iteration of the loop,
and thus this might slow down your event loop if you do a lot of loop
iterations and little real work, but is usually not noticeable (on my
GNU/Linux system for example, C<getpid> is actually a simple 5-insn sequence
without a system call and thus I<very> fast, but my GNU/Linux system also has
C<pthread_atfork> which is even faster).

The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking) when you use this
flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

=item C<EVFLAG_NOINOTIFY>

When this flag is specified, then libev will not attempt to use the
I<inotify> API for its C<ev_stat> watchers. Apart from debugging and
testing, this flag can be useful to conserve inotify file descriptors, as
otherwise each loop using C<ev_stat> watchers consumes one inotify handle.

=item C<EVFLAG_SIGNALFD>

When this flag is specified, then libev will attempt to use the
I<signalfd> API for its C<ev_signal> (and C<ev_child>) watchers. This API
delivers signals synchronously, which makes it faster and might make it
possible to get the queued signal data. It can also simplify signal
handling with threads, as long as you properly block signals in your
threads that are not interested in handling them.

Signalfd will not be used by default as this changes your signal mask, and
there are a lot of shoddy libraries and programs (glib's threadpool for
example) that can't properly initialise their signal masks.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.
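
The accept-in-a-loop advice, sketched for a non-blocking listening socket
(the helper names are invented); one readiness notification then drains
the whole backlog:

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* accept until the kernel reports no more pending connections;
   returns the number of connections handled */
static int
accept_all (int listen_fd, void (*handle_connection)(int fd))
{
  int count = 0;

  for (;;)
    {
      int fd = accept (listen_fd, 0, 0);

      if (fd >= 0)
        {
          if (handle_connection)
            handle_connection (fd);
          else
            close (fd); /* no handler in this sketch: just close */

          ++count;
        }
      else if (errno == EINTR)
        continue; /* interrupted, just retry */
      else
        break; /* EAGAIN/EWOULDBLOCK: backlog drained (or a real error) */
    }

  return count;
}
```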

This backend maps C<EV_READ> to the C<readfds> set and C<EV_WRITE> to the
C<writefds> set (and to work around Microsoft Windows bugs, also onto the
C<exceptfds> set on that platform).

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).

For few fds, this backend is a little slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds).

The epoll mechanism deserves honorable mention as the most misdesigned
of the more advanced event mechanisms: mere annoyances include silently
dropping file descriptors, requiring a system call per change per file
descriptor (and unnecessary guessing of parameters), problems with dup and
so on. The biggest issue is fork races, however - if a program forks then
I<both> parent and child process have to recreate the epoll set, which can
take considerable time (one syscall per file descriptor) and is of course
hard to detect.

Epoll is also notoriously buggy - embedding epoll fds I<should> work, but
of course I<doesn't>, and epoll just loves to report events for totally
I<different> file descriptors (even already closed ones, so one cannot
even remove them from the set) than registered in the set (especially
on SMP systems). Libev tries to counter these spurious notifications by
employing an additional generation counter and comparing that against the
events to filter out spurious ones, recreating the set when required. Last
but not least, it also refuses to work with some file descriptors which
work perfectly fine with C<select> (files, many character devices...).

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such
incident (because the same I<file descriptor> could point to a different
I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed
file descriptors might not work very well if you register events for both
file descriptors.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible,
i.e. keep at least one watcher active per fd at all times. Stopping and
starting a watcher (without re-setting it) also usually doesn't cause
extra overhead. A fork can both result in spurious notifications as well
as in libev having to destroy and recreate the epoll object, which can
take considerable time and thus should be avoided.

All this means that, in practice, C<EVBACKEND_SELECT> can be as fast or
faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
with anything but sockets and pipes, except on Darwin, where of course
it's completely useless). Unlike epoll, however, whose brokenness
is by design, these kqueue bugs can (and eventually will) be fixed
without API changes to existing programs. For this reason it's not being
"auto-detected" unless you explicitly specify it in the flags (i.e. using
C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)
system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident. Support for C<fork ()> is very bad (but
sane, unlike epoll) and it drops fds silently in similarly hard-to-detect
cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> (but C<poll> is of course
also broken on OS X)) and, did I mention it, using it only for sockets.

This backend maps C<EV_READ> into an C<EVFILT_READ> kevent with
C<NOTE_EOF>, and C<EV_WRITE> into an C<EVFILT_WRITE> kevent with
C<NOTE_EOF>.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

On the positive side, with the exception of the spurious readiness
notifications, this backend actually performed fully to specification
in all tests and is fully embeddable, which is a rare feat among the
OS-specific backends (I vastly prefer correctness over speed hacks).

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back

If one or more of the backend flags are or'ed into the flags value,
then only these backends will be tried (in the reverse order as listed
here). If none are specified, all backends in C<ev_recommended_backends
()> will be tried.
589 root 1.29
590 root 1.54 Example: Try to create a event loop that uses epoll and nothing else.
591 root 1.34
592 root 1.164 struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
593     if (!epoller)
594     fatal ("no epoll found here, maybe it hides under your chair");
595 root 1.34
596 root 1.323 Example: Use whatever libev has to offer, but make sure that kqueue is
597     used if available.
598    
599     struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);
600    
601 root 1.322 =item ev_loop_destroy (loop)
602 root 1.1
603 root 1.322 Destroys an event loop object (frees all memory and kernel state
604     etc.). None of the active event watchers will be stopped in the normal
605     sense, so e.g. C<ev_is_active> might still return true. It is your
606     responsibility to either stop all watchers cleanly yourself I<before>
607     calling this function, or cope with the fact afterwards (which is usually
608     the easiest thing, you can just ignore the watchers and/or C<free ()> them
609     for example).
610 root 1.1
611 root 1.203 Note that certain global state, such as signal state (and installed signal
612     handlers), will not be freed by this function, and related watchers (such
613     as signal and child watchers) would need to be stopped manually.
614 root 1.87
615 root 1.322 This function is normally used on loop objects allocated by
616     C<ev_loop_new>, but it can also be used on the default loop returned by
617     C<ev_default_loop>, in which case it is not thread-safe.
618    
619     Note that it is not advisable to call this function on the default loop
620     except in the rare occasion where you really need to free it's resources.
621     If you need dynamically allocated loops it is better to use C<ev_loop_new>
622     and C<ev_loop_destroy>.
623 root 1.87
624 root 1.322 =item ev_loop_fork (loop)
625 root 1.1
626 root 1.322 This function sets a flag that causes subsequent C<ev_run> iterations to
627     reinitialise the kernel state for backends that have one. Despite the
628 root 1.119 name, you can call it anytime, but it makes most sense after forking, in
629 root 1.322 the child process. You I<must> call it (or use C<EVFLAG_FORKCHECK>) in the
630     child before resuming or calling C<ev_run>.
631 root 1.119
632 root 1.291 Again, you I<have> to call it on I<any> loop that you want to re-use after
633     a fork, I<even if you do not plan to use the loop in the parent>. This is
634     because some kernel interfaces *cough* I<kqueue> *cough* do funny things
635     during fork.
636    
637 root 1.119 On the other hand, you only need to call this function in the child
638 root 1.310 process if you want to use the event loop in the child. If
639     you just fork+exec or create a new loop in the child, you don't have to
640     call it at all (in fact, C<epoll> is so badly broken that it makes a
641     difference, but libev will usually detect this case on its own and do a
642     costly reset of the backend).
643 root 1.1
644 root 1.9 The function itself is quite fast and it's usually not a problem to call
645 root 1.322 it just in case after a fork.
646 root 1.1
647 root 1.322 Example: Automate calling C<ev_loop_fork> on the default loop when
648     using pthreads.
649 root 1.1
650 root 1.322 static void
651     post_fork_child (void)
652     {
653     ev_loop_fork (EV_DEFAULT);
654     }
655 root 1.1
656 root 1.322 ...
657     pthread_atfork (0, 0, post_fork_child);
658 root 1.1
659 root 1.131 =item int ev_is_default_loop (loop)
660    
661 root 1.183 Returns true when the given loop is, in fact, the default loop, and false
662     otherwise.
663 root 1.131
664 root 1.291 =item unsigned int ev_iteration (loop)
665 root 1.66
666 root 1.310 Returns the current iteration count for the event loop, which is identical
667     to the number of times libev did poll for new events. It starts at C<0>
668     and happily wraps around with enough iterations.
669 root 1.66
670     This value can sometimes be useful as a generation counter of sorts (it
671     "ticks" the number of loop iterations), as it roughly corresponds with
672 root 1.291 C<ev_prepare> and C<ev_check> calls - and is incremented between the
673     prepare and check phases.
674 root 1.66
675 root 1.291 =item unsigned int ev_depth (loop)
676 root 1.247
677 root 1.310 Returns the number of times C<ev_run> was entered minus the number of
678     times C<ev_run> was exited, in other words, the recursion depth.
679 root 1.247
680 root 1.310 Outside C<ev_run>, this number is zero. In a callback, this number is
681     C<1>, unless C<ev_run> was invoked recursively (or from another thread),
682 root 1.247 in which case it is higher.
683    
684 root 1.310 Leaving C<ev_run> abnormally (setjmp/longjmp, cancelling the thread
685 root 1.291 etc.) doesn't count as "exit" - consider this as a hint to avoid such
686 root 1.310 ungentleman-like behaviour unless it's really convenient.
687 root 1.247
688 root 1.31 =item unsigned int ev_backend (loop)
689 root 1.1
690 root 1.31 Returns one of the C<EVBACKEND_*> flags indicating the event backend in
691 root 1.1 use.
692    
693 root 1.9 =item ev_tstamp ev_now (loop)
694 root 1.1
695     Returns the current "event loop time", which is the time the event loop
696 root 1.34 received events and started processing them. This timestamp does not
697     change as long as callbacks are being processed, and this is also the base
698     time used for relative timers. You can treat it as the timestamp of the
699 root 1.92 event occurring (or more correctly, libev finding out about it).
700 root 1.1
701 root 1.176 =item ev_now_update (loop)
702    
703     Establishes the current time by querying the kernel, updating the time
704     returned by C<ev_now ()> in the process. This is a costly operation and
705 root 1.310 is usually done automatically within C<ev_run ()>.
706 root 1.176
707     This function is rarely useful, but when some event callback runs for a
708     very long time without entering the event loop, updating libev's idea of
709     the current time is a good idea.
710    
711 root 1.238 See also L<The special problem of time updates> in the C<ev_timer> section.
712 root 1.176
713 root 1.231 =item ev_suspend (loop)
714    
715     =item ev_resume (loop)
716    
717 root 1.310 These two functions suspend and resume an event loop, for use when the
718     loop is not used for a while and timeouts should not be processed.
719 root 1.231
720     A typical use case would be an interactive program such as a game: When
721     the user presses C<^Z> to suspend the game and resumes it an hour later it
722     would be best to handle timeouts as if no time had actually passed while
723     the program was suspended. This can be achieved by calling C<ev_suspend>
724     in your C<SIGTSTP> handler, sending yourself a C<SIGSTOP> and calling
725     C<ev_resume> directly afterwards to resume timer processing.
726    
727     Effectively, all C<ev_timer> watchers will be delayed by the time spent
728     between C<ev_suspend> and C<ev_resume>, and all C<ev_periodic> watchers
729     will be rescheduled (that is, they will lose any events that would have
730 sf-exg 1.298 occurred while suspended).
731 root 1.231
732     After calling C<ev_suspend> you B<must not> call I<any> function on the
733     given loop other than C<ev_resume>, and you B<must not> call C<ev_resume>
734     without a previous call to C<ev_suspend>.
735    
736     Calling C<ev_suspend>/C<ev_resume> has the side effect of updating the
737     event loop time (see C<ev_now_update>).
738    
739 root 1.310 =item ev_run (loop, int flags)
740 root 1.1
741     Finally, this is it, the event handler. This function usually is called
742 root 1.268 after you have initialised all your watchers and you want to start
743 root 1.310 handling events. It will ask the operating system for any new events, call
744     the watcher callbacks, and then repeat the whole process indefinitely: This
745     is why event loops are called I<loops>.
746 root 1.1
747 root 1.310 If the flags argument is specified as C<0>, it will keep handling events
748     until either no event watchers are active anymore or C<ev_break> was
749     called.
750 root 1.1
751 root 1.310 Please note that an explicit C<ev_break> is usually better than
752 root 1.34 relying on all watchers to be stopped when deciding when a program has
753 root 1.183 finished (especially in interactive programs), but having a program
754     that automatically loops as long as it has to and no longer by virtue
755     of relying on its watchers stopping correctly, that is truly a thing of
756     beauty.
757 root 1.34
758 root 1.310 A flags value of C<EVRUN_NOWAIT> will look for new events, will handle
759     those events and any already outstanding ones, but will not wait and
760     block your process in case there are no events and will return after one
761     iteration of the loop. This is sometimes useful to poll and handle new
762     events while doing lengthy calculations, to keep the program responsive.
763 root 1.1
764 root 1.310 A flags value of C<EVRUN_ONCE> will look for new events (waiting if
765 root 1.183 necessary) and will handle those and any already outstanding ones. It
766     will block your process until at least one new event arrives (which could
767 root 1.208 be an event internal to libev itself, so there is no guarantee that a
768 root 1.183 user-registered callback will be called), and will return after one
769     iteration of the loop.
770    
771     This is useful if you are waiting for some external event in conjunction
772     with something not expressible using other libev watchers (i.e. "roll your
773 root 1.310 own C<ev_run>"). However, a pair of C<ev_prepare>/C<ev_check> watchers is
774 root 1.33 usually a better approach for this kind of thing.
775    
776 root 1.310 Here are the gory details of what C<ev_run> does:
777 root 1.33
778 root 1.310 - Increment loop depth.
779     - Reset the ev_break status.
780 root 1.77 - Before the first iteration, call any pending watchers.
781 root 1.310 LOOP:
782     - If EVFLAG_FORKCHECK was used, check for a fork.
783 root 1.171 - If a fork was detected (by any means), queue and call all fork watchers.
784 root 1.113 - Queue and call all prepare watchers.
785 root 1.310 - If ev_break was called, goto FINISH.
786 root 1.171 - If we have been forked, detach and recreate the kernel state
787     as to not disturb the other process.
788 root 1.33 - Update the kernel state with all outstanding changes.
789 root 1.171 - Update the "event loop time" (ev_now ()).
790 root 1.113 - Calculate for how long to sleep or block, if at all
791 root 1.310 (active idle watchers, EVRUN_NOWAIT or not having
792 root 1.113 any active watchers at all will result in not sleeping).
793     - Sleep if the I/O and timer collect interval say so.
794 root 1.310 - Increment loop iteration counter.
795 root 1.33 - Block the process, waiting for any events.
796     - Queue all outstanding I/O (fd) events.
797 root 1.171 - Update the "event loop time" (ev_now ()), and do time jump adjustments.
798 root 1.183 - Queue all expired timers.
799     - Queue all expired periodics.
800 root 1.310 - Queue all idle watchers with priority higher than that of pending events.
801 root 1.33 - Queue all check watchers.
802     - Call all queued watchers in reverse order (i.e. check watchers first).
803     Signals and child watchers are implemented as I/O watchers, and will
804     be handled here by queueing them when their watcher gets executed.
805 root 1.310 - If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT
806     were used, or there are no active watchers, goto FINISH, otherwise
807     continue with step LOOP.
808     FINISH:
809     - Reset the ev_break status iff it was EVBREAK_ONE.
810     - Decrement the loop depth.
811     - Return.
812 root 1.27
813 root 1.114 Example: Queue some jobs and then loop until no events are outstanding
814 root 1.34 anymore.
815    
816     ... queue jobs here, make sure they register event watchers as long
817     ... as they still have work to do (even an idle watcher will do..)
818 root 1.310 ev_run (my_loop, 0);
819 root 1.171 ... jobs done or somebody called ev_break. yeah!
820 root 1.34
821 root 1.310 =item ev_break (loop, how)
822 root 1.1
823 root 1.310 Can be used to make a call to C<ev_run> return early (but only after it
824 root 1.9 has processed all outstanding events). The C<how> argument must be either
825 root 1.310 C<EVBREAK_ONE>, which will make the innermost C<ev_run> call return, or
826     C<EVBREAK_ALL>, which will make all nested C<ev_run> calls return.
827 root 1.1
828 root 1.310 This "break state" will be cleared when entering C<ev_run> again.
829 root 1.115
830 root 1.310 It is safe to call C<ev_break> from outside any C<ev_run> calls.
831 root 1.194
832 root 1.1 =item ev_ref (loop)
833    
834     =item ev_unref (loop)
835    
836 root 1.9 Ref/unref can be used to add or remove a reference count on the event
837     loop: Every watcher keeps one reference, and as long as the reference
838 root 1.310 count is nonzero, C<ev_run> will not return on its own.
839 root 1.183
840 root 1.276 This is useful when you have a watcher that you never intend to
841 root 1.310 unregister, but that nevertheless should not keep C<ev_run> from
842 root 1.276 returning. In such a case, call C<ev_unref> after starting, and C<ev_ref>
843     before stopping it.
844 root 1.183
845 root 1.229 As an example, libev itself uses this for its internal signal pipe: It
846 root 1.310 is not visible to the libev user and should not keep C<ev_run> from
847 root 1.229 exiting if no event watchers registered by it are active. It is also an
848     excellent way to do this for generic recurring timers or from within
849     third-party libraries. Just remember to I<unref after start> and I<ref
850     before stop> (but only if the watcher wasn't active before, or was active
851     before, respectively. Note also that libev might stop watchers itself
852     (e.g. non-repeating timers) in which case you have to C<ev_ref>
853     in the callback).
854 root 1.1
855 root 1.310 Example: Create a signal watcher, but keep it from keeping C<ev_run>
856 root 1.34 running when nothing else is active.
857    
858 root 1.198 ev_signal exitsig;
859 root 1.164 ev_signal_init (&exitsig, sig_cb, SIGINT);
860     ev_signal_start (loop, &exitsig);
861     ev_unref (loop);
862 root 1.34
863 root 1.54 Example: For some weird reason, unregister the above signal handler again.
864 root 1.34
865 root 1.164 ev_ref (loop);
866     ev_signal_stop (loop, &exitsig);
867 root 1.34
868 root 1.97 =item ev_set_io_collect_interval (loop, ev_tstamp interval)
869    
870     =item ev_set_timeout_collect_interval (loop, ev_tstamp interval)
871    
872     These advanced functions influence the time that libev will spend waiting
873 root 1.171 for events. Both time intervals are by default C<0>, meaning that libev
874     will try to invoke timer/periodic callbacks and I/O callbacks with minimum
875     latency.
876 root 1.97
877     Setting these to a higher value (the C<interval> I<must> be >= C<0>)
878 root 1.171 allows libev to delay invocation of I/O and timer/periodic callbacks
879     to increase efficiency of loop iterations (or to increase power-saving
880     opportunities).
881 root 1.97
882 root 1.183 The idea is that sometimes your program runs just fast enough to handle
883     one (or very few) event(s) per loop iteration. While this makes the
884     program responsive, it also wastes a lot of CPU time to poll for new
885 root 1.97 events, especially with backends like C<select ()> which have a high
886     overhead for the actual polling but can deliver many events at once.
887    
888     By setting a higher I<io collect interval> you allow libev to spend more
889     time collecting I/O events, so you can handle more events per iteration,
890     at the cost of increasing latency. Timeouts (both C<ev_periodic> and
891 ayin 1.101 C<ev_timer>) will not be affected. Setting this to a non-zero value will
892 root 1.245 introduce an additional C<ev_sleep ()> call into most loop iterations. The
893     sleep time ensures that libev will not poll for I/O events more often than
894     once per this interval, on average.
895 root 1.97
896     Likewise, by setting a higher I<timeout collect interval> you allow libev
897     to spend more time collecting timeouts, at the expense of increased
898 root 1.183 latency/jitter/inexactness (the watcher callback will be called
899     later). C<ev_io> watchers will not be affected. Setting this to a non-zero
900     value will not introduce any overhead in libev.
901 root 1.97
902 root 1.161 Many (busy) programs can usually benefit from setting the I/O collect
903 root 1.98 interval to a value near C<0.1> or so, which is often enough for
904     interactive servers (of course not for games), likewise for timeouts. It
905     usually doesn't make much sense to set it to a lower value than C<0.01>,
906 root 1.245 as this approaches the timing granularity of most systems. Note that if
907     you do transactions with the outside world and you can't increase the
908     parallelism, then this setting will limit your transaction rate (if you
909     need to poll once per transaction and the I/O collect interval is 0.01,
910 sf-exg 1.298 then you can't do more than 100 transactions per second).
911 root 1.97
912 root 1.171 Setting the I<timeout collect interval> can improve the opportunity for
913     saving power, as the program will "bundle" timer callback invocations that
914     are "near" in time together, by delaying some, thus reducing the number of
915     times the process sleeps and wakes up again. Another useful technique to
916     reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
917     they fire on, say, one-second boundaries only.
918    
919 root 1.245 Example: We only need 0.1s timeout granularity, and we wish not to poll
920     more often than 100 times per second:
921    
922     ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
923     ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);
924    
925 root 1.253 =item ev_invoke_pending (loop)
926    
927     This call will simply invoke all pending watchers while resetting their
928 root 1.310 pending state. Normally, C<ev_run> does this automatically when required,
929 root 1.313 but when overriding the invoke callback this call comes in handy. This
930     function can be invoked from a watcher - this can be useful for example
931     when you want to do some lengthy calculation and want to pass further
932     event handling to another thread (you still have to make sure only one
933     thread executes within C<ev_invoke_pending> or C<ev_run> of course).
934 root 1.253
935 root 1.256 =item int ev_pending_count (loop)
936    
937     Returns the number of pending watchers - zero indicates that no watchers
938     are pending.
939    
940 root 1.253 =item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))
941    
942     This overrides the invoke pending functionality of the loop: Instead of
943 root 1.310 invoking all pending watchers when there are any, C<ev_run> will call
944 root 1.253 this callback instead. This is useful, for example, when you want to
945     invoke the actual watchers inside another context (another thread etc.).
946    
947     If you want to reset the callback, use C<ev_invoke_pending> as new
948     callback.
949    
950     =item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))
951    
952     Sometimes you want to share the same loop between multiple threads. This
953     can be done relatively simply by putting mutex_lock/unlock calls around
954     each call to a libev function.
955    
956 root 1.310 However, C<ev_run> can run an indefinite time, so it is not feasible
957     to wait for it to return. One way around this is to wake up the event
958     loop via C<ev_break> and C<ev_async_send>, another way is to set these
959     I<release> and I<acquire> callbacks on the loop.
960 root 1.253
961     When set, then C<release> will be called just before the thread is
962     suspended waiting for new events, and C<acquire> is called just
963     afterwards.
964    
965     Ideally, C<release> will just call your mutex_unlock function, and
966     C<acquire> will just call the mutex_lock function again.
967    
968 root 1.254 While event loop modifications are allowed between invocations of
969     C<release> and C<acquire> (that's their only purpose after all), no
970     modifications done will affect the event loop, i.e. adding watchers will
971     have no effect on the set of file descriptors being watched, or the time
972 root 1.310 waited. Use an C<ev_async> watcher to wake up C<ev_run> when you want it
973 root 1.254 to take note of any changes you made.
974    
975 root 1.310 In theory, threads executing C<ev_run> will be async-cancel safe between
976 root 1.254 invocations of C<release> and C<acquire>.
977    
978     See also the locking example in the C<THREADS> section later in this
979     document.
980    
981 root 1.253 =item ev_set_userdata (loop, void *data)
982    
983     =item ev_userdata (loop)
984    
985     Set and retrieve a single C<void *> associated with a loop. When
986     C<ev_set_userdata> has never been called, then C<ev_userdata> returns
987     C<0>.
988    
989     These two functions can be used to associate arbitrary data with a loop,
990     and are intended solely for the C<invoke_pending_cb>, C<release> and
991     C<acquire> callbacks described above, but of course can be (ab-)used for
992     any other purpose as well.
993    
994 root 1.309 =item ev_verify (loop)
995 root 1.159
996     This function only does something when C<EV_VERIFY> support has been
997 root 1.200 compiled in, which is the default for non-minimal builds. It tries to go
998 root 1.183 through all internal structures and checks them for validity. If anything
999     is found to be inconsistent, it will print an error message to standard
1000     error and call C<abort ()>.
1001 root 1.159
1002     This can be used to catch bugs inside libev itself: under normal
1003     circumstances, this function will never abort as of course libev keeps its
1004     data structures consistent.
1005    
1006 root 1.1 =back
1007    
1008 root 1.42
1009 root 1.1 =head1 ANATOMY OF A WATCHER
1010    
1011 root 1.200 In the following description, uppercase C<TYPE> in names stands for the
1012     watcher type, e.g. C<ev_TYPE_start> can mean C<ev_timer_start> for timer
1013     watchers and C<ev_io_start> for I/O watchers.
1014    
1015 root 1.311 A watcher is an opaque structure that you allocate and register to record
1016     your interest in some event. To make a concrete example, imagine you want
1017     to wait for STDIN to become readable, you would create an C<ev_io> watcher
1018     for that:
1019 root 1.1
1020 root 1.198 static void my_cb (struct ev_loop *loop, ev_io *w, int revents)
1021 root 1.164 {
1022     ev_io_stop (loop, w);
1023 root 1.310 ev_break (loop, EVBREAK_ALL);
1024 root 1.164 }
1025    
1026     struct ev_loop *loop = ev_default_loop (0);
1027 root 1.200
1028 root 1.198 ev_io stdin_watcher;
1029 root 1.200
1030 root 1.164 ev_init (&stdin_watcher, my_cb);
1031     ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ);
1032     ev_io_start (loop, &stdin_watcher);
1033 root 1.200
1034 root 1.310 ev_run (loop, 0);
1035 root 1.1
1036     As you can see, you are responsible for allocating the memory for your
1037 root 1.200 watcher structures (and it is I<usually> a bad idea to do this on the
1038     stack).
1039    
1040     Each watcher has an associated watcher structure (called C<struct ev_TYPE>
1041     or simply C<ev_TYPE>, as typedefs are provided for all watcher structs).
1042 root 1.1
1043 root 1.311 Each watcher structure must be initialised by a call to C<ev_init (watcher
1044     *, callback)>, which expects a callback to be provided. This callback is
1045     invoked each time the event occurs (or, in the case of I/O watchers, each
1046     time the event loop detects that the file descriptor given is readable
1047     and/or writable).
1048 root 1.1
1049 root 1.200 Each watcher type further has its own C<< ev_TYPE_set (watcher *, ...) >>
1050     macro to configure it, with arguments specific to the watcher type. There
1051     is also a macro to combine initialisation and setting in one call: C<<
1052     ev_TYPE_init (watcher *, callback, ...) >>.
1053 root 1.1
1054     To make the watcher actually watch out for events, you have to start it
1055 root 1.200 with a watcher-specific start function (C<< ev_TYPE_start (loop, watcher
1056 root 1.1 *) >>), and you can stop watching for events at any time by calling the
1057 root 1.200 corresponding stop function (C<< ev_TYPE_stop (loop, watcher *) >>.
1058 root 1.1
1059     As long as your watcher is active (has been started but not stopped) you
1060     must not touch the values stored in it. Most specifically you must never
1061 root 1.200 reinitialise it or call its C<ev_TYPE_set> macro.
1062 root 1.1
1063     Each and every callback receives the event loop pointer as first, the
1064     registered watcher structure as second, and a bitset of received events as
1065     third argument.
1066    
1067 root 1.14 The received events usually include a single bit per event type received
1068 root 1.1 (you can receive multiple events at the same time). The possible bit masks
1069     are:
1070    
1071     =over 4
1072    
1073 root 1.10 =item C<EV_READ>
1074 root 1.1
1075 root 1.10 =item C<EV_WRITE>
1076 root 1.1
1077 root 1.10 The file descriptor in the C<ev_io> watcher has become readable and/or
1078 root 1.1 writable.
1079    
1080 root 1.289 =item C<EV_TIMER>
1081 root 1.1
1082 root 1.10 The C<ev_timer> watcher has timed out.
1083 root 1.1
1084 root 1.10 =item C<EV_PERIODIC>
1085 root 1.1
1086 root 1.10 The C<ev_periodic> watcher has timed out.
1087 root 1.1
1088 root 1.10 =item C<EV_SIGNAL>
1089 root 1.1
1090 root 1.10 The signal specified in the C<ev_signal> watcher has been received by a thread.
1091 root 1.1
1092 root 1.10 =item C<EV_CHILD>
1093 root 1.1
1094 root 1.10 The pid specified in the C<ev_child> watcher has received a status change.
1095 root 1.1
1096 root 1.48 =item C<EV_STAT>
1097    
1098     The path specified in the C<ev_stat> watcher changed its attributes somehow.
1099    
1100 root 1.10 =item C<EV_IDLE>
1101 root 1.1
1102 root 1.10 The C<ev_idle> watcher has determined that you have nothing better to do.
1103 root 1.1
1104 root 1.10 =item C<EV_PREPARE>
1105 root 1.1
1106 root 1.10 =item C<EV_CHECK>
1107 root 1.1
1108 root 1.310 All C<ev_prepare> watchers are invoked just I<before> C<ev_run> starts
1109 root 1.10 to gather new events, and all C<ev_check> watchers are invoked just after
1110 root 1.310 C<ev_run> has gathered them, but before it invokes any callbacks for any
1111 root 1.1 received events. Callbacks of both watcher types can start and stop as
1112     many watchers as they want, and all of them will be taken into account
1113 root 1.10 (for example, a C<ev_prepare> watcher might start an idle watcher to keep
1114 root 1.310 C<ev_run> from blocking).
1115 root 1.1
1116 root 1.50 =item C<EV_EMBED>
1117    
1118     The embedded event loop specified in the C<ev_embed> watcher needs attention.
1119    
1120     =item C<EV_FORK>
1121    
1122     The event loop has been resumed in the child process after fork (see
1123     C<ev_fork>).
1124    
1125 root 1.324 =item C<EV_CLEANUP>
1126    
1127 sf-exg 1.330 The event loop is about to be destroyed (see C<ev_cleanup>).
1128 root 1.324
1129 root 1.122 =item C<EV_ASYNC>
1130    
1131     The given async watcher has been asynchronously notified (see C<ev_async>).
1132    
1133 root 1.229 =item C<EV_CUSTOM>
1134    
1135     Not ever sent (or otherwise used) by libev itself, but can be freely used
1136     by libev users to signal watchers (e.g. via C<ev_feed_event>).
1137    
1138 root 1.10 =item C<EV_ERROR>
1139 root 1.1
1140 root 1.161 An unspecified error has occurred, and the watcher has been stopped. This might
1141 root 1.1 happen because the watcher could not be properly started because libev
1142     ran out of memory, a file descriptor was found to be closed or any other
1143 root 1.197 problem. Libev considers these application bugs.
1144    
1145     You best act on it by reporting the problem and somehow coping with the
1146     watcher being stopped. Note that well-written programs should not receive
1147     an error ever, so when your watcher receives it, this usually indicates a
1148     bug in your program.
1149 root 1.1
1150 root 1.183 Libev will usually signal a few "dummy" events together with an error, for
1151     example it might indicate that a fd is readable or writable, and if your
1152     callback is well-written it can just attempt the operation and cope with
1153     the error from read() or write(). This will not work in multi-threaded
1154     programs, though, as the fd could already be closed and reused for another
1155     thing, so beware.
1156 root 1.1
1157     =back
1158    
1159 root 1.42 =head2 GENERIC WATCHER FUNCTIONS
1160 root 1.36
1161     =over 4
1162    
1163     =item C<ev_init> (ev_TYPE *watcher, callback)
1164    
1165     This macro initialises the generic portion of a watcher. The contents
1166     of the watcher object can be arbitrary (so C<malloc> will do). Only
1167     the generic parts of the watcher are initialised, you I<need> to call
1168     the type-specific C<ev_TYPE_set> macro afterwards to initialise the
1169     type-specific parts. For each type there is also a C<ev_TYPE_init> macro
1170     which rolls both calls into one.
1171    
1172     You can reinitialise a watcher at any time as long as it has been stopped
1173     (or never started) and there are no pending events outstanding.
1174    
1175 root 1.198 The callback is always of type C<void (*)(struct ev_loop *loop, ev_TYPE *watcher,
1176 root 1.36 int revents)>.
1177    
1178 root 1.183 Example: Initialise an C<ev_io> watcher in two steps.
1179    
1180     ev_io w;
1181     ev_init (&w, my_cb);
1182     ev_io_set (&w, STDIN_FILENO, EV_READ);
1183    
1184 root 1.274 =item C<ev_TYPE_set> (ev_TYPE *watcher, [args])
1185 root 1.36
1186     This macro initialises the type-specific parts of a watcher. You need to
1187     call C<ev_init> at least once before you call this macro, but you can
1188     call C<ev_TYPE_set> any number of times. You must not, however, call this
1189     macro on a watcher that is active (it can be pending, however, which is a
1190     difference to the C<ev_init> macro).
1191    
1192     Although some watcher types do not have type-specific arguments
1193     (e.g. C<ev_prepare>) you still need to call its C<set> macro.
1194    
1195 root 1.183 See C<ev_init>, above, for an example.
1196    
1197 root 1.36 =item C<ev_TYPE_init> (ev_TYPE *watcher, callback, [args])
1198    
1199 root 1.161 This convenience macro rolls both C<ev_init> and C<ev_TYPE_set> macro
1200     calls into a single call. This is the most convenient method to initialise
1201 root 1.36 a watcher. The same limitations apply, of course.
1202    
1203 root 1.183 Example: Initialise and set an C<ev_io> watcher in one step.
1204    
1205     ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);
1206    
1207 root 1.274 =item C<ev_TYPE_start> (loop, ev_TYPE *watcher)
1208 root 1.36
1209     Starts (activates) the given watcher. Only active watchers will receive
1210     events. If the watcher is already active nothing will happen.
1211    
1212 root 1.183 Example: Start the C<ev_io> watcher that is being abused as example in this
1213     whole section.
1214    
1215     ev_io_start (EV_DEFAULT_UC, &w);
1216    
1217 root 1.274 =item C<ev_TYPE_stop> (loop, ev_TYPE *watcher)
1218 root 1.36
1219 root 1.195 Stops the given watcher if active, and clears the pending status (whether
1220     the watcher was active or not).
1221    
1222     It is possible that stopped watchers are pending - for example,
1223     non-repeating timers are being stopped when they become pending - but
1224     calling C<ev_TYPE_stop> ensures that the watcher is neither active nor
1225     pending. If you want to free or reuse the memory used by the watcher it is
1226     therefore a good idea to always call its C<ev_TYPE_stop> function.
1227 root 1.36
1228     =item bool ev_is_active (ev_TYPE *watcher)
1229    
1230     Returns a true value iff the watcher is active (i.e. it has been started
1231     and not yet been stopped). As long as a watcher is active you must not modify
1232     it.
1233    
1234     =item bool ev_is_pending (ev_TYPE *watcher)
1235    
1236     Returns a true value iff the watcher is pending (i.e. it has outstanding
1237     events but its callback has not yet been invoked). As long as a watcher
1238     is pending (but not active) you must not call an init function on it (but
1239 root 1.73 C<ev_TYPE_set> is safe), you must not change its priority, and you must
1240     make sure the watcher is available to libev (e.g. you cannot C<free ()>
1241     it).
1242 root 1.36
1243 root 1.55 =item callback ev_cb (ev_TYPE *watcher)
1244 root 1.36
1245     Returns the callback currently set on the watcher.
1246    
1247     =item ev_cb_set (ev_TYPE *watcher, callback)
1248    
1249     Change the callback. You can change the callback at virtually any time
1250     (modulo threads).
1251    
1252 root 1.274 =item ev_set_priority (ev_TYPE *watcher, int priority)
1253 root 1.67
1254     =item int ev_priority (ev_TYPE *watcher)
1255    
1256     Set and query the priority of the watcher. The priority is a small
1257     integer between C<EV_MAXPRI> (default: C<2>) and C<EV_MINPRI>
1258     (default: C<-2>). Pending watchers with higher priority will be invoked
1259     before watchers with lower priority, but priority will not keep watchers
1260     from being executed (except for C<ev_idle> watchers).
1261    
1262     If you need to suppress invocation when higher priority events are pending
1263     you need to look at C<ev_idle> watchers, which provide this functionality.
1264    
1265 root 1.73 You I<must not> change the priority of a watcher as long as it is active or
1266     pending.
1267    
1268 root 1.233 Setting a priority outside the range of C<EV_MINPRI> to C<EV_MAXPRI> is
1269     fine, as long as you do not mind that the priority value you query might
1270     or might not have been clamped to the valid range.
1271    
1272 root 1.67 The default priority used by watchers when no priority has been set is
1273     always C<0>, which is supposed to not be too high and not be too low :).
1274    
1275 root 1.235 See L<WATCHER PRIORITY MODELS>, below, for a more thorough treatment of
1276 root 1.233 priorities.
1277 root 1.67
1278 root 1.74 =item ev_invoke (loop, ev_TYPE *watcher, int revents)
1279    
1280     Invoke the C<watcher> with the given C<loop> and C<revents>. Neither
1281     C<loop> nor C<revents> need to be valid as long as the watcher callback
1282 root 1.183 can deal with that fact, as both are simply passed through to the
1283     callback.
1284 root 1.74
1285     =item int ev_clear_pending (loop, ev_TYPE *watcher)
1286    
1287 root 1.183 If the watcher is pending, this function clears its pending status and
1288     returns its C<revents> bitset (as if its callback was invoked). If the
1289 root 1.74 watcher isn't pending it does nothing and returns C<0>.
1290    
1291 root 1.183 Sometimes it can be useful to "poll" a watcher instead of waiting for its
1292     callback to be invoked, which can be accomplished with this function.
1293    
1294 root 1.274 =item ev_feed_event (loop, ev_TYPE *watcher, int revents)
1295 root 1.273
1296     Feeds the given event set into the event loop, as if the specified event
1297     had happened for the specified watcher (which must be a pointer to an
1298     initialised but not necessarily started event watcher). Obviously you must
1299     not free the watcher as long as it has pending events.
1300    
1301     Stopping the watcher, letting libev invoke it, or calling
1302     C<ev_clear_pending> will clear the pending event, even if the watcher was
1303     not started in the first place.
1304    
1305     See also C<ev_feed_fd_event> and C<ev_feed_signal_event> for related
1306     functions that do not need a watcher.
1307    
1308 root 1.36 =back
1309    
1310 root 1.1 =head2 ASSOCIATING CUSTOM DATA WITH A WATCHER
1311    
1312     Each watcher has, by default, a member C<void *data> that you can change
1313 root 1.183 and read at any time: libev will completely ignore it. This can be used
1314 root 1.1 to associate arbitrary data with your watcher. If you need more data and
1315     don't want to allocate memory and store a pointer to it in that data
1316     member, you can also "subclass" the watcher type and provide your own
1317     data:
1318    
1319 root 1.164 struct my_io
1320     {
1321 root 1.198 ev_io io;
1322 root 1.164 int otherfd;
1323     void *somedata;
1324     struct whatever *mostinteresting;
1325 root 1.178 };
1326    
1327     ...
1328     struct my_io w;
1329     ev_io_init (&w.io, my_cb, fd, EV_READ);
1330 root 1.1
1331     And since your callback will be called with a pointer to the watcher, you
1332     can cast it back to your own type:
1333    
1334 root 1.198 static void my_cb (struct ev_loop *loop, ev_io *w_, int revents)
1335 root 1.164 {
1336     struct my_io *w = (struct my_io *)w_;
1337     ...
1338     }
1339 root 1.1
1340 root 1.55 More interesting and less C-conformant ways of casting your callback type
1341     instead have been omitted.
1342    
1343 root 1.178 Another common scenario is to use some data structure with multiple
1344     embedded watchers:
1345 root 1.55
1346 root 1.164 struct my_biggy
1347     {
1348     int some_data;
1349     ev_timer t1;
1350     ev_timer t2;
1351     };
1352 root 1.55
1353 root 1.178 In this case getting the pointer to C<my_biggy> is a bit more
1354     complicated: Either you store the address of your C<my_biggy> struct
1355 root 1.183 in the C<data> member of the watcher (for woozies), or you need to use
1356     some pointer arithmetic using C<offsetof> inside your watchers (for real
1357     programmers):
1358 root 1.55
1359 root 1.164 #include <stddef.h>
1360    
1361     static void
1362 root 1.198 t1_cb (EV_P_ ev_timer *w, int revents)
1363 root 1.164 {
1364 root 1.242 struct my_biggy *big = (struct my_biggy *)
1365 root 1.164 (((char *)w) - offsetof (struct my_biggy, t1));
1366     }
1367 root 1.55
1368 root 1.164 static void
1369 root 1.198 t2_cb (EV_P_ ev_timer *w, int revents)
1370 root 1.164 {
1371 root 1.242 struct my_biggy *big = (struct my_biggy *)
1372 root 1.164 (((char *)w) - offsetof (struct my_biggy, t2));
1373     }
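
The pointer arithmetic itself can be verified without libev. The
following standalone sketch (C<fake_timer> and C<biggy_from_t2> are
made-up names, standing in for C<ev_timer> and for the cast performed in
the callbacks above) recovers the enclosing structure from a pointer to
one of its members:

```c
#include <assert.h>
#include <stddef.h>

/* stand-in for ev_timer - no libev is needed to demonstrate
   the pointer arithmetic itself */
struct fake_timer { double at; };

struct my_biggy
{
  int some_data;
  struct fake_timer t1;
  struct fake_timer t2;
};

/* recover the enclosing my_biggy from a pointer to its t2 member,
   just like t2_cb above */
static struct my_biggy *
biggy_from_t2 (struct fake_timer *w)
{
  return (struct my_biggy *)((char *)w - offsetof (struct my_biggy, t2));
}
```

The same expression works for any member, as long as the C<offsetof>
argument names the member the pointer actually points to.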
1374 root 1.1
1375 root 1.335 =head2 WATCHER STATES
1376    
1377     There are various watcher states mentioned throughout this manual -
1378     active, pending and so on. In this section these states and the rules to
1379     transition between them will be described in more detail - and while these
1380     rules might look complicated, they usually do "the right thing".
1381    
1382     =over 4
1383    
1384     =item initialised
1385    
1386     Before a watcher can be registered with the event loop it has to be
1387     initialised. This can be done with a call to C<ev_TYPE_init>, or calls to
1388     C<ev_init> followed by the watcher-specific C<ev_TYPE_set> function.
1389    
1390     In this state it is simply some block of memory that is suitable for use
1391     in an event loop. It can be moved around, freed, reused etc. at will.
1392    
1393     =item started/running/active
1394    
1395     Once a watcher has been started with a call to C<ev_TYPE_start> it becomes
1396     property of the event loop, and is actively waiting for events. While in
1397     this state it cannot be accessed (except in a few documented ways), moved,
1398     freed or anything else - the only legal thing is to keep a pointer to it,
1399     and call libev functions on it that are documented to work on active watchers.
1400    
1401     =item pending
1402    
1403     If a watcher is active and libev determines that an event it is interested
1404     in has occurred (such as a timer expiring), it will become pending. It will
1405     stay in this pending state until either it is stopped or its callback is
1406     about to be invoked, so it is not normally pending inside the watcher
1407     callback.
1408    
1409     The watcher might or might not be active while it is pending (for example,
1410     an expired non-repeating timer can be pending but no longer active). If it
1411     is stopped, it can be freely accessed (e.g. by calling C<ev_TYPE_set>),
1412     but it is still property of the event loop at this time, so cannot be
1413     moved, freed or reused. And if it is active the rules described in the
1414     previous item still apply.
1415    
1416     It is also possible to feed an event on a watcher that is not active (e.g.
1417     via C<ev_feed_event>), in which case it becomes pending without being
1418     active.
1419    
1420     =item stopped
1421    
1422     A watcher can be stopped implicitly by libev (in which case it might still
1423     be pending), or explicitly by calling its C<ev_TYPE_stop> function. The
1424     latter will clear any pending state the watcher might be in, regardless
1425     of whether it was active or not, so stopping a watcher explicitly before
1426     freeing it is often a good idea.
1427    
1428     While stopped (and not pending) the watcher is essentially in the
1429     initialised state, that is, it can be reused, moved, and modified in any way
1430     you wish.
1431    
1432     =back
1433    
1434 root 1.233 =head2 WATCHER PRIORITY MODELS
1435    
1436     Many event loops support I<watcher priorities>, which are usually small
1437     integers that influence the ordering of event callback invocation
1438     between watchers in some way, all else being equal.
1439    
1440     In libev, watcher priorities can be set using C<ev_set_priority>. See its
1441     description for the more technical details such as the actual priority
1442     range.
1443    
1444     There are two common ways in which these priorities are interpreted
1445     by event loops:
1446    
1447     In the more common lock-out model, higher priorities "lock out" invocation
1448     of lower priority watchers, which means as long as higher priority
1449     watchers receive events, lower priority watchers are not being invoked.
1450    
1451     The less common only-for-ordering model uses priorities solely to order
1452     callback invocation within a single event loop iteration: Higher priority
1453     watchers are invoked before lower priority ones, but they all get invoked
1454     before polling for new events.
1455    
1456     Libev uses the second (only-for-ordering) model for all its watchers
1457     except for idle watchers (which use the lock-out model).
1458    
1459     The rationale behind this is that implementing the lock-out model for
1460     watchers is not well supported by most kernel interfaces, and most event
1461     libraries will just poll for the same events again and again as long as
1462     their callbacks have not been executed, which is very inefficient in the
1463     common case of one high-priority watcher locking out a mass of lower
1464     priority ones.
1465    
1466     Static (ordering) priorities are most useful when you have two or more
1467     watchers handling the same resource: a typical usage example is having an
1468     C<ev_io> watcher to receive data, and an associated C<ev_timer> to handle
1469     timeouts. Under load, data might be received while the program handles
1470     other jobs, but since timers normally get invoked first, the timeout
1471     handler will be executed before checking for data. In that case, giving
1472     the timer a lower priority than the I/O watcher ensures that I/O will be
1473     handled first even under adverse conditions (which is usually, but not
1474     always, what you want).
1475    
1476     Since idle watchers use the "lock-out" model, meaning that idle watchers
1477     will only be executed when no watchers of the same or higher priority have
1478     received events, they can be used to implement the "lock-out" model when
1479     required.
1480    
1481     For example, to emulate how many other event libraries handle priorities,
1482     you can associate an C<ev_idle> watcher to each such watcher, and in
1483     the normal watcher callback, you just start the idle watcher. The real
1484     processing is done in the idle watcher callback. This causes libev to
1485 sf-exg 1.298 continuously poll and process kernel event data for the watcher, but when
1486 root 1.233 the lock-out case is known to be rare (which in turn is rare :), this is
1487     workable.
1488    
1489     Usually, however, the lock-out model implemented that way will perform
1490     miserably under the type of load it was designed to handle. In that case,
1491     it might be preferable to stop the real watcher before starting the
1492     idle watcher, so the kernel will not have to process the event in case
1493     the actual processing will be delayed for considerable time.
1494    
1495     Here is an example of an I/O watcher that should run at a strictly lower
1496     priority than the default, and which should only process data when no
1497     other events are pending:
1498    
1499     ev_idle idle; // actual processing watcher
1500     ev_io io; // actual event watcher
1501    
1502     static void
1503     io_cb (EV_P_ ev_io *w, int revents)
1504     {
1505     // stop the I/O watcher, we received the event, but
1506     // are not yet ready to handle it.
1507     ev_io_stop (EV_A_ w);
1508    
1509 root 1.296 // start the idle watcher to handle the actual event.
1510 root 1.233 // it will not be executed as long as other watchers
1511     // with the default priority are receiving events.
1512     ev_idle_start (EV_A_ &idle);
1513     }
1514    
1515     static void
1516 root 1.242 idle_cb (EV_P_ ev_idle *w, int revents)
1517 root 1.233 {
1518     // actual processing
1519     read (STDIN_FILENO, ...);
1520    
1521     // have to start the I/O watcher again, as
1522     // we have handled the event
1523     ev_io_start (EV_A_ &io);
1524     }
1525    
1526     // initialisation
1527     ev_idle_init (&idle, idle_cb);
1528     ev_io_init (&io, io_cb, STDIN_FILENO, EV_READ);
1529     ev_io_start (EV_DEFAULT_ &io);
1530    
1531     In the "real" world, it might also be beneficial to start a timer, so that
1532     low-priority connections can not be locked out forever under load. This
1533     enables your program to keep a lower latency for important connections
1534     during short periods of high load, while not completely locking out less
1535     important ones.
1536    
1537 root 1.1
1538     =head1 WATCHER TYPES
1539    
1540     This section describes each watcher in detail, but will not repeat
1541 root 1.48 information given in the last section. Any initialisation/set macros,
1542     functions and members specific to the watcher type are explained.
1543    
1544     Members are additionally marked with either I<[read-only]>, meaning that,
1545     while the watcher is active, you can look at the member and expect some
1546     sensible content, but you must not modify it (you can modify it while the
1547     watcher is stopped to your heart's content), or I<[read-write]>, which
1548     means you can expect it to have some sensible content while the watcher
1549     is active, but you can also modify it. Modifying it may not do something
1550     sensible or take immediate effect (or do anything at all), but libev will
1551     not crash or malfunction in any way.
1552 root 1.1
1553 root 1.34
1554 root 1.42 =head2 C<ev_io> - is this file descriptor readable or writable?
1555 root 1.1
1556 root 1.4 I/O watchers check whether a file descriptor is readable or writable
1557 root 1.42 in each iteration of the event loop, or, more precisely, when reading
1558     would not block the process and writing would at least be able to write
1559     some data. This behaviour is called level-triggering because you keep
1560     receiving events as long as the condition persists. Remember you can stop
1561     the watcher if you don't want to act on the event and neither want to
1562     receive future events.
1563 root 1.1
1564 root 1.23 In general you can register as many read and/or write event watchers per
1565 root 1.8 fd as you want (as long as you don't confuse yourself). Setting all file
1566     descriptors to non-blocking mode is also usually a good idea (but not
1567     required if you know what you are doing).
1568    
1569 root 1.183 If you cannot use non-blocking mode, then force the use of a
1570     known-to-be-good backend (at the time of this writing, this includes only
1571 root 1.239 C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
1572     descriptors for which non-blocking operation makes no sense (such as
1573 sf-exg 1.298 files) - libev doesn't guarantee any specific behaviour in that case.
1574 root 1.8
1575 root 1.42 Another thing you have to watch out for is that it is quite easy to
1576 root 1.155 receive "spurious" readiness notifications, that is your callback might
1577 root 1.42 be called with C<EV_READ> but a subsequent C<read>(2) will actually block
1578     because there is no data. Not only are some backends known to create a
1579 root 1.161 lot of those (for example Solaris ports), it is very easy to get into
1580 root 1.42 this situation even with a relatively standard program structure. Thus
1581     it is best to always use non-blocking I/O: An extra C<read>(2) returning
1582     C<EAGAIN> is far preferable to a program hanging until some data arrives.
1583    
1584 root 1.183 If you cannot run the fd in non-blocking mode (for example you should
1585     not play around with an Xlib connection), then you have to separately
1586     re-test whether a file descriptor is really ready with a known-to-be good
1587     interface such as poll (fortunately in our Xlib example, Xlib already
1588     does this on its own, so it's quite safe to use). Some people additionally
1589     use C<SIGALRM> and an interval timer, just to be sure you won't block
1590     indefinitely.
1591    
1592     But really, best use non-blocking mode.
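
For reference, switching a descriptor to non-blocking mode is plain
POSIX and needs no libev at all. This is a minimal sketch (the helper
name C<set_nonblocking> is made up); it preserves the other file status
flags instead of overwriting them:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* put fd into non-blocking mode, preserving its other
   file status flags */
static int
set_nonblocking (int fd)
{
  int flags = fcntl (fd, F_GETFL, 0);

  if (flags < 0)
    return -1;

  return fcntl (fd, F_SETFL, flags | O_NONBLOCK);
}
```

With this in place, a C<read> on an empty descriptor returns C<-1> with
C<errno> set to C<EAGAIN> instead of blocking the process.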
1593 root 1.42
1594 root 1.81 =head3 The special problem of disappearing file descriptors
1595    
1596 root 1.94 Some backends (e.g. kqueue, epoll) need to be told about closing a file
1597 root 1.183 descriptor (either due to calling C<close> explicitly or any other means,
1598     such as C<dup2>). The reason is that you register interest in some file
1599 root 1.81 descriptor, but when it goes away, the operating system will silently drop
1600     this interest. If another file descriptor with the same number then is
1601     registered with libev, there is no efficient way to see that this is, in
1602     fact, a different file descriptor.
1603    
1604     To avoid having to explicitly tell libev about such cases, libev follows
1605     the following policy: Each time C<ev_io_set> is being called, libev
1606     will assume that this is potentially a new file descriptor, otherwise
1607     it is assumed that the file descriptor stays the same. That means that
1608     you I<have> to call C<ev_io_set> (or C<ev_io_init>) when you change the
1609     descriptor even if the file descriptor number itself did not change.
1610    
1611     This is how one would do it normally anyway, the important point is that
1612     the libev application should not optimise around libev but should leave
1613     optimisations to libev.
1614    
1615 root 1.95 =head3 The special problem of dup'ed file descriptors
1616 root 1.94
1617     Some backends (e.g. epoll) cannot register events for file descriptors,
1618 root 1.103 but only events for the underlying file descriptions. That means when you
1619 root 1.109 have C<dup ()>'ed file descriptors or weirder constellations, and register
1620     events for them, only one file descriptor might actually receive events.
1621 root 1.94
1622 root 1.103 There is no workaround possible except not registering events
1623     for potentially C<dup ()>'ed file descriptors, or to resort to
1624 root 1.94 C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
1625    
1626     =head3 The special problem of fork
1627    
1628     Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
1629     useless behaviour. Libev fully supports fork, but needs to be told about
1630     it in the child.
1631    
1632     To support fork in your programs, you either have to call
1633     C<ev_default_fork ()> or C<ev_loop_fork ()> after a fork in the child,
1634     enable C<EVFLAG_FORKCHECK>, or resort to C<EVBACKEND_SELECT> or
1635     C<EVBACKEND_POLL>.
1636    
1637 root 1.138 =head3 The special problem of SIGPIPE
1638    
1639 root 1.183 While not really specific to libev, it is easy to forget about C<SIGPIPE>:
1640 root 1.174 when writing to a pipe whose other end has been closed, your program gets
1641 root 1.183 sent a SIGPIPE, which, by default, aborts your program. For most programs
1642 root 1.174 this is sensible behaviour, for daemons, this is usually undesirable.
1643 root 1.138
1644     So when you encounter spurious, unexplained daemon exits, make sure you
1645     ignore SIGPIPE (and maybe make sure you log the exit status of your daemon
1646     somewhere, as that would have given you a big clue).
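
Ignoring C<SIGPIPE> is a one-liner, usually done early in C<main>. The
sketch below (plain POSIX, with a hypothetical helper name) shows the
effect: with the signal ignored, writing to a pipe whose read end was
closed merely fails with C<EPIPE> instead of terminating the process:

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* write one byte to a pipe whose read end has been closed and
   return the resulting errno - with SIGPIPE ignored this is
   EPIPE; with the default disposition the process would die */
static int
write_to_dead_pipe (void)
{
  int fds[2];
  int saved;
  char c = 0;

  signal (SIGPIPE, SIG_IGN); /* the crucial line for daemons */

  if (pipe (fds))
    return -1;

  close (fds[0]); /* no reader anymore */

  errno = 0;
  (void)write (fds[1], &c, 1); /* fails instead of killing us */
  saved = errno;
  close (fds[1]);

  return saved;
}
```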
1647    
1648 root 1.284 =head3 The special problem of accept()ing when you can't
1649    
1650     Many implementations of the POSIX C<accept> function (for example,
1651 sf-exg 1.292 found in post-2004 Linux) have the peculiar behaviour of not removing a
1652 root 1.284 connection from the pending queue in all error cases.
1653    
1654     For example, larger servers often run out of file descriptors (because
1655     of resource limits), causing C<accept> to fail with C<ENFILE> but not
1656     rejecting the connection, leading to libev signalling readiness on
1657     the next iteration again (the connection still exists after all), and
1658     typically causing the program to loop at 100% CPU usage.
1659    
1660     Unfortunately, the set of errors that cause this issue differs between
1661     operating systems. There is usually little the app can do to remedy the
1662     situation, and no thread-safe method of removing the connection to cope
1663     with overload is known (to me).
1664    
1665     One of the easiest ways to handle this situation is to just ignore it
1666     - when the program encounters an overload, it will just loop until the
1667     situation is over. While this is a form of busy waiting, no OS offers an
1668     event-based way to handle this situation, so it's the best one can do.
1669    
1670     A better way to handle the situation is to log any errors other than
1671     C<EAGAIN> and C<EWOULDBLOCK>, making sure not to flood the log with such
1672     messages, and continue as usual, which at least gives the user an idea of
1673     what could be wrong ("raise the ulimit!"). For extra points one could stop
1674     the C<ev_io> watcher on the listening fd "for a while", which reduces CPU
1675     usage.
1676    
1677     If your program is single-threaded, then you could also keep a dummy file
1678     descriptor for overload situations (e.g. by opening F</dev/null>), and
1679     when you run into C<ENFILE> or C<EMFILE>, close it, run C<accept>,
1680     close that fd, and create a new dummy fd. This will gracefully refuse
1681     clients under typical overload conditions.
1682    
1683     The last way to handle it is to simply log the error and C<exit>, as
1684     is often done with C<malloc> failures, but this results in an easy
1685     opportunity for a DoS attack.
1686 root 1.81
1687 root 1.82 =head3 Watcher-Specific Functions
1688    
1689 root 1.1 =over 4
1690    
1691     =item ev_io_init (ev_io *, callback, int fd, int events)
1692    
1693     =item ev_io_set (ev_io *, int fd, int events)
1694    
1695 root 1.42 Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
1696 root 1.183 receive events for and C<events> is either C<EV_READ>, C<EV_WRITE> or
1697     C<EV_READ | EV_WRITE>, to express the desire to receive the given events.
1698 root 1.32
1699 root 1.48 =item int fd [read-only]
1700    
1701     The file descriptor being watched.
1702    
1703     =item int events [read-only]
1704    
1705     The events being watched.
1706    
1707 root 1.1 =back
1708    
1709 root 1.111 =head3 Examples
1710    
1711 root 1.54 Example: Call C<stdin_readable_cb> when STDIN_FILENO has become, well,
1712 root 1.34 readable, but only once. Since it is likely line-buffered, you could
1713 root 1.54 attempt to read a whole line in the callback.
1714 root 1.34
1715 root 1.164 static void
1716 root 1.198 stdin_readable_cb (struct ev_loop *loop, ev_io *w, int revents)
1717 root 1.164 {
1718     ev_io_stop (loop, w);
1719 root 1.183 .. read from stdin here (or from w->fd) and handle any I/O errors
1720 root 1.164 }
1721    
1722     ...
1723     struct ev_loop *loop = ev_default_init (0);
1724 root 1.198 ev_io stdin_readable;
1725 root 1.164 ev_io_init (&stdin_readable, stdin_readable_cb, STDIN_FILENO, EV_READ);
1726     ev_io_start (loop, &stdin_readable);
1727 root 1.310 ev_run (loop, 0);
1728 root 1.34
1729    
1730 root 1.42 =head2 C<ev_timer> - relative and optionally repeating timeouts
1731 root 1.1
1732     Timer watchers are simple relative timers that generate an event after a
1733     given time, and optionally repeating in regular intervals after that.
1734    
1735     The timers are based on real time, that is, if you register an event that
1736 root 1.161 times out after an hour and you reset your system clock to January last
1737 root 1.183 year, it will still time out after (roughly) one hour. "Roughly" because
1738 root 1.28 detecting time jumps is hard, and some inaccuracies are unavoidable (the
1739 root 1.1 monotonic clock option helps a lot here).
1740    
1741 root 1.183 The callback is guaranteed to be invoked only I<after> its timeout has
1742 root 1.240 passed (not I<at>, so on systems with very low-resolution clocks this
1743     might introduce a small delay). If multiple timers become ready during the
1744     same loop iteration then the ones with earlier time-out values are invoked
1745 root 1.248 before ones of the same priority with later time-out values (but this is
1746 root 1.310 no longer true when a callback calls C<ev_run> recursively).
1747 root 1.175
1748 root 1.198 =head3 Be smart about timeouts
1749    
1750 root 1.199 Many real-world problems involve some kind of timeout, usually for error
1751 root 1.198 recovery. A typical example is an HTTP request - if the other side hangs,
1752     you want to raise some error after a while.
1753    
1754 root 1.199 What follows are some ways to handle this problem, from obvious and
1755     inefficient to smart and efficient.
1756 root 1.198
1757 root 1.199 In the following, a 60 second activity timeout is assumed - a timeout that
1758     gets reset to 60 seconds each time there is activity (e.g. each time some
1759     data or other life sign was received).
1760 root 1.198
1761     =over 4
1762    
1763 root 1.199 =item 1. Use a timer and stop, reinitialise and start it on activity.
1764 root 1.198
1765     This is the most obvious, but not the most simple way: In the beginning,
1766     start the watcher:
1767    
1768     ev_timer_init (timer, callback, 60., 0.);
1769     ev_timer_start (loop, timer);
1770    
1771 root 1.199 Then, each time there is some activity, C<ev_timer_stop> it, initialise it
1772     and start it again:
1773 root 1.198
1774     ev_timer_stop (loop, timer);
1775     ev_timer_set (timer, 60., 0.);
1776     ev_timer_start (loop, timer);
1777    
1778 root 1.199 This is relatively simple to implement, but means that each time there is
1779     some activity, libev will first have to remove the timer from its internal
1780     data structure and then add it again. Libev tries to be fast, but it's
1781     still not a constant-time operation.
1782 root 1.198
1783     =item 2. Use a timer and re-start it with C<ev_timer_again> on activity.
1784    
1785     This is the easiest way, and involves using C<ev_timer_again> instead of
1786     C<ev_timer_start>.
1787    
1788 root 1.199 To implement this, configure an C<ev_timer> with a C<repeat> value
1789     of C<60> and then call C<ev_timer_again> at start and each time you
1790     successfully read or write some data. If you go into an idle state where
1791     you do not expect data to travel on the socket, you can C<ev_timer_stop>
1792     the timer, and C<ev_timer_again> will automatically restart it if need be.
1793    
1794     That means you can ignore both the C<ev_timer_start> function and the
1795     C<after> argument to C<ev_timer_set>, and only ever use the C<repeat>
1796     member and C<ev_timer_again>.
1797 root 1.198
1798     At start:
1799    
1800 root 1.243 ev_init (timer, callback);
1801 root 1.199 timer->repeat = 60.;
1802 root 1.198 ev_timer_again (loop, timer);
1803    
1804 root 1.199 Each time there is some activity:
1805 root 1.198
1806     ev_timer_again (loop, timer);
1807    
1808 root 1.199 It is even possible to change the time-out on the fly, regardless of
1809     whether the watcher is active or not:
1810 root 1.198
1811     timer->repeat = 30.;
1812     ev_timer_again (loop, timer);
1813    
1814     This is slightly more efficient than stopping/starting the timer each time
1815     you want to modify its timeout value, as libev does not have to completely
1816 root 1.199 remove and re-insert the timer from/into its internal data structure.
1817    
1818     It is, however, even simpler than the "obvious" way to do it.
1819 root 1.198
1820     =item 3. Let the timer time out, but then re-arm it as required.
1821    
1822     This method is more tricky, but usually most efficient: Most timeouts are
1823 root 1.199 relatively long compared to the intervals between other activity - in
1824     our example, within 60 seconds, there are usually many I/O events with
1825     associated activity resets.
1826 root 1.198
1827     In this case, it would be more efficient to leave the C<ev_timer> alone,
1828     but remember the time of last activity, and check for a real timeout only
1829     within the callback:
1830    
1831     ev_tstamp last_activity; // time of last activity
1832    
1833     static void
1834     callback (EV_P_ ev_timer *w, int revents)
1835     {
1836 root 1.199 ev_tstamp now = ev_now (EV_A);
1837 root 1.198 ev_tstamp timeout = last_activity + 60.;
1838    
1839 root 1.199 // if last_activity + 60. is older than now, we did time out
1840 root 1.198 if (timeout < now)
1841     {
1842 sf-exg 1.298 // timeout occurred, take action
1843 root 1.198 }
1844     else
1845     {
1846     // callback was invoked, but there was some activity, re-arm
1847 root 1.199 // the watcher to fire in last_activity + 60, which is
1848     // guaranteed to be in the future, so "again" is positive:
1849 root 1.216 w->repeat = timeout - now;
1850 root 1.198 ev_timer_again (EV_A_ w);
1851     }
1852     }
1853    
1854 root 1.199 To summarise the callback: first calculate the real timeout (defined
1855     as "60 seconds after the last activity"), then check if that time has
1856     been reached, which means something I<did>, in fact, time out. Otherwise
1857     the callback was invoked too early (C<timeout> is in the future), so
1858     re-schedule the timer to fire at that future time, to see if maybe we have
1859     a timeout then.
1860 root 1.198
1861     Note how C<ev_timer_again> is used, taking advantage of the
1862     C<ev_timer_again> optimisation when the timer is already running.
1863    
1864 root 1.199 This scheme causes more callback invocations (about one every 60 seconds
1865     minus half the average time between activity), but virtually no calls to
1866     libev to change the timeout.
1867    
1868     To start the timer, simply initialise the watcher and set C<last_activity>
1869     to the current time (meaning we just have some activity :), then call the
1870     callback, which will "do the right thing" and start the timer:
1871 root 1.198
1872 root 1.243 ev_init (timer, callback);
1873 root 1.198 last_activity = ev_now (loop);
1874 root 1.289 callback (loop, timer, EV_TIMER);
1875 root 1.198
1876 root 1.199 And when there is some activity, simply store the current time in
1877     C<last_activity>, no libev calls at all:
1878 root 1.198
1879 root 1.295 last_activity = ev_now (loop);
1880 root 1.198
1881     This technique is slightly more complex, but in most cases where the
1882     time-out is unlikely to be triggered, much more efficient.
1883    
1884 root 1.199 Changing the timeout is trivial as well (if it isn't hard-coded in the
1885     callback :) - just change the timeout and invoke the callback, which will
1886     fix things for you.
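
The decision logic in the callback is pure arithmetic and can be tested
in isolation. In this sketch (no libev; C<check_timeout> is a made-up
helper and plain C<double>s stand in for C<ev_tstamp>) a return value of
C<1> means a real timeout occurred, otherwise C<*repeat> receives the
positive value to re-arm the watcher with:

```c
#include <assert.h>

/* decide whether a 60 second activity timeout has expired;
   returns 1 on a real timeout, otherwise stores the positive
   repeat value to re-arm the watcher with */
static int
check_timeout (double last_activity, double now, double *repeat)
{
  double timeout = last_activity + 60.;

  if (timeout < now)
    return 1; /* timed out for real */

  /* guaranteed to be positive, as in the callback above */
  *repeat = timeout - now;
  return 0;
}
```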
1887    
1888 root 1.200 =item 4. Wee, just use a doubly-linked list for your timeouts.
1889 root 1.199
1890 root 1.200 If there is not one request, but many thousands (millions...), all
1891     employing some kind of timeout with the same timeout value, then one can
1892     do even better:
1893 root 1.199
1894     When starting the timeout, calculate the timeout value and put the timeout
1895     at the I<end> of the list.
1896    
Then use an C<ev_timer> to fire when the timeout at the I<beginning> of
the list is expected to fire (for example, using technique #3).
1899    
1900     When there is some activity, remove the timer from the list, recalculate
1901     the timeout, append it to the end of the list again, and make sure to
1902     update the C<ev_timer> if it was taken from the beginning of the list.
1903    
1904     This way, one can manage an unlimited number of timeouts in O(1) time for
1905     starting, stopping and updating the timers, at the expense of a major
1906     complication, and having to use a constant timeout. The constant timeout
1907     ensures that the list stays sorted.
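As a sketch of the list manipulation only (the single C<ev_timer> and its
callback are left out, and all names here are made up for illustration):

```c
#include <stddef.h>

/* Minimal sketch of method #4: a doubly-linked list of timeouts sharing
   one constant timeout value. Not libev API - illustrative only. */

typedef struct node
{
  struct node *prev, *next;
  double at; /* absolute time this entry times out */
} node;

typedef struct
{
  node head; /* sentinel: head.next is the earliest timeout */
} tlist;

static void
tlist_init (tlist *l)
{
  l->head.prev = l->head.next = &l->head;
}

/* O(1): unlink an entry on activity or completion */
static void
tlist_remove (node *n)
{
  n->prev->next = n->next;
  n->next->prev = n->prev;
}

/* O(1): append at the end - with a constant timeout this keeps the
   list sorted by expiry time */
static void
tlist_touch (tlist *l, node *n, double now, double timeout)
{
  n->at = now + timeout;
  n->prev = l->head.prev;
  n->next = &l->head;
  n->prev->next = n;
  n->next->prev = n;
}

/* the single ev_timer would be (re)scheduled for this time,
   or stopped when the list is empty (-1. here) */
static double
tlist_earliest (tlist *l)
{
  return l->head.next == &l->head ? -1. : l->head.next->at;
}
```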
1908    
1909 root 1.198 =back
1910    
So which method is the best?
1912 root 1.199
Method #2 is a simple no-brain-required solution that is adequate in most
situations. Method #3 requires a bit more thinking, but handles many cases
better, and isn't very complicated either. In most cases, choosing either
one is fine, with #3 being better in typical situations.
1917 root 1.199
Method #1 is almost always a bad idea, and buys you nothing. Method #4 is
rather complicated, but extremely efficient, something that really pays
off after the first million or so active timers, i.e. it's usually
overkill :)
1922    
1923 root 1.175 =head3 The special problem of time updates
1924    
1925 root 1.176 Establishing the current time is a costly operation (it usually takes at
1926     least two system calls): EV therefore updates its idea of the current
1927 root 1.310 time only before and after C<ev_run> collects new events, which causes a
1928 root 1.183 growing difference between C<ev_now ()> and C<ev_time ()> when handling
1929     lots of events in one iteration.
1930 root 1.175
1931 root 1.9 The relative timeouts are calculated relative to the C<ev_now ()>
1932     time. This is usually the right thing as this timestamp refers to the time
1933 root 1.28 of the event triggering whatever timeout you are modifying/starting. If
1934 root 1.175 you suspect event processing to be delayed and you I<need> to base the
1935     timeout on the current time, use something like this to adjust for this:
1936 root 1.9
1937     ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);
1938    
1939 root 1.177 If the event loop is suspended for a long time, you can also force an
1940 root 1.176 update of the time returned by C<ev_now ()> by calling C<ev_now_update
1941     ()>.
1942    
1943 root 1.257 =head3 The special problems of suspended animation
1944    
1945     When you leave the server world it is quite customary to hit machines that
1946     can suspend/hibernate - what happens to the clocks during such a suspend?
1947    
Some quick tests made with a Linux 2.6.28 kernel indicate that a suspend
freezes all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>)
continue to run until the system is suspended, but do not advance while it
is suspended. That means that, on resume, it will be as if the program was
frozen for a few seconds, but the suspend time will not be counted towards
C<ev_timer>s when a monotonic clock source is used. The real time clock
advances as expected, but if it is used as the sole clock source, then a
long suspend will be detected as a time jump by libev, and timers will be
adjusted accordingly.
1957    
I would not be surprised to see different behaviour between different
operating systems, OS versions or even different hardware.
1960    
The other form of suspend (job control, or sending a SIGSTOP) will see a
time jump in the monotonic clocks and the realtime clock. If the program
is suspended for a very long time, and monotonic clock sources are in use,
then you can expect C<ev_timer>s to expire as the full suspension time
will be counted towards the timers. When no monotonic clock source is in
use, then libev will again assume a time jump and adjust accordingly.
1967    
1968     It might be beneficial for this latter case to call C<ev_suspend>
1969     and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
1970     deterministic behaviour in this case (you can do nothing against
1971     C<SIGSTOP>).
1972    
1973 root 1.82 =head3 Watcher-Specific Functions and Data Members
1974    
1975 root 1.1 =over 4
1976    
1977     =item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)
1978    
1979     =item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)
1980    
1981 root 1.157 Configure the timer to trigger after C<after> seconds. If C<repeat>
1982     is C<0.>, then it will automatically be stopped once the timeout is
1983     reached. If it is positive, then the timer will automatically be
1984     configured to trigger again C<repeat> seconds later, again, and again,
1985     until stopped manually.
1986    
The timer itself will do its best to avoid drift, that is, if
1988     you configure a timer to trigger every 10 seconds, then it will normally
1989     trigger at exactly 10 second intervals. If, however, your program cannot
1990     keep up with the timer (because it takes longer than those 10 seconds to
1991     do stuff) the timer will not fire more than once per event loop iteration.
1992 root 1.1
1993 root 1.132 =item ev_timer_again (loop, ev_timer *)
1994 root 1.1
1995     This will act as if the timer timed out and restart it again if it is
1996     repeating. The exact semantics are:
1997    
1998 root 1.61 If the timer is pending, its pending status is cleared.
1999 root 1.1
2000 root 1.161 If the timer is started but non-repeating, stop it (as if it timed out).
2001 root 1.61
2002     If the timer is repeating, either start it if necessary (with the
2003     C<repeat> value), or reset the running timer to the C<repeat> value.
2004 root 1.1
This sounds a bit complicated; see L<Be smart about timeouts>, above, for
a usage example.
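The three cases above can be sketched as a simplified model (illustrative
only - this is not the actual implementation):

```c
/* Simplified model of the ev_timer_again decision logic - just the
   three documented cases, with made-up member names. */
typedef struct
{
  int active;    /* watcher is started */
  int pending;   /* watcher has an unprocessed event */
  double repeat; /* the repeat value */
  double at;     /* absolute expiry time (model only) */
} model_timer;

static void
model_timer_again (model_timer *w, double now)
{
  w->pending = 0; /* case 1: clear any pending status */

  if (w->repeat > 0.)
    {
      w->at = now + w->repeat; /* case 3: start, or reset to repeat */
      w->active = 1;
    }
  else
    w->active = 0; /* case 2: non-repeating - stop as if it timed out */
}
```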
2007 root 1.183
2008 root 1.275 =item ev_tstamp ev_timer_remaining (loop, ev_timer *)
2009 root 1.258
2010     Returns the remaining time until a timer fires. If the timer is active,
2011     then this time is relative to the current event loop time, otherwise it's
2012     the timeout value currently configured.
2013    
2014     That is, after an C<ev_timer_set (w, 5, 7)>, C<ev_timer_remaining> returns
2015 sf-exg 1.280 C<5>. When the timer is started and one second passes, C<ev_timer_remaining>
2016 root 1.258 will return C<4>. When the timer expires and is restarted, it will return
2017     roughly C<7> (likely slightly less as callback invocation takes some time,
2018     too), and so on.
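The behaviour can be modelled by a one-liner (a sketch of the documented
semantics with made-up parameter names, not libev code):

```c
/* Sketch of the documented ev_timer_remaining semantics: relative to
   the current loop time when active, the configured timeout otherwise. */
static double
remaining (int active, double at /* absolute expiry time */,
           double now, double after /* configured timeout */)
{
  return active ? at - now : after;
}
```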
2019    
2020 root 1.48 =item ev_tstamp repeat [read-write]
2021    
2022     The current C<repeat> value. Will be used each time the watcher times out
2023 root 1.183 or C<ev_timer_again> is called, and determines the next timeout (if any),
2024 root 1.48 which is also when any modifications are taken into account.
2025 root 1.1
2026     =back
2027    
2028 root 1.111 =head3 Examples
2029    
2030 root 1.54 Example: Create a timer that fires after 60 seconds.
2031 root 1.34
2032 root 1.164 static void
2033 root 1.198 one_minute_cb (struct ev_loop *loop, ev_timer *w, int revents)
2034 root 1.164 {
2035     .. one minute over, w is actually stopped right here
2036     }
2037    
2038 root 1.198 ev_timer mytimer;
2039 root 1.164 ev_timer_init (&mytimer, one_minute_cb, 60., 0.);
2040     ev_timer_start (loop, &mytimer);
2041 root 1.34
2042 root 1.54 Example: Create a timeout timer that times out after 10 seconds of
2043 root 1.34 inactivity.
2044    
2045 root 1.164 static void
2046 root 1.198 timeout_cb (struct ev_loop *loop, ev_timer *w, int revents)
2047 root 1.164 {
2048     .. ten seconds without any activity
2049     }
2050    
2051 root 1.198 ev_timer mytimer;
2052 root 1.164 ev_timer_init (&mytimer, timeout_cb, 0., 10.); /* note, only repeat used */
   ev_timer_again (loop, &mytimer); /* start timer */
2054 root 1.310 ev_run (loop, 0);
2055 root 1.164
2056     // and in some piece of code that gets executed on any "activity":
2057     // reset the timeout to start ticking again at 10 seconds
   ev_timer_again (loop, &mytimer);
2059 root 1.34
2060    
2061 root 1.42 =head2 C<ev_periodic> - to cron or not to cron?
2062 root 1.1
2063     Periodic watchers are also timers of a kind, but they are very versatile
2064     (and unfortunately a bit complex).
2065    
2066 root 1.227 Unlike C<ev_timer>, periodic watchers are not based on real time (or
2067     relative time, the physical time that passes) but on wall clock time
(absolute time, the thing you can read on your calendar or clock). The
2069     difference is that wall clock time can run faster or slower than real
2070     time, and time jumps are not uncommon (e.g. when you adjust your
2071     wrist-watch).
2072    
2073     You can tell a periodic watcher to trigger after some specific point
2074     in time: for example, if you tell a periodic watcher to trigger "in 10
2075     seconds" (by specifying e.g. C<ev_now () + 10.>, that is, an absolute time
2076     not a delay) and then reset your system clock to January of the previous
2077     year, then it will take a year or more to trigger the event (unlike an
2078     C<ev_timer>, which would still trigger roughly 10 seconds after starting
2079     it, as it uses a relative timeout).
2080    
2081     C<ev_periodic> watchers can also be used to implement vastly more complex
2082     timers, such as triggering an event on each "midnight, local time", or
2083     other complicated rules. This cannot be done with C<ev_timer> watchers, as
2084     those cannot react to time jumps.
2085 root 1.1
2086 root 1.161 As with timers, the callback is guaranteed to be invoked only when the
2087 root 1.230 point in time where it is supposed to trigger has passed. If multiple
2088     timers become ready during the same loop iteration then the ones with
2089     earlier time-out values are invoked before ones with later time-out values
2090 root 1.310 (but this is no longer true when a callback calls C<ev_run> recursively).
2091 root 1.28
2092 root 1.82 =head3 Watcher-Specific Functions and Data Members
2093    
2094 root 1.1 =over 4
2095    
2096 root 1.227 =item ev_periodic_init (ev_periodic *, callback, ev_tstamp offset, ev_tstamp interval, reschedule_cb)
2097 root 1.1
2098 root 1.227 =item ev_periodic_set (ev_periodic *, ev_tstamp offset, ev_tstamp interval, reschedule_cb)
2099 root 1.1
2100 root 1.227 Lots of arguments, let's sort it out... There are basically three modes of
2101 root 1.183 operation, and we will explain them from simplest to most complex:
2102 root 1.1
2103     =over 4
2104    
2105 root 1.227 =item * absolute timer (offset = absolute time, interval = 0, reschedule_cb = 0)
2106 root 1.1
2107 root 1.161 In this configuration the watcher triggers an event after the wall clock
2108 root 1.227 time C<offset> has passed. It will not repeat and will not adjust when a
2109     time jump occurs, that is, if it is to be run at January 1st 2011 then it
2110     will be stopped and invoked when the system clock reaches or surpasses
2111     this point in time.
2112 root 1.1
2113 root 1.227 =item * repeating interval timer (offset = offset within interval, interval > 0, reschedule_cb = 0)
2114 root 1.1
2115     In this mode the watcher will always be scheduled to time out at the next
2116 root 1.227 C<offset + N * interval> time (for some integer N, which can also be
2117     negative) and then repeat, regardless of any time jumps. The C<offset>
2118     argument is merely an offset into the C<interval> periods.
2119 root 1.1
2120 root 1.183 This can be used to create timers that do not drift with respect to the
2121 root 1.227 system clock, for example, here is an C<ev_periodic> that triggers each
2122     hour, on the hour (with respect to UTC):
2123 root 1.1
2124     ev_periodic_set (&periodic, 0., 3600., 0);
2125    
2126     This doesn't mean there will always be 3600 seconds in between triggers,
2127 root 1.161 but only that the callback will be called when the system time shows a
2128 root 1.12 full hour (UTC), or more correctly, when the system time is evenly divisible
2129 root 1.1 by 3600.
2130    
2131     Another way to think about it (for the mathematically inclined) is that
2132 root 1.10 C<ev_periodic> will try to run the callback in this mode at the next possible
2133 root 1.227 time where C<time = offset (mod interval)>, regardless of any time jumps.
2134 root 1.1
2135 root 1.227 For numerical stability it is preferable that the C<offset> value is near
2136 root 1.78 C<ev_now ()> (the current time), but there is no range requirement for
2137 root 1.157 this value, and in fact is often specified as zero.
2138 root 1.78
Note also that there is an upper limit to how often a timer can fire (CPU
speed for example), so if C<interval> is very small then timing stability
will of course deteriorate. Libev itself tries to be exact to about one
millisecond (if the OS supports it and the machine is fast enough).
2143    
2144 root 1.227 =item * manual reschedule mode (offset ignored, interval ignored, reschedule_cb = callback)
2145 root 1.1
2146 root 1.227 In this mode the values for C<interval> and C<offset> are both being
2147 root 1.1 ignored. Instead, each time the periodic watcher gets scheduled, the
2148     reschedule callback will be called with the watcher as first, and the
2149     current time as second argument.
2150    
2151 root 1.227 NOTE: I<This callback MUST NOT stop or destroy any periodic watcher, ever,
2152     or make ANY other event loop modifications whatsoever, unless explicitly
2153     allowed by documentation here>.
2154 root 1.1
2155 root 1.157 If you need to stop it, return C<now + 1e30> (or so, fudge fudge) and stop
2156     it afterwards (e.g. by starting an C<ev_prepare> watcher, which is the
2157     only event loop modification you are allowed to do).
2158    
2159 root 1.198 The callback prototype is C<ev_tstamp (*reschedule_cb)(ev_periodic
2160 root 1.157 *w, ev_tstamp now)>, e.g.:
2161 root 1.1
2162 root 1.198 static ev_tstamp
2163     my_rescheduler (ev_periodic *w, ev_tstamp now)
2164 root 1.1 {
2165     return now + 60.;
2166     }
2167    
It must return the next time to trigger, based on the passed time value
(that is, the lowest time value larger than the second argument). It
2170     will usually be called just before the callback will be triggered, but
2171     might be called at other times, too.
2172    
2173 root 1.157 NOTE: I<< This callback must always return a time that is higher than or
2174     equal to the passed C<now> value >>.
2175 root 1.18
2176 root 1.1 This can be used to create very complex timers, such as a timer that
2177 root 1.157 triggers on "next midnight, local time". To do this, you would calculate the
2178 root 1.19 next midnight after C<now> and return the timestamp value for this. How
2179     you do this is, again, up to you (but it is not trivial, which is the main
2180     reason I omitted it as an example).
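That said, a rough sketch of such a "next midnight, local time"
calculation using only the C library might look like this (error handling
and exotic DST corner cases are glossed over; this is an illustration, not
a recommendation):

```c
#include <time.h>

/* Return the next local midnight strictly after "now". mktime
   normalises the broken-down time, which handles month/year rollover
   and lets the libc resolve DST (tm_isdst = -1). */
static time_t
next_midnight (time_t now)
{
  struct tm tm = *localtime (&now);

  tm.tm_sec = tm.tm_min = tm.tm_hour = 0;
  tm.tm_mday += 1;  /* tomorrow's date, normalised by mktime */
  tm.tm_isdst = -1; /* let mktime figure out DST */

  return mktime (&tm);
}
```

A reschedule callback would convert this C<time_t> back to an
C<ev_tstamp> and return it.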
2181 root 1.1
2182     =back
2183    
2184     =item ev_periodic_again (loop, ev_periodic *)
2185    
Simply stops and restarts the periodic watcher. This is only useful
2187     when you changed some parameters or the reschedule callback would return
2188     a different time than the last time it was called (e.g. in a crond like
2189     program when the crontabs have changed).
2190    
2191 root 1.149 =item ev_tstamp ev_periodic_at (ev_periodic *)
2192    
2193 root 1.227 When active, returns the absolute time that the watcher is supposed
2194     to trigger next. This is not the same as the C<offset> argument to
2195     C<ev_periodic_set>, but indeed works even in interval and manual
2196     rescheduling modes.
2197 root 1.149
2198 root 1.78 =item ev_tstamp offset [read-write]
2199    
2200     When repeating, this contains the offset value, otherwise this is the
2201 root 1.227 absolute point in time (the C<offset> value passed to C<ev_periodic_set>,
2202     although libev might modify this value for better numerical stability).
2203 root 1.78
2204     Can be modified any time, but changes only take effect when the periodic
2205     timer fires or C<ev_periodic_again> is being called.
2206    
2207 root 1.48 =item ev_tstamp interval [read-write]
2208    
2209     The current interval value. Can be modified any time, but changes only
2210     take effect when the periodic timer fires or C<ev_periodic_again> is being
2211     called.
2212    
2213 root 1.198 =item ev_tstamp (*reschedule_cb)(ev_periodic *w, ev_tstamp now) [read-write]
2214 root 1.48
2215     The current reschedule callback, or C<0>, if this functionality is
2216     switched off. Can be changed any time, but changes only take effect when
2217     the periodic timer fires or C<ev_periodic_again> is being called.
2218    
2219 root 1.1 =back
2220    
2221 root 1.111 =head3 Examples
2222    
2223 root 1.54 Example: Call a callback every hour, or, more precisely, whenever the
2224 root 1.183 system time is divisible by 3600. The callback invocation times have
2225 root 1.161 potentially a lot of jitter, but good long-term stability.
2226 root 1.34
2227 root 1.164 static void
2228 root 1.301 clock_cb (struct ev_loop *loop, ev_periodic *w, int revents)
2229 root 1.164 {
     ... it's now a full hour (UTC, or TAI or whatever your clock follows)
2231     }
2232    
2233 root 1.198 ev_periodic hourly_tick;
2234 root 1.164 ev_periodic_init (&hourly_tick, clock_cb, 0., 3600., 0);
2235     ev_periodic_start (loop, &hourly_tick);
2236 root 1.34
2237 root 1.54 Example: The same as above, but use a reschedule callback to do it:
2238 root 1.34
2239 root 1.164 #include <math.h>
2240 root 1.34
2241 root 1.164 static ev_tstamp
2242 root 1.198 my_scheduler_cb (ev_periodic *w, ev_tstamp now)
2243 root 1.164 {
2244 root 1.183 return now + (3600. - fmod (now, 3600.));
2245 root 1.164 }
2246 root 1.34
2247 root 1.164 ev_periodic_init (&hourly_tick, clock_cb, 0., 0., my_scheduler_cb);
2248 root 1.34
2249 root 1.54 Example: Call a callback every hour, starting now:
2250 root 1.34
2251 root 1.198 ev_periodic hourly_tick;
2252 root 1.164 ev_periodic_init (&hourly_tick, clock_cb,
2253     fmod (ev_now (loop), 3600.), 3600., 0);
2254     ev_periodic_start (loop, &hourly_tick);
2255 root 1.34
2256    
2257 root 1.42 =head2 C<ev_signal> - signal me when a signal gets signalled!
2258 root 1.1
2259     Signal watchers will trigger an event when the process receives a specific
2260     signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
2262 root 1.1 normal event processing, like any other event.
2263    
2264 root 1.260 If you want signals to be delivered truly asynchronously, just use
2265     C<sigaction> as you would do without libev and forget about sharing
2266     the signal. You can even use C<ev_async> from a signal handler to
2267     synchronously wake up an event loop.
2268    
2269     You can configure as many watchers as you like for the same signal, but
2270     only within the same loop, i.e. you can watch for C<SIGINT> in your
2271     default loop and for C<SIGIO> in another loop, but you cannot watch for
2272     C<SIGINT> in both the default loop and another loop at the same time. At
2273     the moment, C<SIGCHLD> is permanently tied to the default loop.
2274    
Only when the first watcher gets started will libev actually register
something with the kernel (thus it coexists with your own signal handlers
as long as you don't register any with libev for the same signal).
2278 root 1.259
2279 root 1.135 If possible and supported, libev will install its handlers with
2280 root 1.259 C<SA_RESTART> (or equivalent) behaviour enabled, so system calls should
2281     not be unduly interrupted. If you have a problem with system calls getting
2282     interrupted by signals you can block all signals in an C<ev_check> watcher
2283     and unblock them in an C<ev_prepare> watcher.
2284 root 1.135
2285 root 1.277 =head3 The special problem of inheritance over fork/execve/pthread_create
2286 root 1.265
2287     Both the signal mask (C<sigprocmask>) and the signal disposition
2288     (C<sigaction>) are unspecified after starting a signal watcher (and after
2289     stopping it again), that is, libev might or might not block the signal,
2290     and might or might not set or restore the installed signal handler.
2291    
2292     While this does not matter for the signal disposition (libev never
2293     sets signals to C<SIG_IGN>, so handlers will be reset to C<SIG_DFL> on
2294     C<execve>), this matters for the signal mask: many programs do not expect
2295 root 1.266 certain signals to be blocked.
2296 root 1.265
2297     This means that before calling C<exec> (from the child) you should reset
2298     the signal mask to whatever "default" you expect (all clear is a good
2299     choice usually).
2300    
2301 root 1.267 The simplest way to ensure that the signal mask is reset in the child is
2302     to install a fork handler with C<pthread_atfork> that resets it. That will
2303     catch fork calls done by libraries (such as the libc) as well.
2304    
2305 root 1.277 In current versions of libev, the signal will not be blocked indefinitely
2306     unless you use the C<signalfd> API (C<EV_SIGNALFD>). While this reduces
2307     the window of opportunity for problems, it will not go away, as libev
2308     I<has> to modify the signal mask, at least temporarily.
2309    
2310 root 1.278 So I can't stress this enough: I<If you do not reset your signal mask when
2311     you expect it to be empty, you have a race condition in your code>. This
2312     is not a libev-specific thing, this is true for most event libraries.
2313 root 1.266
2314 root 1.82 =head3 Watcher-Specific Functions and Data Members
2315    
2316 root 1.1 =over 4
2317    
2318     =item ev_signal_init (ev_signal *, callback, int signum)
2319    
2320     =item ev_signal_set (ev_signal *, int signum)
2321    
2322     Configures the watcher to trigger on the given signal number (usually one
2323     of the C<SIGxxx> constants).
2324    
2325 root 1.48 =item int signum [read-only]
2326    
2327     The signal the watcher watches out for.
2328    
2329 root 1.1 =back
2330    
2331 root 1.132 =head3 Examples
2332    
2333 root 1.188 Example: Try to exit cleanly on SIGINT.
2334 root 1.132
2335 root 1.164 static void
2336 root 1.198 sigint_cb (struct ev_loop *loop, ev_signal *w, int revents)
2337 root 1.164 {
2338 root 1.310 ev_break (loop, EVBREAK_ALL);
2339 root 1.164 }
2340    
2341 root 1.198 ev_signal signal_watcher;
2342 root 1.164 ev_signal_init (&signal_watcher, sigint_cb, SIGINT);
2343 root 1.188 ev_signal_start (loop, &signal_watcher);
2344 root 1.132
2345 root 1.35
2346 root 1.42 =head2 C<ev_child> - watch out for process status changes
2347 root 1.1
2348     Child watchers trigger when your process receives a SIGCHLD in response to
2349 root 1.183 some child status changes (most typically when a child of yours dies or
2350     exits). It is permissible to install a child watcher I<after> the child
2351     has been forked (which implies it might have already exited), as long
2352     as the event loop isn't entered (or is continued from a watcher), i.e.,
2353     forking and then immediately registering a watcher for the child is fine,
2354 root 1.244 but forking and registering a watcher a few event loop iterations later or
2355     in the next callback invocation is not.
2356 root 1.134
2357     Only the default event loop is capable of handling signals, and therefore
2358 root 1.161 you can only register child watchers in the default event loop.
2359 root 1.134
2360 root 1.248 Due to some design glitches inside libev, child watchers will always be
2361 root 1.249 handled at maximum priority (their priority is set to C<EV_MAXPRI> by
libev).
2363 root 1.248
2364 root 1.134 =head3 Process Interaction
2365    
2366     Libev grabs C<SIGCHLD> as soon as the default event loop is
2367 root 1.259 initialised. This is necessary to guarantee proper behaviour even if the
2368     first child watcher is started after the child exits. The occurrence
2369 root 1.134 of C<SIGCHLD> is recorded asynchronously, but child reaping is done
2370     synchronously as part of the event loop processing. Libev always reaps all
2371     children, even ones not watched.
2372    
2373     =head3 Overriding the Built-In Processing
2374    
2375     Libev offers no special support for overriding the built-in child
2376     processing, but if your application collides with libev's default child
2377     handler, you can override it easily by installing your own handler for
2378     C<SIGCHLD> after initialising the default loop, and making sure the
2379     default loop never gets destroyed. You are encouraged, however, to use an
2380     event-based approach to child reaping and thus use libev's support for
2381     that, so other libev users can use C<ev_child> watchers freely.
2382 root 1.1
2383 root 1.173 =head3 Stopping the Child Watcher
2384    
2385     Currently, the child watcher never gets stopped, even when the
2386     child terminates, so normally one needs to stop the watcher in the
2387     callback. Future versions of libev might stop the watcher automatically
2388 root 1.259 when a child exit is detected (calling C<ev_child_stop> twice is not a
2389     problem).
2390 root 1.173
2391 root 1.82 =head3 Watcher-Specific Functions and Data Members
2392    
2393 root 1.1 =over 4
2394    
2395 root 1.120 =item ev_child_init (ev_child *, callback, int pid, int trace)
2396 root 1.1
2397 root 1.120 =item ev_child_set (ev_child *, int pid, int trace)
2398 root 1.1
2399     Configures the watcher to wait for status changes of process C<pid> (or
2400     I<any> process if C<pid> is specified as C<0>). The callback can look
2401     at the C<rstatus> member of the C<ev_child> watcher structure to see
the status word (use the macros from C<sys/wait.h> and see your system's
2403     C<waitpid> documentation). The C<rpid> member contains the pid of the
2404 root 1.120 process causing the status change. C<trace> must be either C<0> (only
2405     activate the watcher when the process terminates) or C<1> (additionally
2406     activate the watcher when the process is stopped or continued).
2407 root 1.1
2408 root 1.48 =item int pid [read-only]
2409    
2410     The process id this watcher watches out for, or C<0>, meaning any process id.
2411    
2412     =item int rpid [read-write]
2413    
2414     The process id that detected a status change.
2415    
2416     =item int rstatus [read-write]
2417    
The process exit/trace status caused by C<rpid> (see your system's
C<waitpid> and C<sys/wait.h> documentation for details).
2420    
2421 root 1.1 =back
2422    
2423 root 1.134 =head3 Examples
2424    
2425     Example: C<fork()> a new process and install a child handler to wait for
2426     its completion.
2427    
2428 root 1.164 ev_child cw;
2429    
2430     static void
2431 root 1.198 child_cb (EV_P_ ev_child *w, int revents)
2432 root 1.164 {
2433     ev_child_stop (EV_A_ w);
2434     printf ("process %d exited with status %x\n", w->rpid, w->rstatus);
2435     }
2436    
2437     pid_t pid = fork ();
2438 root 1.134
2439 root 1.164 if (pid < 0)
2440     // error
2441     else if (pid == 0)
2442     {
2443     // the forked child executes here
2444     exit (1);
2445     }
2446     else
2447     {
2448     ev_child_init (&cw, child_cb, pid, 0);
2449     ev_child_start (EV_DEFAULT_ &cw);
2450     }
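The raw C<rstatus> word is normally decoded with the C<sys/wait.h>
macros; as a sketch (a hypothetical helper, not part of libev):

```c
#include <stdio.h>
#include <sys/wait.h>

/* Print a human-readable version of a waitpid-style status word,
   as found in w->rstatus, using the standard POSIX macros. */
static void
print_status (int pid, int status)
{
  if (WIFEXITED (status))
    printf ("process %d exited with status %d\n", pid, WEXITSTATUS (status));
  else if (WIFSIGNALED (status))
    printf ("process %d killed by signal %d\n", pid, WTERMSIG (status));
  else if (WIFSTOPPED (status))
    printf ("process %d stopped by signal %d\n", pid, WSTOPSIG (status));
}
```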
2451 root 1.134
2452 root 1.34
2453 root 1.48 =head2 C<ev_stat> - did the file attributes just change?
2454    
2455 root 1.161 This watches a file system path for attribute changes. That is, it calls
2456 root 1.207 C<stat> on that path in regular intervals (or when the OS says it changed)
2457     and sees if it changed compared to the last time, invoking the callback if
2458     it did.
2459 root 1.48
2460     The path does not need to exist: changing from "path exists" to "path does
2461 root 1.211 not exist" is a status change like any other. The condition "path does not
2462     exist" (or more correctly "path cannot be stat'ed") is signified by the
2463     C<st_nlink> field being zero (which is otherwise always forced to be at
2464     least one) and all the other fields of the stat buffer having unspecified
2465     contents.
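The convention can be mimicked with a plain C<stat> call; the following
sketch (not libev source) shows the idea:

```c
#include <string.h>
#include <sys/stat.h>

/* Mimic the ev_stat convention described above: on a failed stat,
   zero the buffer so st_nlink == 0 signals "path cannot be stat'ed". */
static void
stat_like_ev (const char *path, struct stat *buf)
{
  if (stat (path, buf) < 0)
    memset (buf, 0, sizeof (*buf)); /* st_nlink becomes 0 */
  else if (!buf->st_nlink)
    buf->st_nlink = 1; /* libev forces st_nlink to be at least one */
}
```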
2466 root 1.48
2467 root 1.207 The path I<must not> end in a slash or contain special components such as
2468     C<.> or C<..>. The path I<should> be absolute: If it is relative and
2469     your working directory changes, then the behaviour is undefined.
2470    
2471     Since there is no portable change notification interface available, the
2472     portable implementation simply calls C<stat(2)> regularly on the path
2473     to see if it changed somehow. You can specify a recommended polling
2474     interval for this case. If you specify a polling interval of C<0> (highly
2475     recommended!) then a I<suitable, unspecified default> value will be used
2476     (which you can expect to be around five seconds, although this might
2477     change dynamically). Libev will also impose a minimum interval which is
2478 root 1.208 currently around C<0.1>, but that's usually overkill.
2479 root 1.48
2480     This watcher type is not meant for massive numbers of stat watchers,
2481     as even with OS-supported change notifications, this can be
2482     resource-intensive.
2483    
2484 root 1.183 At the time of this writing, the only OS-specific interface implemented
2485 root 1.211 is the Linux inotify interface (implementing kqueue support is left as an
2486     exercise for the reader. Note, however, that the author sees no way of
2487     implementing C<ev_stat> semantics with kqueue, except as a hint).
2488 root 1.48
2489 root 1.137 =head3 ABI Issues (Largefile Support)
2490    
2491     Libev by default (unless the user overrides this) uses the default
2492 root 1.169 compilation environment, which means that on systems with large file
2493     support disabled by default, you get the 32 bit version of the stat
2494 root 1.137 structure. When using the library from programs that change the ABI to
2495     use 64 bit file offsets the programs will fail. In that case you have to
2496     compile libev with the same flags to get binary compatibility. This is
obviously the case with any flags that change the ABI, but the problem is
most noticeable with C<ev_stat> and large file support.
2499 root 1.169
2500     The solution for this is to lobby your distribution maker to make large
2501     file interfaces available by default (as e.g. FreeBSD does) and not
2502     optional. Libev cannot simply switch on large file support because it has
2503     to exchange stat structures with application programs compiled using the
2504     default compilation environment.
2505 root 1.137
2506 root 1.183 =head3 Inotify and Kqueue
2507 root 1.108
2508 root 1.211 When C<inotify (7)> support has been compiled into libev and present at
2509     runtime, it will be used to speed up change detection where possible. The
2510     inotify descriptor will be created lazily when the first C<ev_stat>
2511     watcher is being started.
2512 root 1.108
2513 root 1.147 Inotify presence does not change the semantics of C<ev_stat> watchers
2514 root 1.108 except that changes might be detected earlier, and in some cases, to avoid
2515 root 1.147 making regular C<stat> calls. Even in the presence of inotify support
2516 root 1.183 there are many cases where libev has to resort to regular C<stat> polling,
2517 root 1.211 but as long as kernel 2.6.25 or newer is used (2.6.24 and older have too
2518     many bugs), the path exists (i.e. stat succeeds), and the path resides on
2519     a local filesystem (libev currently assumes only ext2/3, jfs, reiserfs and
2520     xfs are fully working) libev usually gets away without polling.
2521 root 1.108
2522 root 1.183 There is no support for kqueue, as apparently it cannot be used to
2523 root 1.108 implement this functionality, due to the requirement of having a file
2524 root 1.183 descriptor open on the object at all times, and detecting renames, unlinks
2525     etc. is difficult.
2526 root 1.108
2527 root 1.212 =head3 C<stat ()> is a synchronous operation
2528    
2529     Libev doesn't normally do any kind of I/O itself, and so is not blocking
2530     the process. The exception are C<ev_stat> watchers - those call C<stat
2531     ()>, which is a synchronous operation.
2532    
2533     For local paths, this usually doesn't matter: unless the system is very
2534     busy or the intervals between stat calls are large, a stat call will be fast,
2535 root 1.222 as the path data is usually in memory already (except when starting the
2536 root 1.212 watcher).
2537    
2538     For networked file systems, calling C<stat ()> can block for an indefinite
2539     time due to network issues, and even under good conditions, a stat call
2540     often takes multiple milliseconds.
2541    
2542     Therefore, it is best to avoid using C<ev_stat> watchers on networked
2543     paths, although this is fully supported by libev.
2544    
2545 root 1.107 =head3 The special problem of stat time resolution
2546    
2547 root 1.207 The C<stat ()> system call only supports full-second resolution portably,
2548     and even on systems where the resolution is higher, most file systems
2549     still only support whole seconds.
2550 root 1.107
2551 root 1.150 That means that, if the time is the only thing that changes, you can
2552     easily miss updates: on the first update, C<ev_stat> detects a change and
2553     calls your callback, which does something. When there is another update
2554 root 1.183 within the same second, C<ev_stat> will be unable to detect it unless the
2555     stat data does change in other ways (e.g. file size).
2556 root 1.150
2557     The solution to this is to delay acting on a change for slightly more
2558 root 1.155 than a second (or till slightly after the next full second boundary), using
2559 root 1.150 a roughly one-second-delay C<ev_timer> (e.g. C<ev_timer_set (w, 0., 1.02);
2560     ev_timer_again (loop, w)>).
2561    
2562     The C<.02> offset is added to work around small timing inconsistencies
2563     of some operating systems (where the second counter of the current time
2564     might be delayed. One such system is the Linux kernel, where a call to
2565     C<gettimeofday> might return a timestamp a full second later than
2566     a subsequent C<time> call - if the equivalent of C<time ()> is used to
2567     update file times then there will be a small window where the kernel uses
2568     the previous second to update file times but libev might already execute
2569     the timer callback).
2570 root 1.107
2571 root 1.82 =head3 Watcher-Specific Functions and Data Members
2572    
2573 root 1.48 =over 4
2574    
2575     =item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)
2576    
2577     =item ev_stat_set (ev_stat *, const char *path, ev_tstamp interval)
2578    
2579     Configures the watcher to wait for status changes of the given
2580     C<path>. The C<interval> is a hint on how quickly a change is expected to
2581     be detected and should normally be specified as C<0> to let libev choose
2582     a suitable value. The memory pointed to by C<path> must point to the same
2583     path for as long as the watcher is active.
2584    
2585 root 1.183 The callback will receive an C<EV_STAT> event when a change was detected,
2586     relative to the attributes at the time the watcher was started (or the
2587     last change was detected).
2588 root 1.48
2589 root 1.132 =item ev_stat_stat (loop, ev_stat *)
2590 root 1.48
2591     Updates the stat buffer immediately with new values. If you change the
2592 root 1.150 watched path in your callback, you could call this function to avoid
2593     detecting this change (while introducing a race condition if you are not
2594     the only one changing the path). Can also be useful simply to find out the
2595     new values.
2596 root 1.48
2597     =item ev_statdata attr [read-only]
2598    
2599 root 1.150 The most-recently detected attributes of the file. Although the type is
2600 root 1.48 C<ev_statdata>, this is usually the (or one of the) C<struct stat> types
2601 root 1.150 suitable for your system, but you can only rely on the POSIX-standardised
2602     members to be present. If the C<st_nlink> member is C<0>, then there was
2603     some error while C<stat>ing the file.
2604 root 1.48
2605     =item ev_statdata prev [read-only]
2606    
2607     The previous attributes of the file. The callback gets invoked whenever
2608 root 1.150 C<prev> != C<attr>, or, more precisely, one or more of these members
2609     differ: C<st_dev>, C<st_ino>, C<st_mode>, C<st_nlink>, C<st_uid>,
2610     C<st_gid>, C<st_rdev>, C<st_size>, C<st_atime>, C<st_mtime>, C<st_ctime>.
2611 root 1.48
2612     =item ev_tstamp interval [read-only]
2613    
2614     The specified interval.
2615    
2616     =item const char *path [read-only]
2617    
2618 root 1.161 The file system path that is being watched.
2619 root 1.48
2620     =back
2621    
2622 root 1.108 =head3 Examples
2623    
2624 root 1.48 Example: Watch C</etc/passwd> for attribute changes.
2625    
2626 root 1.164 static void
2627     passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)
2628     {
2629     /* /etc/passwd changed in some way */
2630     if (w->attr.st_nlink)
2631     {
2632     printf ("passwd current size %ld\n", (long)w->attr.st_size);
2633     printf ("passwd current atime %ld\n", (long)w->attr.st_atime);
2634     printf ("passwd current mtime %ld\n", (long)w->attr.st_mtime);
2635     }
2636     else
2637     /* you shalt not abuse printf for puts */
2638     puts ("wow, /etc/passwd is not there, expect problems. "
2639     "if this is windows, they already arrived\n");
2640     }
2641 root 1.48
2642 root 1.164 ...
2643     ev_stat passwd;
2644 root 1.48
2645 root 1.164 ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
2646     ev_stat_start (loop, &passwd);
2647 root 1.107
2648     Example: Like above, but additionally use a one-second delay so we do not
2649     miss updates (however, frequent updates will delay processing, too, so
2650     one might do the work both on C<ev_stat> callback invocation I<and> on
2651     C<ev_timer> callback invocation).
2652    
2653 root 1.164 static ev_stat passwd;
2654     static ev_timer timer;
2655 root 1.107
2656 root 1.164 static void
2657     timer_cb (EV_P_ ev_timer *w, int revents)
2658     {
2659     ev_timer_stop (EV_A_ w);
2660    
2661     /* now it's one second after the most recent passwd change */
2662     }
2663    
2664     static void
2665     stat_cb (EV_P_ ev_stat *w, int revents)
2666     {
2667     /* reset the one-second timer */
2668     ev_timer_again (EV_A_ &timer);
2669     }
2670    
2671     ...
2672     ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
2673     ev_stat_start (loop, &passwd);
2674     ev_timer_init (&timer, timer_cb, 0., 1.02);
2675 root 1.48
2676    
2677 root 1.42 =head2 C<ev_idle> - when you've got nothing better to do...
2678 root 1.1
2679 root 1.67 Idle watchers trigger events when no other events of the same or higher
2680 root 1.183 priority are pending (prepare, check and other idle watchers do not count
2681     as receiving "events").
2682 root 1.67
2683     That is, as long as your process is busy handling sockets or timeouts
2684     (or even signals, imagine) of the same or higher priority it will not be
2685     triggered. But when your process is idle (or only lower-priority watchers
2686     are pending), the idle watchers are being called once per event loop
2687     iteration - until stopped, that is, or your process receives more events
2688     and becomes busy again with higher priority stuff.
2689 root 1.1
2690     The most noteworthy effect is that as long as any idle watchers are
2691     active, the process will not block when waiting for new events.
2692    
2693     Apart from keeping your process non-blocking (which is a useful
2694     effect on its own sometimes), idle watchers are a good place to do
2695     "pseudo-background processing", or delay processing stuff to after the
2696     event loop has handled all outstanding events.
2697    
2698 root 1.82 =head3 Watcher-Specific Functions and Data Members
2699    
2700 root 1.1 =over 4
2701    
2702 root 1.226 =item ev_idle_init (ev_idle *, callback)
2703 root 1.1
2704     Initialises and configures the idle watcher - it has no parameters of any
2705     kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
2706     believe me.
2707    
2708     =back
2709    
2710 root 1.111 =head3 Examples
2711    
2712 root 1.54 Example: Dynamically allocate an C<ev_idle> watcher, start it, and in the
2713     callback, free it. Also, use no error checking, as usual.
2714 root 1.34
2715 root 1.164 static void
2716 root 1.198 idle_cb (struct ev_loop *loop, ev_idle *w, int revents)
2717 root 1.164 {
2718     free (w);
2719     // now do something you wanted to do when the program has
2720     // no longer anything immediate to do.
2721     }
2722    
2723 root 1.198 ev_idle *idle_watcher = malloc (sizeof (ev_idle));
2724 root 1.164 ev_idle_init (idle_watcher, idle_cb);
2725 root 1.242 ev_idle_start (loop, idle_watcher);
2726 root 1.34
2727    
2728 root 1.42 =head2 C<ev_prepare> and C<ev_check> - customise your event loop!
2729 root 1.1
2730 root 1.183 Prepare and check watchers are usually (but not always) used in pairs:
2731 root 1.20 prepare watchers get invoked before the process blocks and check watchers
2732 root 1.14 afterwards.
2733 root 1.1
2734 root 1.310 You I<must not> call C<ev_run> or similar functions that enter
2735 root 1.45 the current event loop from either C<ev_prepare> or C<ev_check>
2736     watchers. Other loops than the current one are fine, however. The
2737     rationale behind this is that you do not need to check for recursion in
2738     those watchers, i.e. the sequence will always be C<ev_prepare>, blocking,
2739     C<ev_check> so if you have one watcher of each kind they will always be
2740     called in pairs bracketing the blocking call.
2741    
2742 root 1.35 Their main purpose is to integrate other event mechanisms into libev and
2743 root 1.183 their use is somewhat advanced. They could be used, for example, to track
2744 root 1.35 variable changes, implement your own watchers, integrate net-snmp or a
2745 root 1.45 coroutine library and lots more. They are also occasionally useful if
2746     you cache some data and want to flush it before blocking (for example,
2747     in X programs you might want to do an C<XFlush ()> in an C<ev_prepare>
2748     watcher).
2749 root 1.1
2750 root 1.183 This is done by examining in each prepare call which file descriptors
2751     need to be watched by the other library, registering C<ev_io> watchers
2752     for them and starting an C<ev_timer> watcher for any timeouts (many
2753     libraries provide exactly this functionality). Then, in the check watcher,
2754     you check for any events that occurred (by checking the pending status
2755     of all watchers and stopping them) and call back into the library. The
2756     I/O and timer callbacks will never actually be called (but must be valid
2757     nevertheless, because you never know, you know?).
2758 root 1.1
2759 root 1.14 As another example, the Perl Coro module uses these hooks to integrate
2760 root 1.1 coroutines into libev programs, by yielding to other active coroutines
2761     during each prepare and only letting the process block if no coroutines
2762 root 1.20 are ready to run (it's actually more complicated: it only runs coroutines
2763     with priority higher than or equal to the event loop and one coroutine
2764     of lower priority, but only once, using idle watchers to keep the event
2765     loop from blocking if lower-priority coroutines are active, thus mapping
2766     low-priority coroutines to idle/background tasks).
2767 root 1.1
2768 root 1.77 It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
2769     priority, to ensure that they are being run before any other watchers
2770 root 1.183 after the poll (this doesn't matter for C<ev_prepare> watchers).
2771    
2772     Also, C<ev_check> watchers (and C<ev_prepare> watchers, too) should not
2773     activate ("feed") events into libev. While libev fully supports this, they
2774     might get executed before other C<ev_check> watchers did their job. As
2775     C<ev_check> watchers are often used to embed other (non-libev) event
2776     loops those other event loops might be in an unusable state until their
2777     C<ev_check> watcher ran (always remind yourself to coexist peacefully with
2778     others).
2779 root 1.77
2780 root 1.82 =head3 Watcher-Specific Functions and Data Members
2781    
2782 root 1.1 =over 4
2783    
2784     =item ev_prepare_init (ev_prepare *, callback)
2785    
2786     =item ev_check_init (ev_check *, callback)
2787    
2788     Initialises and configures the prepare or check watcher - they have no
2789     parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
2790 root 1.183 macros, but using them is utterly, utterly, utterly and completely
2791     pointless.
2792 root 1.1
2793     =back
2794    
2795 root 1.111 =head3 Examples
2796    
2797 root 1.76 There are a number of principal ways to embed other event loops or modules
2798     into libev. Here are some ideas on how to include libadns into libev
2799     (there is a Perl module named C<EV::ADNS> that does this, which you could
2800 root 1.150 use as a working example. Another Perl module named C<EV::Glib> embeds a
2801     Glib main context into libev, and finally, C<Glib::EV> embeds EV into the
2802     Glib event loop).
2803 root 1.76
2804     Method 1: Add IO watchers and a timeout watcher in a prepare handler,
2805     and in a check watcher, destroy them and call into libadns. What follows
2806     is pseudo-code only of course. This requires you to either use a low
2807     priority for the check watcher or use C<ev_clear_pending> explicitly, as
2808     the callbacks for the IO/timeout watchers might not have been called yet.
2809 root 1.45
2810 root 1.164 static ev_io iow [nfd];
2811     static ev_timer tw;
2812 root 1.45
2813 root 1.164 static void
2814 root 1.198 io_cb (struct ev_loop *loop, ev_io *w, int revents)
2815 root 1.164 {
2816     }
2817 root 1.45
2818 root 1.164 // create io watchers for each fd and a timer before blocking
2819     static void
2820 root 1.198 adns_prepare_cb (struct ev_loop *loop, ev_prepare *w, int revents)
2821 root 1.164 {
2822     int timeout = 3600000;
2823     struct pollfd fds [nfd];
2824     // actual code will need to loop here and realloc etc.
2825     adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ()));
2826    
2827     /* the callback is illegal, but won't be called as we stop during check */
2828 root 1.243 ev_timer_init (&tw, 0, timeout * 1e-3, 0.);
2829 root 1.164 ev_timer_start (loop, &tw);
2830    
2831     // create one ev_io per pollfd
2832     for (int i = 0; i < nfd; ++i)
2833     {
2834     ev_io_init (iow + i, io_cb, fds [i].fd,
2835     ((fds [i].events & POLLIN ? EV_READ : 0)
2836     | (fds [i].events & POLLOUT ? EV_WRITE : 0)));
2837    
2838     fds [i].revents = 0;
2839     ev_io_start (loop, iow + i);
2840     }
2841     }
2842    
2843     // stop all watchers after blocking
2844     static void
2845 root 1.198 adns_check_cb (struct ev_loop *loop, ev_check *w, int revents)
2846 root 1.164 {
2847     ev_timer_stop (loop, &tw);
2848    
2849     for (int i = 0; i < nfd; ++i)
2850     {
2851     // set the relevant poll flags
2852     // could also call adns_processreadable etc. here
2853     struct pollfd *fd = fds + i;
2854     int revents = ev_clear_pending (iow + i);
2855     if (revents & EV_READ ) fd->revents |= fd->events & POLLIN;
2856     if (revents & EV_WRITE) fd->revents |= fd->events & POLLOUT;
2857    
2858     // now stop the watcher
2859     ev_io_stop (loop, iow + i);
2860     }
2861    
2862     adns_afterpoll (ads, fds, nfd, timeval_from (ev_now (loop)));
2863     }
2864 root 1.34
2865 root 1.76 Method 2: This would be just like method 1, but you run C<adns_afterpoll>
2866     in the prepare watcher and would dispose of the check watcher.
2867    
2868     Method 3: If the module to be embedded supports explicit event
2869 root 1.161 notification (libadns does), you can also make use of the actual watcher
2870 root 1.76 callbacks, and only destroy/create the watchers in the prepare watcher.
2871    
2872 root 1.164 static void
2873     timer_cb (EV_P_ ev_timer *w, int revents)
2874     {
2875     adns_state ads = (adns_state)w->data;
2876     update_now (EV_A);
2877    
2878     adns_processtimeouts (ads, &tv_now);
2879     }
2880    
2881     static void
2882     io_cb (EV_P_ ev_io *w, int revents)
2883     {
2884     adns_state ads = (adns_state)w->data;
2885     update_now (EV_A);
2886    
2887     if (revents & EV_READ ) adns_processreadable (ads, w->fd, &tv_now);
2888     if (revents & EV_WRITE) adns_processwriteable (ads, w->fd, &tv_now);
2889     }
2890 root 1.76
2891 root 1.164 // do not ever call adns_afterpoll
2892 root 1.76
2893     Method 4: Do not use a prepare or check watcher because the module you
2894 root 1.183 want to embed is not flexible enough to support it. Instead, you can
2895     override their poll function. The drawback with this solution is that the
2896     main loop is now no longer controllable by EV. The C<Glib::EV> module uses
2897     this approach, effectively embedding EV as a client into the horrible
2898     libglib event loop.
2899 root 1.76
2900 root 1.164 static gint
2901     event_poll_func (GPollFD *fds, guint nfds, gint timeout)
2902     {
2903     int got_events = 0;
2904    
2905     for (n = 0; n < nfds; ++n)
2906     // create/start io watcher that sets the relevant bits in fds[n] and increment got_events
2907    
2908     if (timeout >= 0)
2909     // create/start timer
2910    
2911     // poll
2912 root 1.310 ev_run (EV_A_ 0);
2913 root 1.76
2914 root 1.164 // stop timer again
2915     if (timeout >= 0)
2916     ev_timer_stop (EV_A_ &to);
2917    
2918     // stop io watchers again - their callbacks should have set the bits in fds [n]
2919     for (n = 0; n < nfds; ++n)
2920     ev_io_stop (EV_A_ iow [n]);
2921    
2922     return got_events;
2923     }
2924 root 1.76
2925 root 1.34
2926 root 1.42 =head2 C<ev_embed> - when one backend isn't enough...
2927 root 1.35
2928     This is a rather advanced watcher type that lets you embed one event loop
2929 root 1.36 into another (currently only C<ev_io> events are supported in the embedded
2930     loop, other types of watchers might be handled in a delayed or incorrect
2931 root 1.100 fashion and must not be used).
2932 root 1.35
2933     There are primarily two reasons you would want that: work around bugs and
2934     prioritise I/O.
2935    
2936     As an example for a bug workaround, the kqueue backend might only support
2937     sockets on some platform, so it is unusable as generic backend, but you
2938     still want to make use of it because you have many sockets and it scales
2939 root 1.183 so nicely. In this case, you would create a kqueue-based loop and embed
2940     it into your default loop (which might use e.g. poll). Overall operation
2941     will be a bit slower because first libev has to call C<poll> and then
2942     C<kevent>, but at least you can use both mechanisms for what they are
2943     best: C<kqueue> for scalable sockets and C<poll> if you want it to work :)
2944    
2945     As for prioritising I/O: under rare circumstances you have the case where
2946     some fds have to be watched and handled very quickly (with low latency),
2947     and even priorities and idle watchers might have too much overhead. In
2948     this case you would put all the high priority stuff in one loop and all
2949     the rest in a second one, and embed the second one in the first.
2950 root 1.35
2951 root 1.223 As long as the watcher is active, the callback will be invoked every
2952     time there might be events pending in the embedded loop. The callback
2953     must then call C<ev_embed_sweep (mainloop, watcher)> to make a single
2954     sweep and invoke their callbacks (the callback doesn't need to invoke the
2955     C<ev_embed_sweep> function directly, it could also start an idle watcher
2956     to give the embedded loop strictly lower priority for example).
2957    
2958     You can also set the callback to C<0>, in which case the embed watcher
2959     will automatically execute the embedded loop sweep whenever necessary.
2960    
2961     Fork detection will be handled transparently while the C<ev_embed> watcher
2962     is active, i.e., the embedded loop will automatically be forked when the
2963     embedding loop forks. In other cases, the user is responsible for calling
2964     C<ev_loop_fork> on the embedded loop.
2965 root 1.35
2966 root 1.184 Unfortunately, not all backends are embeddable: only the ones returned by
2967 root 1.35 C<ev_embeddable_backends> are, which, unfortunately, does not include any
2968     portable one.
2969    
2970     So when you want to use this feature you will always have to be prepared
2971     that you cannot get an embeddable loop. The recommended way to get around
2972     this is to have a separate variable for your embeddable loop, try to
2973 root 1.111 create it, and if that fails, use the normal loop for everything.
2974 root 1.35
2975 root 1.187 =head3 C<ev_embed> and fork
2976    
2977     While the C<ev_embed> watcher is running, forks in the embedding loop will
2978     automatically be applied to the embedded loop as well, so no special
2979     fork handling is required in that case. When the watcher is not running,
2980     however, it is still the task of the libev user to call C<ev_loop_fork ()>
2981     as applicable.
2982    
2983 root 1.82 =head3 Watcher-Specific Functions and Data Members
2984    
2985 root 1.35 =over 4
2986    
2987 root 1.36 =item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)
2988    
2989     =item ev_embed_set (ev_embed *, callback, struct ev_loop *embedded_loop)
2990    
2991     Configures the watcher to embed the given loop, which must be
2992     embeddable. If the callback is C<0>, then C<ev_embed_sweep> will be
2993     invoked automatically, otherwise it is the responsibility of the callback
2994     to invoke it (it will continue to be called until the sweep has been done,
2995 root 1.161 if you do not want that, you need to temporarily stop the embed watcher).
2996 root 1.35
2997 root 1.36 =item ev_embed_sweep (loop, ev_embed *)
2998 root 1.35
2999 root 1.36 Make a single, non-blocking sweep over the embedded loop. This works
3000 root 1.310 similarly to C<ev_run (embedded_loop, EVRUN_NOWAIT)>, but in the most
3001 root 1.161 appropriate way for embedded loops.
3002 root 1.35
3003 root 1.91 =item struct ev_loop *other [read-only]
3004 root 1.48
3005     The embedded event loop.
3006    
3007 root 1.35 =back
3008    
3009 root 1.111 =head3 Examples
3010    
3011     Example: Try to get an embeddable event loop and embed it into the default
3012     event loop. If that is not possible, use the default loop. The default
3013 root 1.161 loop is stored in C<loop_hi>, while the embeddable loop is stored in
3014     C<loop_lo> (which is C<loop_hi> in the case no embeddable loop can be
3015 root 1.111 used).
3016    
3017 root 1.164 struct ev_loop *loop_hi = ev_default_init (0);
3018     struct ev_loop *loop_lo = 0;
3019 root 1.198 ev_embed embed;
3020 root 1.164
3021     // see if there is a chance of getting one that works
3022     // (remember that a flags value of 0 means autodetection)
3023     loop_lo = ev_embeddable_backends () & ev_recommended_backends ()
3024     ? ev_loop_new (ev_embeddable_backends () & ev_recommended_backends ())
3025     : 0;
3026    
3027     // if we got one, then embed it, otherwise default to loop_hi
3028     if (loop_lo)
3029     {
3030     ev_embed_init (&embed, 0, loop_lo);
3031     ev_embed_start (loop_hi, &embed);
3032     }
3033     else
3034     loop_lo = loop_hi;
3035 root 1.111
3036     Example: Check if kqueue is available but not recommended and create
3037     a kqueue backend for use with sockets (which usually work with any
3038     kqueue implementation). Store the kqueue/socket-only event loop in
3039     C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).
3040    
3041 root 1.164 struct ev_loop *loop = ev_default_init (0);
3042     struct ev_loop *loop_socket = 0;
3043 root 1.198 ev_embed embed;
3044 root 1.164
3045     if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
3046     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
3047     {
3048     ev_embed_init (&embed, 0, loop_socket);
3049     ev_embed_start (loop, &embed);
3050     }
3051 root 1.111
3052 root 1.164 if (!loop_socket)
3053     loop_socket = loop;
3054 root 1.111
3055 root 1.164 // now use loop_socket for all sockets, and loop for everything else
3056 root 1.111
3057 root 1.35
3058 root 1.50 =head2 C<ev_fork> - the audacity to resume the event loop after a fork
3059    
3060     Fork watchers are called when a C<fork ()> was detected (usually because
3061     whoever is a good citizen cared to tell libev about it by calling
3062     C<ev_default_fork> or C<ev_loop_fork>). The invocation is done before the
3063     event loop blocks next and before C<ev_check> watchers are being called,
3064     and only in the child after the fork. If whoever good citizen calling
3065     C<ev_default_fork> cheats and calls it in the wrong process, the fork
3066     handlers will be invoked, too, of course.
3067    
3068 root 1.238 =head3 The special problem of life after fork - how is it possible?
3069    
3070 root 1.302 Most uses of C<fork()> consist of forking, then some simple calls to set
3071 root 1.238 up/change the process environment, followed by a call to C<exec()>. This
3072     sequence should be handled by libev without any problems.
3073    
3074     This changes when the application actually wants to do event handling
3075     in the child, or both parent and child, in effect "continuing" after the
3076     fork.
3077    
3078     The default mode of operation (for libev, with application help to detect
3079     forks) is to duplicate all the state in the child, as would be expected
3080     when I<either> the parent I<or> the child process continues.
3081    
3082     When both processes want to continue using libev, then this is usually the
3083     wrong result. In that case, usually one process (typically the parent) is
3084     supposed to continue with all watchers in place as before, while the other
3085     process typically wants to start fresh, i.e. without any active watchers.
3086    
3087     The cleanest and most efficient way to achieve that with libev is to
3088     simply create a new event loop, which of course will be "empty", and
3089     use that for new watchers. This has the advantage of not touching more
3090     memory than necessary, and thus avoiding the copy-on-write, and the
3091     disadvantage of having to use multiple event loops (which do not support
3092     signal watchers).
3093    
3094     When this is not possible, or you want to use the default loop for
3095     other reasons, then in the process that wants to start "fresh", call
3096 root 1.322 C<ev_loop_destroy (EV_DEFAULT)> followed by C<ev_default_loop (...)>.
3097     Destroying the default loop will "orphan" (not stop) all registered
3098     watchers, so you have to be careful not to execute code that modifies
3099     those watchers. Note also that in that case, you have to re-register any
3100     signal watchers.
3101 root 1.238
3102 root 1.83 =head3 Watcher-Specific Functions and Data Members
3103    
3104 root 1.50 =over 4
3105    
3106 root 1.325 =item ev_fork_init (ev_fork *, callback)
3107 root 1.50
3108     Initialises and configures the fork watcher - it has no parameters of any
3109     kind. There is a C<ev_fork_set> macro, but using it is utterly pointless,
3110 root 1.329 really.
3111 root 1.50
3112     =back
3113    
3114    
3115 root 1.324 =head2 C<ev_cleanup> - even the best things end
3116    
3117 root 1.328 Cleanup watchers are called just before the event loop is being destroyed
3118     by a call to C<ev_loop_destroy>.
3119 root 1.324
3120     While there is no guarantee that the event loop gets destroyed, cleanup
3121 root 1.326 watchers provide a convenient method to install cleanup hooks for your
3122 root 1.324 program, worker threads and so on - you just have to make sure to destroy the
3123     loop when you want them to be invoked.
3124    
3125 root 1.327 Cleanup watchers are invoked in the same way as any other watcher. Unlike
3126     all other watchers, they do not keep a reference to the event loop (which
3127     makes a lot of sense if you think about it). Like all other watchers, you
3128     can call libev functions in the callback, except C<ev_cleanup_start>.
3129    
3130 root 1.324 =head3 Watcher-Specific Functions and Data Members
3131    
3132     =over 4
3133    
3134 root 1.325 =item ev_cleanup_init (ev_cleanup *, callback)
3135 root 1.324
3136     Initialises and configures the cleanup watcher - it has no parameters of
3137     any kind. There is a C<ev_cleanup_set> macro, but using it is utterly
3138 root 1.329 pointless, I assure you.
3139 root 1.324
3140     =back
3141    
3142     Example: Register an atexit handler to destroy the default loop, so any
3143     cleanup functions are called.
3144    
3145     static void
3146     program_exits (void)
3147     {
3148     ev_loop_destroy (EV_DEFAULT_UC);
3149     }
3150    
3151     ...
3152     atexit (program_exits);
3153    
3154    
3155 root 1.302 =head2 C<ev_async> - how to wake up an event loop
3156 root 1.122
3157 root 1.310 In general, you cannot use C<ev_run> from multiple threads or other
3158 root 1.122 asynchronous sources such as signal handlers (as opposed to multiple event
3159     loops - those are of course safe to use in different threads).
3160    
3161 root 1.302 Sometimes, however, you need to wake up an event loop you do not control,
3162     for example because it belongs to another thread. This is what C<ev_async>
3163     watchers do: as long as the C<ev_async> watcher is active, you can signal
3164     it by calling C<ev_async_send>, which is thread- and signal safe.
3165 root 1.122
3166     This functionality is very similar to C<ev_signal> watchers, as signals,
3167     too, are asynchronous in nature, and signals, too, will be compressed
3168     (i.e. the number of callback invocations may be less than the number of
3169     C<ev_async_send> calls).
3170    
3171     Unlike C<ev_signal> watchers, C<ev_async> works with any event loop, not
3172     just the default loop.
3173    
3174 root 1.124 =head3 Queueing
3175    
3176     C<ev_async> does not support queueing of data in any way. The reason
3177     is that the author does not know of a simple (or any) algorithm for a
3178     multiple-writer-single-reader queue that works in all cases and doesn't
3179 root 1.274 need elaborate support such as pthreads or unportable memory access
3180     semantics.
3181 root 1.124
3182     That means that if you want to queue data, you have to provide your own
3183 root 1.184 queue. But at least I can tell you how to implement locking around your
3184 root 1.130 queue:
3185 root 1.124
3186     =over 4
3187    
3188     =item queueing from a signal handler context
3189    
3190     To implement race-free queueing, you simply add to the queue in the signal
3191 root 1.191 handler but you block the signal handler in the watcher callback. Here is
3192     an example that does that for some fictitious SIGUSR1 handler:
3193 root 1.124
3194     static ev_async mysig;
3195    
3196     static void
3197     sigusr1_handler (void)
3198     {
3199     sometype data;
3200    
3201     // no locking etc.
3202     queue_put (data);
3203 root 1.133 ev_async_send (EV_DEFAULT_ &mysig);
3204 root 1.124 }
3205    
3206     static void
3207     mysig_cb (EV_P_ ev_async *w, int revents)
3208     {
3209     sometype data;
3210     sigset_t block, prev;
3211    
3212     sigemptyset (&block);
3213     sigaddset (&block, SIGUSR1);
3214     sigprocmask (SIG_BLOCK, &block, &prev);
3215    
3216     while (queue_get (&data))
3217     process (data);
3218    
3219     if (sigismember (&prev, SIGUSR1))
3220     sigprocmask (SIG_UNBLOCK, &block, 0);
3221     }