…

=head1 SYNOPSIS

    #include <ev.h>

=head2 EXAMPLE PROGRAM

    #include <ev.h>

    ev_io stdin_watcher;
    ev_timer timeout_watcher;

…

The newest version of this document is also available as an HTML-formatted
web page, which you might find easier to navigate when reading it for the
first time: L<http://cvs.schmorp.de/libev/ev.html>.

Libev is an event loop: you register interest in certain events (such as a
file descriptor being readable or a timeout occurring), and it will manage
these event sources and provide your program with events.

To do this, it must take more or less complete control over your process
(or thread) by executing the I<event loop> handler, and will then
communicate events via a callback mechanism.

…

You register interest in certain events by registering so-called I<event
watchers>, which are relatively small C structures you initialise with the
details of the event, and then hand over to libev by I<starting> the
watcher.

=head2 FEATURES

Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
for file descriptor events (C<ev_io>), the Linux C<inotify> interface
(for C<ev_stat>), relative timers (C<ev_timer>), absolute timers
…

It also is quite fast (see this
L<benchmark|http://libev.schmorp.de/bench.html> comparing it to libevent,
for example).

=head2 CONVENTIONS

Libev is very configurable. In this manual the default configuration will
be described, which supports multiple event loops. For more info about
the various configuration options please have a look at the B<EMBED>
section in this manual. If libev was configured without support for
multiple event loops, then all functions taking an initial argument of
name C<loop> (which is always of type C<struct ev_loop *>) will not have
this argument.

=head2 TIME REPRESENTATION

Libev represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970; the details are complicated, don't ask). This type
is called C<ev_tstamp>, which is what you should use too. It usually
aliases
…

Returns the current time as libev would use it. Please note that the
C<ev_now> function is usually faster and also often returns the timestamp
you actually want to know.

|
|
=item ev_sleep (ev_tstamp interval)

Sleep for the given interval: The current thread will be blocked until
either it is interrupted or the given time interval has passed. Basically
this is a subsecond-resolution C<sleep ()>.

=item int ev_version_major ()

=item int ev_version_minor ()

You can find out the major and minor ABI version numbers of the library
…

flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

|
|
The default loop is the only loop that can handle C<ev_signal> and
C<ev_child> watchers, and to do this, it always registers a handler
for C<SIGCHLD>. If this is a problem for your app you can either
create a dynamic loop with C<ev_loop_new> that doesn't do that, or you
can simply overwrite the C<SIGCHLD> signal handler I<after> calling
C<ev_default_init>.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

The following flags are supported:

…

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high degree of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on Windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds) (where total_fds is the total number of fds or the highest
fd), epoll scales either O(1) or O(active_fds). The epoll design has a
number of shortcomings, such as silently dropping events in some
hard-to-detect cases, requiring a syscall per fd change, no fork support
and bad support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, C<dup ()>'ed file descriptors might not work
very well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

|
|
Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible, i.e.
keep at least one watcher active per fd at all times.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
with anything but sockets and pipes, except on Darwin, where of course
it's completely useless). For this reason it's not being "autodetected"
unless you explicitly specify it in the flags (i.e. using
C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)
system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra syscall as with C<EVBACKEND_EPOLL>, it still adds up to two
event changes per incident; support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on
Solaris, it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

On the positive side, ignoring the spurious readiness notifications, this
backend actually performed to specification in all tests and is fully
embeddable, which is a rare feat among the OS-specific backends.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back

If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse of the order listed here). If none
are specified, all backends in C<ev_recommended_backends ()> will be
tried.

The most typical usage is like this:

    if (!ev_default_loop (0))
      fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

…

Like C<ev_default_destroy>, but destroys an event loop created by an
earlier call to C<ev_loop_new>.

=item ev_default_fork ()

|
|
This function sets a flag that causes subsequent C<ev_loop> iterations
to reinitialise the kernel state for backends that have one. Despite the
name, you can call it anytime, but it makes most sense after forking, in
the child process (or both child and parent, but that again makes little
sense). You I<must> call it in the child before using any of the libev
functions, and it will only take effect at the next C<ev_loop> iteration.

On the other hand, you only need to call this function in the child
process if you want to use the event library in the child. If you just
fork+exec, you don't have to call it at all.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function fits quite
nicely into a call to C<pthread_atfork>:

    pthread_atfork (0, 0, ev_default_fork);

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
…

Returns the current "event loop time", which is the time the event loop
received events and started processing them. This timestamp does not
change as long as callbacks are being processed, and this is also the base
time used for relative timers. You can treat it as the timestamp of the
event occurring (or more correctly, libev finding out about it).

=item ev_loop (loop, int flags)

Finally, this is it, the event handler. This function usually is called
after you initialised all your watchers and you want to start handling
…

usually a better approach for this kind of thing.

Here are the gory details of what C<ev_loop> does:

   - Before the first iteration, call any pending watchers.
   * If EVFLAG_FORKCHECK was used, check for a fork.
   - If a fork was detected, queue and call all fork watchers.
   - Queue and call all prepare watchers.
   - If we have been forked, recreate the kernel state.
   - Update the kernel state with all outstanding changes.
   - Update the "event loop time".
   - Calculate for how long to sleep or block, if at all
     (active idle watchers, EVLOOP_NONBLOCK or not having
     any active watchers at all will result in not sleeping).
   - Sleep if the I/O and timer collect interval say so.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" and do time jump handling.
   - Queue all outstanding timers.
   - Queue all outstanding periodics.
   - If no events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
   - If ev_unloop has been called, or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
     were used, or there are no active watchers, return, otherwise
     continue with step *.

Example: Queue some jobs and then loop until no events are outstanding
anymore.

    ... queue jobs here, make sure they register event watchers as long
    ... as they still have work to do (even an idle watcher will do..)
    ev_loop (my_loop, 0);
…

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.

This "unloop state" will be cleared when entering C<ev_loop> again.

=item ev_ref (loop)

=item ev_unref (loop)

…

returning, ev_unref() after starting, and ev_ref() before stopping it. For
example, libev itself uses this for its internal signal pipe: It is not
visible to the libev user and should not keep C<ev_loop> from exiting if
no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>
(but only if the watcher wasn't active before, or was active before,
respectively).

Example: Create a signal watcher, but keep it from keeping C<ev_loop>
running when nothing else is active.

    struct ev_signal exitsig;
…

Example: For some weird reason, unregister the above signal handler again.

    ev_ref (loop);
    ev_signal_stop (loop, &exitsig);

|
|
=item ev_set_io_collect_interval (loop, ev_tstamp interval)

=item ev_set_timeout_collect_interval (loop, ev_tstamp interval)

These advanced functions influence the time that libev will spend waiting
for events. Both are by default C<0>, meaning that libev will try to
invoke timer/periodic callbacks and I/O callbacks with minimum latency.

Setting these to a higher value (the C<interval> I<must> be >= C<0>)
allows libev to delay invocation of I/O and timer/periodic callbacks to
increase efficiency of loop iterations.

The background is that sometimes your program runs just fast enough to
handle one (or very few) event(s) per loop iteration. While this makes
the program responsive, it also wastes a lot of CPU time to poll for new
events, especially with backends like C<select ()> which have a high
overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increased latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-zero value will
introduce an additional C<ev_sleep ()> call into most loop iterations.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency (the watcher callback will be called later). C<ev_io> watchers
will not be affected. Setting this to a non-zero value will not introduce
any overhead in libev.

Many (busy) programs can usually benefit from setting the io collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), and likewise for timeouts.
It usually doesn't make much sense to set it to a lower value than
C<0.01>, as this approaches the timing granularity of most systems.
=back


=head1 ANATOMY OF A WATCHER

…

=item C<EV_FORK>

The event loop has been resumed in the child process after fork (see
C<ev_fork>).

=item C<EV_ASYNC>

The given async watcher has been asynchronously notified (see C<ev_async>).

=item C<EV_ERROR>

An unspecified error has occurred, the watcher has been stopped. This might
happen because the watcher could not be properly started because libev
…

In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

You have to be careful with dup'ed file descriptors, though. Some backends
(the Linux epoll backend is a notable example) cannot handle dup'ed file
descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket/etc. description (that is, they share
the same underlying "file open").

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).

911 | |
1012 | |
912 | Another thing you have to watch out for is that it is quite easy to |
1013 | Another thing you have to watch out for is that it is quite easy to |
…
such as poll (fortunately in our Xlib example, Xlib already does this on
its own, so it's quite safe to use).

=head3 The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll) need to be told about closing a file
descriptor (either by calling C<close> explicitly or by any other means,
such as C<dup>). The reason is that you register interest in some file
descriptor, but when it goes away, the operating system will silently drop
this interest. If another file descriptor with the same number then is
registered with libev, there is no efficient way to see that this is, in
…

This is how one would do it normally anyway; the important point is that
the libev application should not optimise around libev but should leave
optimisations to libev.
=head3 The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors,
but only events for the underlying file descriptions. That means when you
have C<dup ()>'ed file descriptors or weirder constellations, and register
events for them, only one file descriptor might actually receive events.

There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or to resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
useless behaviour. Libev fully supports fork, but needs to be told about
it in the child.

To support fork in your programs, you either have to call
C<ev_default_fork ()> or C<ev_loop_fork ()> after a fork in the child,
enable C<EVFLAG_FORKCHECK>, or resort to C<EVBACKEND_SELECT> or
C<EVBACKEND_POLL>.


=head3 Watcher-Specific Functions

=over 4

…
=item int events [read-only]

The events being watched.

=back

=head3 Examples

Example: Call C<stdin_readable_cb> when STDIN_FILENO has become, well,
readable, but only once. Since it is likely line-buffered, you could
attempt to read a whole line in the callback.

…
or C<ev_timer_again> is called and determines the next timeout (if any),
which is also when any modifications are taken into account.

=back

=head3 Examples

Example: Create a timer that fires after 60 seconds.

   static void
   one_minute_cb (struct ev_loop *loop, struct ev_timer *w, int revents)
   {
…
When active, contains the absolute time that the watcher is supposed to
trigger next.

=back

=head3 Examples

Example: Call a callback every hour, or, more precisely, whenever the
system clock is divisible by 3600. The callback invocation times have
potentially a lot of jitter, but good long-term stability.

   static void
…

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_child_init (ev_child *, callback, int pid, int trace)

=item ev_child_set (ev_child *, int pid, int trace)

Configures the watcher to wait for status changes of process C<pid> (or
I<any> process if C<pid> is specified as C<0>). The callback can look
at the C<rstatus> member of the C<ev_child> watcher structure to see
the status word (use the macros from C<sys/wait.h> and see your system's
C<waitpid> documentation). The C<rpid> member contains the pid of the
process causing the status change. C<trace> must be either C<0> (only
activate the watcher when the process terminates) or C<1> (additionally
activate the watcher when the process is stopped or continued).

=item int pid [read-only]

The process id this watcher watches out for, or C<0>, meaning any process id.

…

The process exit/trace status caused by C<rpid> (see your system's
C<waitpid> and C<sys/wait.h> documentation for details).

=back

=head3 Examples

Example: Try to exit cleanly on SIGINT and SIGTERM.

   static void
   sigint_cb (struct ev_loop *loop, struct ev_signal *w, int revents)
…
semantics of C<ev_stat> watchers, which means that libev sometimes needs
to fall back to regular polling again even with inotify, but changes are
usually detected immediately, and if the file exists there will be no
polling.

=head3 Inotify

When C<inotify (7)> support has been compiled into libev (generally only
available on Linux) and present at runtime, it will be used to speed up
change detection where possible. The inotify descriptor will be created lazily
when the first C<ev_stat> watcher is being started.

Inotify presence does not change the semantics of C<ev_stat> watchers
except that changes might be detected earlier, and in some cases libev can
avoid making regular C<stat> calls. Even in the presence of inotify support
there are many cases where libev has to resort to regular C<stat> polling.

(There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times).

=head3 The special problem of stat time resolution

The C<stat ()> syscall only supports full-second resolution portably, and
even on systems where the resolution is higher, many filesystems still
only support whole seconds.

That means that, if the time is the only thing that changes, you might
miss updates: on the first update, C<ev_stat> detects a change and calls
your callback, which does something. When there is another update within
the same second, C<ev_stat> will be unable to detect it.

The solution to this is to delay acting on a change for a second (or till
the next second boundary), using a roughly one-second delay C<ev_timer>
(C<ev_timer_set (w, 0., 1.01); ev_timer_again (loop, w)>). The C<.01>
is added to work around small timing inconsistencies of some operating
systems.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)
…
=item const char *path [read-only]

The filesystem path that is being watched.

=back

=head3 Examples

Example: Watch C</etc/passwd> for attribute changes.

   static void
   passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)
…
   }

   ...
   ev_stat passwd;

   ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
   ev_stat_start (loop, &passwd);

Example: Like above, but additionally use a one-second delay so we do not
miss updates (however, frequent updates will delay processing, too, so
one might do the work both on C<ev_stat> callback invocation I<and> on
C<ev_timer> callback invocation).

   static ev_stat passwd;
   static ev_timer timer;

   static void
   timer_cb (EV_P_ ev_timer *w, int revents)
   {
     ev_timer_stop (EV_A_ w);

     /* now it's one second after the most recent passwd change */
   }

   static void
   stat_cb (EV_P_ ev_stat *w, int revents)
   {
     /* reset the one-second timer */
     ev_timer_again (EV_A_ &timer);
   }

   ...
   ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
   ev_stat_start (loop, &passwd);
   ev_timer_init (&timer, timer_cb, 0., 1.01);


=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
…
kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
believe me.

=back

=head3 Examples

Example: Dynamically allocate an C<ev_idle> watcher, start it, and in the
callback, free it. Also, use no error checking, as usual.

   static void
   idle_cb (struct ev_loop *loop, struct ev_idle *w, int revents)
   {
     free (w);
     // now do something you wanted to do when the program
     // no longer has anything immediate to do.
   }

   struct ev_idle *idle_watcher = malloc (sizeof (struct ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);
…

It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
priority, to ensure that they are being run before any other watchers
after the poll. Also, C<ev_check> watchers (and C<ev_prepare> watchers,
too) should not activate ("feed") events into libev. While libev fully
supports this, they will be called before other C<ev_check> watchers
did their job. As C<ev_check> watchers are often used to embed other
(non-libev) event loops, those other event loops might be in an unusable
state until their C<ev_check> watcher ran (always remind yourself to
coexist peacefully with others).

=head3 Watcher-Specific Functions and Data Members

=over 4

…
Initialises and configures the prepare or check watcher - they have no
parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
macros, but using them is utterly, utterly and completely pointless.

=back

=head3 Examples

There are a number of principal ways to embed other event loops or modules
into libev. Here are some ideas on how to include libadns into libev
(there is a Perl module named C<EV::ADNS> that does this, which you could
use for an actually working example. Another Perl module named C<EV::Glib>
…
portable one.

So when you want to use this feature you will always have to be prepared
that you cannot get an embeddable loop. The recommended way to get around
this is to have a separate variable for your embeddable loop, try to
create it, and if that fails, use the normal loop for everything.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)

=item ev_embed_set (ev_embed *, callback, struct ev_loop *embedded_loop)

Configures the watcher to embed the given loop, which must be
embeddable. If the callback is C<0>, then C<ev_embed_sweep> will be
invoked automatically, otherwise it is the responsibility of the callback
to invoke it (it will continue to be called until the sweep has been done,
if you do not want that, you need to temporarily stop the embed watcher).

=item ev_embed_sweep (loop, ev_embed *)

Make a single, non-blocking sweep over the embedded loop. This works
similarly to C<ev_loop (embedded_loop, EVLOOP_NONBLOCK)>, but in the most
appropriate way for embedded loops.

=item struct ev_loop *other [read-only]

The embedded event loop.

=back

=head3 Examples

Example: Try to get an embeddable event loop and embed it into the default
event loop. If that is not possible, use the default loop. The default
loop is stored in C<loop_hi>, while the embeddable loop is stored in
C<loop_lo> (which is C<loop_hi> in case no embeddable loop can be
used).

   struct ev_loop *loop_hi = ev_default_init (0);
   struct ev_loop *loop_lo = 0;
   struct ev_embed embed;

…
       ev_embed_start (loop_hi, &embed);
     }
   else
     loop_lo = loop_hi;

Example: Check if kqueue is available but not recommended and create
a kqueue backend for use with sockets (which usually work with any
kqueue implementation). Store the kqueue/socket-only event loop in
C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).

   struct ev_loop *loop = ev_default_init (0);
   struct ev_loop *loop_socket = 0;
   struct ev_embed embed;

   if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
       {
         ev_embed_init (&embed, 0, loop_socket);
         ev_embed_start (loop, &embed);
       }

   if (!loop_socket)
     loop_socket = loop;

   // now use loop_socket for all sockets, and loop for everything else


=head2 C<ev_fork> - the audacity to resume the event loop after a fork

Fork watchers are called when a C<fork ()> was detected (usually because
…
believe me.

=back


=head2 C<ev_async> - how to wake up another event loop

In general, you cannot use an C<ev_loop> from multiple threads or other
asynchronous sources such as signal handlers (as opposed to multiple event
loops - those are of course safe to use in different threads).

Sometimes, however, you need to wake up another event loop you do not
control, for example because it belongs to another thread. This is what
C<ev_async> watchers do: as long as the C<ev_async> watcher is active, you
can signal it by calling C<ev_async_send>, which is thread- and signal
safe.

This functionality is very similar to C<ev_signal> watchers, as signals,
too, are asynchronous in nature, and signals, too, will be compressed
(i.e. the number of callback invocations may be less than the number of
C<ev_async_send> calls).

Unlike C<ev_signal> watchers, C<ev_async> works with any event loop, not
just the default loop.

=head3 Queueing

C<ev_async> does not support queueing of data in any way. The reason
is that the author does not know of a simple (or any) algorithm for a
multiple-writer-single-reader queue that works in all cases and doesn't
need elaborate support such as pthreads.

That means that if you want to queue data, you have to provide your own
queue. But at least I can tell you how you would implement locking around
your queue:

=over 4

=item queueing from a signal handler context

To implement race-free queueing, you simply add to the queue in the signal
handler but you block the signal handler in the watcher callback. Here is
an example that does that for some fictitious SIGUSR1 handler:

   static ev_async mysig;

   static void
   sigusr1_handler (void)
   {
     sometype data;

     // no locking etc.
     queue_put (data);
     ev_async_send (EV_DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     sometype data;
     sigset_t block, prev;

     sigemptyset (&block);
     sigaddset (&block, SIGUSR1);
     sigprocmask (SIG_BLOCK, &block, &prev);

     while (queue_get (&data))
       process (data);

     if (sigismember (&prev, SIGUSR1))
       sigprocmask (SIG_UNBLOCK, &block, 0);
   }

(Note: pthreads in theory requires you to use C<pthread_sigmask>
instead of C<sigprocmask> when you use threads, but libev doesn't do it
either...)

=item queueing from a thread context

The strategy for threads is different, as you cannot (easily) block
threads but you can easily preempt them, so to queue safely you need to
employ a traditional mutex lock, such as in this pthread example:

   static ev_async mysig;
   static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

   static void
   otherthread (void)
   {
     // only need to lock the actual queueing operation
     pthread_mutex_lock (&mymutex);
     queue_put (data);
     pthread_mutex_unlock (&mymutex);

     ev_async_send (EV_DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     pthread_mutex_lock (&mymutex);

     while (queue_get (&data))
       process (data);

     pthread_mutex_unlock (&mymutex);
   }

=back


=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_async_init (ev_async *, callback)

Initialises and configures the async watcher - it has no parameters of any
kind. There is a C<ev_async_set> macro, but using it is utterly pointless,
believe me.

=item ev_async_send (loop, ev_async *)

Sends/signals/activates the given C<ev_async> watcher, that is, feeds
an C<EV_ASYNC> event on the watcher into the event loop. Unlike
C<ev_feed_event>, this call is safe to do in other threads, signal or
similar contexts (see the discussion of C<EV_ATOMIC_T> in the embedding
section below on what exactly this means).

This call incurs the overhead of a syscall only once per loop iteration,
so while the overhead might be noticeable, it doesn't apply to repeated
calls to C<ev_async_send>.

=back

=head1 OTHER FUNCTIONS

There are some other functions of possible interest. Described. Here. Now.

=over 4
…
Example: Define a class with an IO and idle watcher, start one of them in
the constructor.

   class myclass
   {
     ev::io io; void io_cb (ev::io &w, int revents);
     ev::idle idle; void idle_cb (ev::idle &w, int revents);

     myclass (int fd)
     {
       io .set <myclass, &myclass::io_cb > (this);
       idle.set <myclass, &myclass::idle_cb> (this);

       io.start (fd, ev::READ);
     }
   };


=head1 MACRO MAGIC

Libev can be compiled with a variety of options, the most fundamental
…
Libev can be (and often is) directly embedded into host
applications. Examples of applications that embed it include the Deliantra
Game Server, the EV perl module, the GNU Virtual Private Ethernet (gvpe)
and rxvt-unicode.

The goal is to enable you to just copy the necessary files into your
source directory without having to change even a single line in them, so
you can easily upgrade by simply copying (or having a checked-out copy of
libev somewhere in your source tree).

=head2 FILESETS
… | |
… | |

If defined to be C<1>, libev will try to detect the availability of the
monotonic clock option at both compiletime and runtime. Otherwise no use
of the monotonic clock option will be attempted. If you enable this, you
usually have to link against librt or something similar. Enabling it when
the functionality isn't available is safe, though, although you have
to make sure you link against any libraries the C<clock_gettime>
function is hiding in (often F<-lrt>).

=item EV_USE_REALTIME

If defined to be C<1>, libev will try to detect the availability of the
realtime clock option at compiletime (and assume its availability at
runtime if successful). Otherwise no use of the realtime clock option will
be attempted. This effectively replaces C<gettimeofday> by C<clock_get
(CLOCK_REALTIME, ...)> and will not normally affect correctness. See the
note about libraries in the description of C<EV_USE_MONOTONIC>, though.

=item EV_USE_NANOSLEEP

If defined to be C<1>, libev will assume that C<nanosleep ()> is available
and will use it for delays. Otherwise it will use C<select ()>.

=item EV_USE_SELECT

If undefined or defined to be C<1>, libev will compile in support for the
C<select>(2) backend. No attempt at autodetection will be done: if no
…
wants osf handles on win32 (this is the case when the select to
be used is the winsock select). This means that it will call
C<_get_osfhandle> on the fd to convert it to an OS handle. Otherwise,
it is assumed that all these functions actually work on fds, even
on win32. Should not be defined on non-win32 platforms.

=item EV_FD_TO_WIN32_HANDLE

If C<EV_SELECT_IS_WINSOCKET> is enabled, then libev needs a way to map
file descriptors to socket handles. When not defining this symbol (the
default), then libev will call C<_get_osfhandle>, which is usually
correct. In some cases, programs use their own file descriptor management,
in which case they can provide this function to map fds to socket handles.

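A hypothetical override might look like this; C<my_fd_to_handle> is an
assumption, standing in for whatever mapping the program's own fd
management layer provides, and is not a real libev or win32 function:

```c
/* Hypothetical sketch: route libev's fd-to-handle lookups through the
   program's own fd management instead of _get_osfhandle. */
#define EV_FD_TO_WIN32_HANDLE(fd) my_fd_to_handle (fd)
```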
=item EV_USE_POLL

If defined to be C<1>, libev will compile in support for the C<poll>(2)
backend. Otherwise it will be enabled on non-win32 platforms. It
…

If defined to be C<1>, libev will compile in support for the Linux inotify
interface to speed up C<ev_stat> watchers. Its actual availability will
be detected at runtime.

|
|
=item EV_ATOMIC_T

Libev requires an integer type (suitable for storing C<0> or C<1>) whose
access is atomic with respect to other threads or signal contexts. No such
type is easily found in the C language, so you can provide your own type
that you know is safe for your purposes. It is used both for signal handler
"locking" and for signal and thread safety in C<ev_async> watchers.

In the absence of this define, libev will use C<sig_atomic_t volatile>
(from F<signal.h>), which is usually good enough on most platforms.

=item EV_H

The name of the F<ev.h> header file used to include it. The default if
undefined is C<"ev.h"> in F<event.h>, F<ev.c> and F<ev++.h>. This can be
used to virtually rename the F<ev.h> header file in case of conflicts.

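For example, if the copied header had to be renamed to dodge a conflict,
the override might be sketched as follows (the file name F<myev.h> is
hypothetical):

```c
/* Hypothetical: the libev header was copied into the source tree
   under a different name, so point EV_H at it. */
#define EV_H "myev.h"
```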
=item EV_CONFIG_H

If C<EV_STANDALONE> isn't C<1>, this variable can be used to override
F<ev.c>'s idea of where to find the F<config.h> file, similarly to
C<EV_H>, above.

=item EV_EVENT_H

Similarly to C<EV_H>, this macro can be used to override F<event.c>'s idea
of how the F<event.h> header can be found; the default is C<"event.h">.

=item EV_PROTOTYPES

If defined to be C<0>, then F<ev.h> will not define any function
prototypes, but still define all the structs and other symbols. This is
…
=item EV_FORK_ENABLE

If undefined or defined to be C<1>, then fork watchers are supported. If
defined to be C<0>, then they are not.

=item EV_ASYNC_ENABLE

If undefined or defined to be C<1>, then async watchers are supported. If
defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed, define this symbol to C<1>. Currently this is only used for gcc to
override some inlining decisions, which saves roughly 30% code size on
amd64.
…
than enough. If you need to manage thousands of children you might want to
increase this value (I<must> be a power of two).

=item EV_INOTIFY_HASHSIZE

C<ev_stat> watchers use a small hash table to distribute workload by
inotify watch id. The default size is C<16> (or C<1> with C<EV_MINIMAL>),
usually more than enough. If you need to manage thousands of C<ev_stat>
watchers you might want to increase this value (I<must> be a power of
two).

…

=item ev_set_cb (ev, cb)

Can be used to change the callback member declaration in each watcher,
and the way callbacks are invoked and set. Must expand to a struct member
definition and a statement, respectively. See the F<ev.h> header file for
their default definitions. One possible use for overriding these is to
avoid the C<struct ev_loop *> as first argument in all cases, or to use
method calls instead of plain function calls in C++.

=head2 EXPORTED API SYMBOLS

If you need to re-export the API (e.g. via a dll) and you need a list of
exported symbols, you can use the provided F<Symbols.*> files which list
all public symbols, one per line:

   Symbols.ev      for libev proper
   Symbols.event   for the libevent emulation

This can also be used to rename all public symbols to avoid clashes with
multiple versions of libev linked together (which is obviously bad in
itself, but sometimes it is inconvenient to avoid this).

A sed command like this will create wrapper C<#define>'s that you need to
include before including F<ev.h>:

   <Symbols.ev sed -e "s/.*/#define & myprefix_&/" >wrap.h

This would create a file F<wrap.h> which essentially looks like this:

   #define ev_backend     myprefix_ev_backend
   #define ev_check_start myprefix_ev_check_start
   #define ev_check_stop  myprefix_ev_check_stop
   ...

=head2 EXAMPLES

For a real-world example of a program that includes libev
verbatim, you can have a look at the EV perl module
…

=item Starting and stopping timer/periodic watchers: O(log skipped_other_timers)

This means that, when you have a watcher that triggers in one hour and
there are 100 watchers that would trigger before that, then inserting will
have to skip roughly seven (C<ld 100>) of these watchers.

=item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)

That means that changing a timer costs less than removing/adding them,
as only the relative motion in the event queue has to be paid for.

=item Starting io/check/prepare/idle/signal/child/fork/async watchers: O(1)

These just add the watcher into an array or at the head of a list.

=item Stopping check/prepare/idle/fork/async watchers: O(1)

=item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))

These watchers are stored in lists, which then need to be walked to find
the correct watcher to remove. The lists are usually short (you don't
usually have many watchers waiting for the same fd or signal).

=item Finding the next timer in each loop iteration: O(1)

By virtue of using a binary heap, the next timer is always found at the
beginning of the storage array.

=item Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)

A change means an I/O watcher gets started or stopped, which requires
libev to recalculate its status (and possibly tell the kernel, depending
on the backend and whether C<ev_io_set> was used).


=item Activating one watcher (putting it into the pending state): O(1)

=item Priority handling: O(number_of_priorities)

Priorities are implemented by allocating some space for each
priority. When doing priority-based operations, libev usually has to
linearly search all the priorities, but starting/stopping and activating
watchers becomes O(1) with respect to priority handling.

=item Sending an ev_async: O(1)

=item Processing ev_async_send: O(number_of_async_watchers)

=item Processing signals: O(max_signal_number)

Sending involves a syscall I<iff> there were no other C<ev_async_send>
calls in the current loop iteration. Checking for async and signal events
involves iterating over all running async watchers or all signal numbers.

=back

|
|
=head1 Win32 platform limitations and workarounds

Win32 doesn't support any of the standards (e.g. POSIX) that libev
requires, and its I/O model is fundamentally incompatible with the POSIX
model. Libev still offers limited functionality on this platform in
the form of the C<EVBACKEND_SELECT> backend, and only supports socket
descriptors. This only applies when using Win32 natively, not when using
e.g. cygwin.

There is no supported compilation method available on windows except
embedding it into other applications.

Due to the many, low, and arbitrary limits on the win32 platform and the
abysmal performance of winsockets, using a large number of sockets is not
recommended (and not reasonable). If your program needs to use more than
a hundred or so sockets, then likely it needs to use a totally different
implementation for windows, as libev offers the POSIX model, which cannot
be implemented efficiently on windows (microsoft monopoly games).

=over 4

=item The winsocket select function

The winsocket C<select> function doesn't follow POSIX in that it requires
socket I<handles> and not socket I<file descriptors>. This makes select
very inefficient, and also requires a mapping from file descriptors
to socket handles. See the discussion of the C<EV_SELECT_USE_FD_SET>,
C<EV_SELECT_IS_WINSOCKET> and C<EV_FD_TO_WIN32_HANDLE> preprocessor
symbols for more info.

The configuration for a "naked" win32 using the microsoft runtime
libraries and raw winsocket select is:

   #define EV_USE_SELECT 1
   #define EV_SELECT_IS_WINSOCKET 1   /* forces EV_SELECT_USE_FD_SET, too */

Note that winsocket's handling of fd sets is O(n), so you can easily get a
complexity in the O(n²) range when using win32.

=item Limited number of file descriptors

Windows has numerous arbitrary (and low) limits on things. Early versions
of winsocket's select only supported waiting for a max. of C<64> handles
(probably owing to the fact that all windows kernels can only wait for
C<64> things at the same time internally; microsoft recommends spawning
a chain of threads, each waiting for 63 handles and the previous thread).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere; for example, perl does its own
select emulation on windows).

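A sketch of such a build configuration; the define must precede the
winsock headers, and C<2048> is just the example value from above:

```c
/* Must come before any winsock header so the larger fd_set layout
   is compiled in. */
#define FD_SETSIZE 2048
#include <winsock2.h>
```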
|
|
Another limit is the number of file descriptors in the microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64> fetish
or something like this inside microsoft). You can increase this by calling
C<_setmaxstdio>, which can increase this limit to C<2048> (another
arbitrary limit), but is broken in many versions of the microsoft runtime
libraries.

This might get you to about C<512> or C<2048> sockets (depending on
windows version and/or the phase of the moon). To get more, you need to
wrap all I/O functions and provide your own fd management, but the cost of
calling select (O(n²)) will likely make this unworkable.

=back


=head1 AUTHOR

Marc Lehmann <libev@schmorp.de>.
