…

C<ev_embeddable_backends () & ev_supported_backends ()>, likewise for
recommended ones.

See the description of C<ev_embed> watchers for more info.

=item ev_set_allocator (void *(*cb)(void *ptr, long size)) [NOT REENTRANT]

Sets the allocation function to use (the prototype is similar - the
semantics are identical to the C<realloc> C89/SuS/POSIX function). It is
used to allocate and free memory (no surprises here). If it returns zero
when memory needs to be allocated (C<size != 0>), the library might abort
…

   }

   ...
   ev_set_allocator (persistent_realloc);

=item ev_set_syserr_cb (void (*cb)(const char *msg)); [NOT REENTRANT]

Set the callback function to call on a retryable system call error (such
as failed select, poll, epoll_wait). The message is a printable string
indicating the system call or subsystem causing the problem. If this
callback is set, then libev will expect it to remedy the situation, no
…

writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.

This backend maps C<EV_READ> to the C<readfds> set and C<EV_WRITE> to the
C<writefds> set (and to work around Microsoft Windows bugs, also onto the
C<exceptfds> set on that platform).

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale
…

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible,
i.e. keep at least one watcher active per fd at all times. Stopping and
starting a watcher (without re-setting it) also usually doesn't cause
extra overhead.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it was
broken on all BSDs except NetBSD (usually it doesn't work reliably with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "auto-detected" unless
you explicitly specify it in the flags (i.e. using C<EVBACKEND_KQUEUE>) or
libev was compiled on a known-to-be-good (-enough) system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
the target platform). See C<ev_embed> watchers for more info.
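
Example: A sketch of this embedding pattern (illustrative only - see the
C<ev_embed> section for the authoritative treatment). It creates a kqueue
loop for sockets only when kqueue is available but not among the
recommended backends:

   struct ev_loop *loop = ev_default_init (0); /* e.g. poll or select */
   struct ev_loop *loop_socket = 0;
   ev_embed embed;

   if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
       {
         /* a callback of 0 makes libev sweep the embedded loop itself */
         ev_embed_init (&embed, 0, loop_socket);
         ev_embed_start (loop, &embed);
       }

   if (!loop_socket)
     loop_socket = loop; /* fall back to the main loop */

   /* now use loop_socket for all sockets, and loop for everything else */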

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident. Support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and, did I mention it,
using it only for sockets.

This backend maps C<EV_READ> into an C<EVFILT_READ> kevent with
C<NOTE_EOF>, and C<EV_WRITE> into an C<EVFILT_WRITE> kevent with
C<NOTE_EOF>.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
…

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

On the positive side, with the exception of the spurious readiness
notifications, this backend actually performed fully to specification
in all tests and is fully embeddable, which is a rare feat among the
OS-specific backends.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
…

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as listed here). If none are
specified, all backends in C<ev_recommended_backends ()> will be tried.

Example: This is the most typical usage.

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

Example: Restrict libev to the select and poll backends, and do not allow
environment settings to be taken into account:

   ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);

Example: Use whatever libev has to offer, but make sure that kqueue is
used if available (warning, breaks stuff, best use only with your own
private event loop and only if you know the OS supports your types of
fds):

   ev_default_loop (ev_recommended_backends () | EVBACKEND_KQUEUE);

=item struct ev_loop *ev_loop_new (unsigned int flags)
…

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after fork that you want to re-use in the child, and how you do this is
entirely your own problem.
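
Example: A sketch of the usual pattern (illustrative only), assuming
C<loop> was created with C<ev_loop_new> before forking:

   pid_t pid = fork ();

   if (pid == 0)
     {
       /* child: the backend state (e.g. the epoll fd) is now stale */
       ev_loop_fork (loop);
       ev_loop (loop, 0);
     }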

=item int ev_is_default_loop (loop)

Returns true when the given loop is, in fact, the default loop, and false
otherwise.

=item unsigned int ev_loop_count (loop)

Returns the count of loop iterations for the loop, which is identical to
the number of times libev did poll for new events. It starts at C<0> and
…

If the flags argument is specified as C<0>, it will not return until
either no event watchers are active anymore or C<ev_unloop> was called.

Please note that an explicit C<ev_unloop> is usually better than
relying on all watchers to be stopped when deciding when a program has
finished (especially in interactive programs), but having a program
that automatically loops as long as it has to and no longer by virtue
of relying on its watchers stopping correctly, that is truly a thing of
beauty.

A flags value of C<EVLOOP_NONBLOCK> will look for new events, will handle
those events and any already outstanding ones, but will not block your
process in case there are no events and will return after one iteration of
the loop.

A flags value of C<EVLOOP_ONESHOT> will look for new events (waiting if
necessary) and will handle those and any already outstanding ones. It
will block your process until at least one new event arrives (which could
be an event internal to libev itself, so there is no guarantee that a
user-registered callback will be called), and will return after one
iteration of the loop.

This is useful if you are waiting for some external event in conjunction
with something not expressible using other libev watchers (i.e. "roll your
own C<ev_loop>"). However, a pair of C<ev_prepare>/C<ev_check> watchers is
usually a better approach for this kind of thing.
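
Example: A sketch of such a hand-rolled loop (illustrative only; the
surrounding logic is an assumption, not part of libev):

   for (;;)
     {
       /* process whatever is pending right now, without blocking */
       ev_loop (loop, EVLOOP_NONBLOCK);

       /* ... do something not expressible as a watcher here ... */

       /* then wait for and process (at least) one batch of events */
       ev_loop (loop, EVLOOP_ONESHOT);
     }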

Here are the gory details of what C<ev_loop> does:

   - Before the first iteration, call any pending watchers.
   …
     any active watchers at all will result in not sleeping).
   - Sleep if the I/O and timer collect interval say so.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" (ev_now ()), and do time jump adjustments.
   - Queue all expired timers.
   - Queue all expired periodics.
   - Unless any events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
…

=item ev_unref (loop)

Ref/unref can be used to add or remove a reference count on the event
loop: Every watcher keeps one reference, and as long as the reference
count is nonzero, C<ev_loop> will not return on its own.

If you have a watcher you never unregister that should not keep C<ev_loop>
from returning, call ev_unref() after starting, and ev_ref() before
stopping it.

As an example, libev itself uses this for its internal signal pipe: It is
not visible to the libev user and should not keep C<ev_loop> from exiting
if no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>
(but only if the watcher wasn't active before, or was active before,
respectively).
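
Example: A recurring timer that should not keep the loop alive (a sketch;
C<once_in_a_while> is a hypothetical callback, not a libev function):

   ev_timer tw;

   ev_timer_init (&tw, once_in_a_while, 60., 60.);
   ev_timer_start (loop, &tw);
   ev_unref (loop);   /* unref after start */

   /* ... and later, when stopping it: */
   ev_ref (loop);     /* ref before stop */
   ev_timer_stop (loop, &tw);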

…

Setting these to a higher value (the C<interval> I<must> be >= C<0>)
allows libev to delay invocation of I/O and timer/periodic callbacks
to increase efficiency of loop iterations (or to increase power-saving
opportunities).

The idea is that sometimes your program runs just fast enough to handle
one (or very few) event(s) per loop iteration. While this makes the
program responsive, it also wastes a lot of CPU time to poll for new
events, especially with backends like C<select ()> which have a high
overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
…

C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null
value will not introduce any overhead in libev.
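
Example: A sketch of a typical setting for a busy interactive server (the
value is illustrative):

   /* batch I/O and timeout callbacks for up to 100ms per iteration */
   ev_set_io_collect_interval (loop, 0.1);
   ev_set_timeout_collect_interval (loop, 0.1);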

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
…

they fire on, say, one-second boundaries only.

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
through all internal structures and checks them for validity. If anything
is found to be inconsistent, it will print an error message to standard
error and call C<abort ()>.

This can be used to catch bugs inside libev itself: under normal
circumstances, this function will never abort as of course libev keeps its
data structures consistent.
…

happen because the watcher could not be properly started because libev
ran out of memory, a file descriptor was found to be closed or any other
problem. You best act on it by reporting the problem and somehow coping
with the watcher being stopped.

Libev will usually signal a few "dummy" events together with an error, for
example it might indicate that a fd is readable or writable, and if your
callback is well-written it can just attempt the operation and cope with
the error from read() or write(). This will not work in multi-threaded
programs, though, as the fd could already be closed and reused for another
thing, so beware.

=back

=head2 GENERIC WATCHER FUNCTIONS
…

(or never started) and there are no pending events outstanding.

The callback is always of type C<void (*)(ev_loop *loop, ev_TYPE *watcher,
int revents)>.

Example: Initialise an C<ev_io> watcher in two steps.

   ev_io w;
   ev_init (&w, my_cb);
   ev_io_set (&w, STDIN_FILENO, EV_READ);

=item C<ev_TYPE_set> (ev_TYPE *, [args])

This macro initialises the type-specific parts of a watcher. You need to
call C<ev_init> at least once before you call this macro, but you can
call C<ev_TYPE_set> any number of times. You must not, however, call this
…

difference to the C<ev_init> macro).

Although some watcher types do not have type-specific arguments
(e.g. C<ev_prepare>) you still need to call its C<set> macro.

See C<ev_init>, above, for an example.

=item C<ev_TYPE_init> (ev_TYPE *watcher, callback, [args])

This convenience macro rolls both C<ev_init> and C<ev_TYPE_set> macro
calls into a single call. This is the most convenient method to initialise
a watcher. The same limitations apply, of course.

Example: Initialise and set an C<ev_io> watcher in one step.

   ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);

=item C<ev_TYPE_start> (loop *, ev_TYPE *watcher)

Starts (activates) the given watcher. Only active watchers will receive
events. If the watcher is already active nothing will happen.

Example: Start the C<ev_io> watcher that is being abused as example in this
whole section.

   ev_io_start (EV_DEFAULT_UC, &w);

=item C<ev_TYPE_stop> (loop *, ev_TYPE *watcher)

Stops the given watcher again (if active) and clears the pending
status. It is possible that stopped watchers are pending (for example,
…

=item ev_invoke (loop, ev_TYPE *watcher, int revents)

Invoke the C<watcher> with the given C<loop> and C<revents>. Neither
C<loop> nor C<revents> need to be valid as long as the watcher callback
can deal with that fact, as both are simply passed through to the
callback.

=item int ev_clear_pending (loop, ev_TYPE *watcher)

If the watcher is pending, this function clears its pending status and
returns its C<revents> bitset (as if its callback was invoked). If the
watcher isn't pending it does nothing and returns C<0>.

Sometimes it can be useful to "poll" a watcher instead of waiting for its
callback to be invoked, which can be accomplished with this function.
|
|
1046 | |
993 | =back |
1047 | =back |
994 | |
1048 | |
995 | |
1049 | |
996 | =head2 ASSOCIATING CUSTOM DATA WITH A WATCHER |
1050 | =head2 ASSOCIATING CUSTOM DATA WITH A WATCHER |
997 | |
1051 | |
998 | Each watcher has, by default, a member C<void *data> that you can change |
1052 | Each watcher has, by default, a member C<void *data> that you can change |
999 | and read at any time, libev will completely ignore it. This can be used |
1053 | and read at any time: libev will completely ignore it. This can be used |
1000 | to associate arbitrary data with your watcher. If you need more data and |
1054 | to associate arbitrary data with your watcher. If you need more data and |
1001 | don't want to allocate memory and store a pointer to it in that data |
1055 | don't want to allocate memory and store a pointer to it in that data |
1002 | member, you can also "subclass" the watcher type and provide your own |
1056 | member, you can also "subclass" the watcher type and provide your own |
1003 | data: |
1057 | data: |
1004 | |
1058 | |
… | |
… | |
   {
     struct ev_io io;
     int otherfd;
     void *somedata;
     struct whatever *mostinteresting;
   };

   ...
   struct my_io w;
   ev_io_init (&w.io, my_cb, fd, EV_READ);

And since your callback will be called with a pointer to the watcher, you
can cast it back to your own type:

   static void my_cb (struct ev_loop *loop, struct ev_io *w_, int revents)
… | |
… | |
   }

More interesting and less C-conformant ways of casting your callback type
instead have been omitted.

Another common scenario is to use some data structure with multiple
embedded watchers:

   struct my_biggy
   {
     int some_data;
     ev_timer t1;
     ev_timer t2;
   };

In this case getting the pointer to C<my_biggy> is a bit more
complicated: Either you store the address of your C<my_biggy> struct
in the C<data> member of the watcher (for woozies), or you need to use
some pointer arithmetic using C<offsetof> inside your watchers (for real
programmers):

   #include <stddef.h>

   static void
   t1_cb (EV_P_ struct ev_timer *w, int revents)
… | |
… | |
In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

If you cannot use non-blocking mode, then force the use of a
known-to-be-good backend (at the time of this writing, this includes only
C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>).

Another thing you have to watch out for is that it is quite easy to
receive "spurious" readiness notifications, that is your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
because there is no data. Not only are some backends known to create a
lot of those (for example Solaris ports), it is very easy to get into
this situation even with a relatively standard program structure. Thus
it is best to always use non-blocking I/O: An extra C<read>(2) returning
C<EAGAIN> is far preferable to a program hanging until some data arrives.

If you cannot run the fd in non-blocking mode (for example you should
not play around with an Xlib connection), then you have to separately
re-test whether a file descriptor is really ready with a known-to-be good
interface such as poll (fortunately in our Xlib example, Xlib already
does this on its own, so it's quite safe to use). Some people additionally
use C<SIGALRM> and an interval timer, just to be sure you won't block
indefinitely.

But really, best use non-blocking mode.

=head3 The special problem of disappearing file descriptors

Some backends (e.g. kqueue, epoll) need to be told about closing a file
descriptor (either due to calling C<close> explicitly or any other means,
such as C<dup2>). The reason is that you register interest in some file
descriptor, but when it goes away, the operating system will silently drop
this interest. If another file descriptor with the same number then is
registered with libev, there is no efficient way to see that this is, in
fact, a different file descriptor.

… | |
… | |
enable C<EVFLAG_FORKCHECK>, or resort to C<EVBACKEND_SELECT> or
C<EVBACKEND_POLL>.

=head3 The special problem of SIGPIPE

While not really specific to libev, it is easy to forget about C<SIGPIPE>:
when writing to a pipe whose other end has been closed, your program gets
sent a SIGPIPE, which, by default, aborts your program. For most programs
this is sensible behaviour, for daemons, this is usually undesirable.

So when you encounter spurious, unexplained daemon exits, make sure you
ignore SIGPIPE (and maybe make sure you log the exit status of your daemon
somewhere, as that would have given you a big clue).
… | |
… | |
=item ev_io_init (ev_io *, callback, int fd, int events)

=item ev_io_set (ev_io *, int fd, int events)

Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
receive events for and C<events> is either C<EV_READ>, C<EV_WRITE> or
C<EV_READ | EV_WRITE>, to express the desire to receive the given events.

=item int fd [read-only]

The file descriptor being watched.

… | |
… | |

   static void
   stdin_readable_cb (struct ev_loop *loop, struct ev_io *w, int revents)
   {
     ev_io_stop (loop, w);
     .. read from stdin here (or from w->fd) and handle any I/O errors
   }

   ...
   struct ev_loop *loop = ev_default_init (0);
   struct ev_io stdin_readable;
… | |
… | |
Timer watchers are simple relative timers that generate an event after a
given time, and optionally repeating in regular intervals after that.

The timers are based on real time, that is, if you register an event that
times out after an hour and you reset your system clock to January last
year, it will still time out after (roughly) one hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only I<after> its timeout has
passed, but if multiple timers become ready during the same loop iteration
then order of execution is undefined.

=head3 The special problem of time updates

Establishing the current time is a costly operation (it usually takes at
least two system calls): EV therefore updates its idea of the current
time only before and after C<ev_loop> collects new events, which causes a
growing difference between C<ev_now ()> and C<ev_time ()> when handling
lots of events in one iteration.

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the
… | |
… | |
   ev_timer_again (loop, timer);

This is slightly more efficient than stopping/starting the timer each time
you want to modify its timeout value.

Note, however, that it is often even more efficient to remember the
time of the last activity and let the timer time-out naturally. In the
callback, you then check whether the time-out is real, or, if there was
some activity, you reschedule the watcher to time-out in "last_activity +
timeout - ev_now ()" seconds.

=item ev_tstamp repeat [read-write]

The current C<repeat> value. Will be used each time the watcher times out
or C<ev_timer_again> is called, and determines the next timeout (if any),
which is also when any modifications are taken into account.

=back

=head3 Examples
… | |
… | |
to trigger the event (unlike an C<ev_timer>, which would still trigger
roughly 10 seconds later as it uses a relative timeout).

C<ev_periodic>s can also be used to implement vastly more complex timers,
such as triggering an event on each "midnight, local time", or other
complicated rules.

As with timers, the callback is guaranteed to be invoked only when the
time (C<at>) has passed, but if multiple periodic timers become ready
during the same loop iteration, then order of execution is undefined.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_periodic_init (ev_periodic *, callback, ev_tstamp at, ev_tstamp interval, reschedule_cb)

=item ev_periodic_set (ev_periodic *, ev_tstamp after, ev_tstamp repeat, reschedule_cb)

Lots of arguments, let's sort it out... There are basically three modes of
operation, and we will explain them from simplest to most complex:

=over 4

=item * absolute timer (at = time, interval = reschedule_cb = 0)

In this configuration the watcher triggers an event after the wall clock
time C<at> has passed. It will not repeat and will not adjust when a time
jump occurs, that is, if it is to be run at January 1st 2011 then it will
only run when the system clock reaches or surpasses this time.

=item * repeating interval timer (at = offset, interval > 0, reschedule_cb = 0)

In this mode the watcher will always be scheduled to time out at the next
C<at + N * interval> time (for some integer N, which can also be negative)
and then repeat, regardless of any time jumps.

This can be used to create timers that do not drift with respect to the
system clock, for example, here is a C<ev_periodic> that triggers each
hour, on the hour:

   ev_periodic_set (&periodic, 0., 3600., 0);

This doesn't mean there will always be 3600 seconds in between triggers,
but only that the callback will be called when the system time shows a
… | |
… | |
=back

=head3 Examples

Example: Call a callback every hour, or, more precisely, whenever the
system time is divisible by 3600. The callback invocation times have
potentially a lot of jitter, but good long-term stability.

   static void
   clock_cb (struct ev_loop *loop, struct ev_io *w, int revents)
   {
… | |
… | |
   #include <math.h>

   static ev_tstamp
   my_scheduler_cb (struct ev_periodic *w, ev_tstamp now)
   {
     return now + (3600. - fmod (now, 3600.));
   }

   ev_periodic_init (&hourly_tick, clock_cb, 0., 0., my_scheduler_cb);

Example: Call a callback every hour, starting now:
… | |
… | |
Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
normal event processing, like any other event.

If you want signals asynchronously, just use C<sigaction> as you would
do without libev and forget about sharing the signal. You can even use
C<ev_async> from a signal handler to synchronously wake up an event loop.

You can configure as many watchers as you like per signal. Only when the
first watcher gets started will libev actually register a signal handler
with the kernel (thus it coexists with your own signal handlers as long as
you don't register any with libev for the same signal). Similarly, when
the last signal watcher for a signal is stopped, libev will reset the
signal handler to SIG_DFL (regardless of what it was set to before).

If possible and supported, libev will install its handlers with
C<SA_RESTART> behaviour enabled, so system calls should not be unduly
interrupted. If you have a problem with system calls getting interrupted by
signals you can block all signals in an C<ev_check> watcher and unblock
… | |
… | |


=head2 C<ev_child> - watch out for process status changes

Child watchers trigger when your process receives a SIGCHLD in response to
some child status changes (most typically when a child of yours dies or
exits). It is permissible to install a child watcher I<after> the child
has been forked (which implies it might have already exited), as long
as the event loop isn't entered (or is continued from a watcher), i.e.,
forking and then immediately registering a watcher for the child is fine,
but forking and registering a watcher a few event loop iterations later is
not.

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.

=head3 Process Interaction
… | |
… | |
the stat buffer having unspecified contents.

The path I<should> be absolute and I<must not> end in a slash. If it is
relative and your working directory changes, the behaviour is undefined.

Since there is no standard kernel interface to do this, the portable
implementation simply calls C<stat (2)> regularly on the path to see if
it changed somehow. You can specify a recommended polling interval for
this case. If you specify a polling interval of C<0> (highly recommended!)
then a I<suitable, unspecified default> value will be used (which
you can expect to be around five seconds, although this might change
dynamically). Libev will also impose a minimum interval which is currently
around C<0.1>, but that's usually overkill.

This watcher type is not meant for massive numbers of stat watchers,
as even with OS-supported change notifications, this can be
resource-intensive.

At the time of this writing, the only OS-specific interface implemented
is the Linux inotify interface (implementing kqueue support is left as
an exercise for the reader. Note, however, that the author sees no way
of implementing C<ev_stat> semantics with kqueue).

=head3 ABI Issues (Largefile Support)

Libev by default (unless the user overrides this) uses the default
compilation environment, which means that on systems with large file
… | |
… | |
1716 | file interfaces available by default (as e.g. FreeBSD does) and not |
1790 | file interfaces available by default (as e.g. FreeBSD does) and not |
1717 | optional. Libev cannot simply switch on large file support because it has |
1791 | optional. Libev cannot simply switch on large file support because it has |
1718 | to exchange stat structures with application programs compiled using the |
1792 | to exchange stat structures with application programs compiled using the |
1719 | default compilation environment. |
1793 | default compilation environment. |
1720 | |
1794 | |
1721 | =head3 Inotify |
1795 | =head3 Inotify and Kqueue |
1722 | |
1796 | |
1723 | When C<inotify (7)> support has been compiled into libev (generally only |
1797 | When C<inotify (7)> support has been compiled into libev (generally only |
1724 | available on Linux) and present at runtime, it will be used to speed up |
1798 | available with Linux) and present at runtime, it will be used to speed up |
1725 | change detection where possible. The inotify descriptor will be created lazily |
1799 | change detection where possible. The inotify descriptor will be created lazily |
1726 | when the first C<ev_stat> watcher is being started. |
1800 | when the first C<ev_stat> watcher is being started. |
1727 | |
1801 | |
Inotify presence does not change the semantics of C<ev_stat> watchers
except that changes might be detected earlier, and in some cases, to avoid
making regular C<stat> calls. Even in the presence of inotify support
there are many cases where libev has to resort to regular C<stat> polling,
but as long as the path exists, libev usually gets away without polling.

There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times, and detecting renames, unlinks
etc. is difficult.

=head3 The special problem of stat time resolution

The C<stat ()> system call only supports full-second resolution portably,
and even on systems where the resolution is higher, most file systems
still only support whole seconds.

That means that, if the time is the only thing that changes, you can
easily miss updates: on the first update, C<ev_stat> detects a change and
calls your callback, which does something. When there is another update
within the same second, C<ev_stat> will be unable to detect it unless the
stat data does change in other ways (e.g. file size).

The solution to this is to delay acting on a change for slightly more
than a second (or till slightly after the next full second boundary),
using a roughly one-second-delay C<ev_timer> (e.g. C<ev_timer_set (w, 0.,
1.02); ev_timer_again (loop, w)>).
…
C<path>. The C<interval> is a hint on how quickly a change is expected to
be detected and should normally be specified as C<0> to let libev choose
a suitable value. The memory pointed to by C<path> must point to the same
path for as long as the watcher is active.

The callback will receive an C<EV_STAT> event when a change was detected,
relative to the attributes at the time the watcher was started (or the
last change was detected).

=item ev_stat_stat (loop, ev_stat *)

Updates the stat buffer immediately with new values. If you change the
watched path in your callback, you could call this function to avoid
…


=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
priority are pending (prepare, check and other idle watchers do not count
as receiving "events").

That is, as long as your process is busy handling sockets or timeouts
(or even signals, imagine) of the same or higher priority it will not be
triggered. But when your process is idle (or only lower-priority watchers
are pending), the idle watchers are being called once per event loop
…
  ev_idle_start (loop, idle_cb);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop!

Prepare and check watchers are usually (but not always) used in pairs:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

You I<must not> call C<ev_loop> or similar functions that enter
the current event loop from either C<ev_prepare> or C<ev_check>
…
those watchers, i.e. the sequence will always be C<ev_prepare>, blocking,
C<ev_check>, so if you have one watcher of each kind they will always be
called in pairs bracketing the blocking call.

Their main purpose is to integrate other event mechanisms into libev and
their use is somewhat advanced. They could be used, for example, to track
variable changes, implement your own watchers, integrate net-snmp or a
coroutine library and lots more. They are also occasionally useful if
you cache some data and want to flush it before blocking (for example,
in X programs you might want to do an C<XFlush ()> in an C<ev_prepare>
watcher).

This is done by examining in each prepare call which file descriptors
need to be watched by the other library, registering C<ev_io> watchers
for them and starting an C<ev_timer> watcher for any timeouts (many
libraries provide exactly this functionality). Then, in the check watcher,
you check for any events that occurred (by checking the pending status
of all watchers and stopping them) and call back into the library. The
I/O and timer callbacks will never actually be called (but must be valid
nevertheless, because you never know, you know?).

As another example, the Perl Coro module uses these hooks to integrate
coroutines into libev programs, by yielding to other active coroutines
during each prepare and only letting the process block if no coroutines
are ready to run (it's actually more complicated: it only runs coroutines
…
loop from blocking if lower-priority coroutines are active, thus mapping
low-priority coroutines to idle/background tasks).

It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
priority, to ensure that they are being run before any other watchers
after the poll (this doesn't matter for C<ev_prepare> watchers).

Also, C<ev_check> watchers (and C<ev_prepare> watchers, too) should not
activate ("feed") events into libev. While libev fully supports this, they
might get executed before other C<ev_check> watchers did their job. As
C<ev_check> watchers are often used to embed other (non-libev) event
loops, those other event loops might be in an unusable state until their
C<ev_check> watcher ran (always remind yourself to coexist peacefully with
others).

=head3 Watcher-Specific Functions and Data Members

=over 4

…

=item ev_check_init (ev_check *, callback)

Initialises and configures the prepare or check watcher - they have no
parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
macros, but using them is utterly, utterly, utterly and completely
pointless.

=back

=head3 Examples

…
  }

  // do not ever call adns_afterpoll

Method 4: Do not use a prepare or check watcher because the module you
want to embed is not flexible enough to support it. Instead, you can
override their poll function. The drawback with this solution is that the
main loop is now no longer controllable by EV. The C<Glib::EV> module uses
this approach, effectively embedding EV as a client into the horrible
libglib event loop.

  static gint
  event_poll_func (GPollFD *fds, guint nfds, gint timeout)
  {
    int got_events = 0;
…
prioritise I/O.

As an example for a bug workaround, the kqueue backend might only support
sockets on some platform, so it is unusable as a generic backend, but you
still want to make use of it because you have many sockets and it scales
so nicely. In this case, you would create a kqueue-based loop and embed
it into your default loop (which might use e.g. poll). Overall operation
will be a bit slower because first libev has to call C<poll> and then
C<kevent>, but at least you can use both mechanisms for what they are
best: C<kqueue> for scalable sockets and C<poll> if you want it to work :)

As for prioritising I/O: under rare circumstances you have the case where
some fds have to be watched and handled very quickly (with low latency),
and even priorities and idle watchers might have too much overhead. In
this case you would put all the high priority stuff in one loop and all
the rest in a second one, and embed the second one in the first.

As long as the watcher is active, the callback will be invoked every time
there might be events pending in the embedded loop. The callback must then
call C<ev_embed_sweep (mainloop, watcher)> to make a single sweep and invoke
their callbacks (you could also start an idle watcher to give the embedded
…
interested in that.

Also, there are currently no special provisions for forking: when you
fork, you not only have to call C<ev_loop_fork> on both loops, but you
will also have to stop and restart any C<ev_embed> watchers yourself -
but you can use a fork watcher to handle this automatically, and future
versions of libev might do just that.

Unfortunately, not all backends are embeddable: only the ones returned by
C<ev_embeddable_backends> are, which, unfortunately, does not include any
portable one.

So when you want to use this feature you will always have to be prepared
that you cannot get an embeddable loop. The recommended way to get around
…
is that the author does not know of a simple (or any) algorithm for a
multiple-writer-single-reader queue that works in all cases and doesn't
need elaborate support such as pthreads.

That means that if you want to queue data, you have to provide your own
queue. But at least I can tell you how to implement locking around your
queue:

=over 4

=item queueing from a signal handler context

…

=item ev_async_init (ev_async *, callback)

Initialises and configures the async watcher - it has no parameters of any
kind. There is a C<ev_async_set> macro, but using it is utterly pointless,
trust me.

=item ev_async_send (loop, ev_async *)

Sends/signals/activates the given C<ev_async> watcher, that is, feeds
an C<EV_ASYNC> event on the watcher into the event loop. Unlike
C<ev_feed_event>, this call is safe to do from other threads, signal or
similar contexts (see the discussion of C<EV_ATOMIC_T> in the embedding
section below on what exactly this means).

This call incurs the overhead of a system call only once per loop iteration,
so while the overhead might be noticeable, it doesn't apply to repeated
…

The prototype of the C<function> must be C<void (*)(ev::TYPE &w, int)>.

See the method-C<set> above for more details.

Example: Use a plain function as callback.

  static void io_cb (ev::io &w, int revents) { }
  iow.set <io_cb> ();

=item w->set (struct ev_loop *)
…
Example: Define a class with an IO and idle watcher, start one of them in
the constructor.

  class myclass
  {
    ev::io   io   ; void io_cb   (ev::io   &w, int revents);
    ev::idle idle ; void idle_cb (ev::idle &w, int revents);

    myclass (int fd)
    {
      io  .set <myclass, &myclass::io_cb  > (this);
      idle.set <myclass, &myclass::idle_cb> (this);
…
=item Perl

The EV module implements the full libev API and is actually used to test
libev. EV is developed together with libev. Apart from the EV core module,
there are additional modules that implement libev-compatible interfaces
to C<libadns> (C<EV::ADNS>, but C<AnyEvent::DNS> is preferred nowadays),
C<Net::SNMP> (C<Net::SNMP::EV>) and the C<libglib> event core (C<Glib::EV>
and C<EV::Glib>).

It can be found and installed via CPAN, its homepage is at
L<http://software.schmorp.de/pkg/EV>.

=item Python

…

=head2 PREPROCESSOR SYMBOLS/MACROS

Libev can be configured via a variety of preprocessor symbols you have to
define before including any of its files. The default in the absence of
autoconf is documented for every option.

=over 4

=item EV_STANDALONE

…
When doing priority-based operations, libev usually has to linearly search
all the priorities, so having many of them (hundreds) uses a lot of space
and time, so using the defaults of five priorities (-2 .. +2) is usually
fine.

If your embedding application does not need any priorities, defining these
both to C<0> will save some memory and CPU.

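
For example, an embedding application that does not use priorities at
all could configure its private copy of libev like this (a sketch of a
hypothetical wrapper header, assuming the usual single-file embedding
setup):

```c
/* ev_config.h - hypothetical wrapper header for an embedded libev copy */
#define EV_MINPRI 0  /* collapse the priority range to a single priority */
#define EV_MAXPRI 0
#include "ev.h"
```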
=item EV_PERIODIC_ENABLE

If undefined or defined to be C<1>, then periodic timers are supported. If
defined to be C<0>, then they are not. Disabling them saves a few kB of
…
code.

=item EV_EMBED_ENABLE

If undefined or defined to be C<1>, then embed watchers are supported. If
defined to be C<0>, then they are not. Embed watchers rely on most other
watcher types, which therefore must not be disabled.

=item EV_STAT_ENABLE

If undefined or defined to be C<1>, then stat watchers are supported. If
defined to be C<0>, then they are not.
…
two).

=item EV_USE_4HEAP

Heaps are not very cache-efficient. To improve the cache-efficiency of the
timer and periodics heaps, libev uses a 4-heap when this symbol is defined
to C<1>. The 4-heap uses more complicated (longer) code but has noticeably
faster performance with many (thousands) of watchers.

The default is C<1> unless C<EV_MINIMAL> is set in which case it is C<0>
(disabled).

=item EV_HEAP_CACHE_AT

Heaps are not very cache-efficient. To improve the cache-efficiency of the
timer and periodics heaps, libev can cache the timestamp (I<at>) within
the heap structure (selected by defining C<EV_HEAP_CACHE_AT> to C<1>),
which uses 8-12 bytes more per watcher and a few hundred bytes more code,
but avoids random read accesses on heap changes. This improves performance
noticeably with many (hundreds) of watchers.

The default is C<1> unless C<EV_MINIMAL> is set in which case it is C<0>
(disabled).

=item EV_VERIFY

…
called once per loop, which can slow down libev. If set to C<3>, then the
verification code will be called very frequently, which will slow down
libev considerably.

The default is C<1>, unless C<EV_MINIMAL> is set, in which case it will be
C<0>.
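
During development you might, for example, build your embedded copy of
libev with the most thorough checking enabled (a sketch of a hypothetical
debug configuration, not a recommendation for production builds):

```c
/* hypothetical debug build configuration for an embedded libev copy */
#define EV_VERIFY 3  /* frequent internal consistency checks - very slow */
#include "ev.h"
```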

=item EV_COMMON

By default, all watchers have a C<void *data> member. By redefining
this macro to something else you can include more and other types of
…
and the way callbacks are invoked and set. Must expand to a struct member
definition and a statement, respectively. See the F<ev.h> header file for
their default definitions. One possible use for overriding these is to
avoid the C<struct ev_loop *> as first argument in all cases, or to use
method calls instead of plain function calls in C++.

=back

=head2 EXPORTED API SYMBOLS

If you need to re-export the API (e.g. via a DLL) and you need a list of
exported symbols, you can use the provided F<Symbol.*> files which list
…

=head1 THREADS AND COROUTINES

=head2 THREADS

All libev functions are reentrant and thread-safe unless explicitly
documented otherwise, but libev uses no locking itself. This means that
you can use as many loops as you want in parallel, as long as there are
no concurrent calls into any libev function with the same loop parameter
(C<ev_default_*> calls have an implicit default loop parameter, of
course): libev guarantees that different event loops share no data
structures that need any locking.

Or to put it differently: calls with different loop parameters can be done
concurrently from multiple threads, calls with the same loop parameter
must be done serially (but can be done from different threads, as long as
only one thread ever is inside a call at any point in time, e.g. by using
a mutex per loop).

|
|
3322 | Specifically to support threads (and signal handlers), libev implements |
|
|
3323 | so-called C<ev_async> watchers, which allow some limited form of |
|
|
3324 | concurrency on the same event loop, namely waking it up "from the |
|
|
3325 | outside". |
3232 | |
3326 | |
If you want to know which design (one loop with locking, multiple loops
without, or something else still) is best for your problem, then I cannot
help you, but here is some generic advice:

=over 4

=item * most applications have a main thread: use the default libev loop
in that thread, or create a separate thread running only the default loop.

…

Choosing a model is hard - look around, learn, know that usually you can do
better than you currently do :-)

=item * often you need to talk to some other thread which blocks in the
event loop.

C<ev_async> watchers can be used to wake them up from other threads safely
(or from signal contexts...).

An example use would be to communicate signals or other events that only
work in the default loop by registering the signal watcher with the
default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal.

=back

=head2 COROUTINES

…

coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines and switch freely between both coroutines running the
loop, as long as you don't confuse yourself). The only exception is that
you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow coroutine switches.


=head1 COMPLEXITIES

In this section the complexities of (many of) the algorithms used inside

…

=item Priority handling: O(number_of_priorities)

Priorities are implemented by allocating some space for each
priority. When doing priority-based operations, libev usually has to
linearly search all the priorities, but starting/stopping and activating
watchers becomes O(1) with respect to priority handling.

=item Sending an ev_async: O(1)

=item Processing ev_async_send: O(number_of_async_watchers)

…

Not a libev limitation but worth mentioning: windows apparently doesn't
accept large writes: instead of resulting in a partial write, windows will
either accept everything or return C<ENOBUFS> if the buffer is too large,
so make sure you only write small amounts into your sockets (less than a
megabyte seems safe, but this apparently depends on the amount of memory
available).

Due to the many, low, and arbitrary limits on the win32 platform and
the abysmal performance of winsockets, using a large number of sockets
is not recommended (and not reasonable). If your program needs to use

…

   #define EV_SELECT_IS_WINSOCKET 1 /* configure libev for windows select */

   #include "ev.h"

And compile the following F<evwrap.c> file into your project (make sure
you do I<not> compile the F<ev.c> or any other embedded source files!):

   #include "evwrap.h"
   #include "ev.c"

=over 4

…

calls them using an C<ev_watcher *> internally.

=item C<sig_atomic_t volatile> must be thread-atomic as well

The type C<sig_atomic_t volatile> (or whatever is defined as
C<EV_ATOMIC_T>) must be atomic with respect to accesses from different
threads. This is not part of the specification for C<sig_atomic_t>, but is
believed to be sufficiently portable.

=item C<sigprocmask> must work in a threaded environment
