…

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.
|
|
On the positive side, ignoring the spurious readiness notifications, this
backend actually performed to specification in all tests and is fully
embeddable, which is a rare feat among the OS-specific backends.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as listed here). If none are
specified, all backends in C<ev_recommended_backends ()> will be tried.
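For instance, restricting libev to a subset of backends is a matter of
or'ing their flags together. A minimal sketch (which backends are
actually available depends on the platform libev was compiled for):

   #include <ev.h>
   #include <stdio.h>

   int
   main (void)
   {
     /* try epoll, then poll, but never any other backend */
     struct ev_loop *loop = ev_default_loop (EVBACKEND_EPOLL | EVBACKEND_POLL);

     if (!loop)
       {
         fputs ("none of the requested backends is available\n", stderr);
         return 1;
       }

     printf ("backend in use: 0x%x\n", ev_backend (loop));
     return 0;
   }
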
|
|

The most typical usage is like this:

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
…

usually a better approach for this kind of thing.

Here are the gory details of what C<ev_loop> does:

   - Before the first iteration, call any pending watchers.
   * If EVFLAG_FORKCHECK was used, check for a fork.
   - If a fork was detected, queue and call all fork watchers.
   - Queue and call all prepare watchers.
   - If we have been forked, recreate the kernel state.
   - Update the kernel state with all outstanding changes.
   - Update the "event loop time".
   - Calculate for how long to sleep or block, if at all
     (active idle watchers, EVLOOP_NONBLOCK or not having
     any active watchers at all will result in not sleeping).
   - Sleep if the I/O and timer collect interval say so.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" and do time jump handling.
   - Queue all outstanding timers.
   - Queue all outstanding periodics.
   - If no events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
   - If ev_unloop has been called, or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
     were used, or there are no active watchers, return, otherwise
     continue with step *.

Example: Queue some jobs and then loop until no events are outstanding
anymore.

   ... queue jobs here, make sure they register event watchers as long
   ... as they still have work to do (even an idle watcher will do..)
   ev_loop (my_loop, 0);
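
The pseudo-code above can be made concrete with an idle watcher that does
one unit of work per iteration and stops itself when the work runs out, at
which point C<ev_loop> returns by itself (the job counter here is purely
illustrative):

   #include <ev.h>
   #include <stdio.h>

   static int jobs_left = 3;

   static void
   job_cb (EV_P_ ev_idle *w, int revents)
   {
     printf ("job step, %d left\n", --jobs_left);

     if (!jobs_left)
       ev_idle_stop (EV_A_ w); /* last active watcher gone -> loop exits */
   }

   int
   main (void)
   {
     struct ev_loop *loop = ev_default_loop (0);
     ev_idle job;

     ev_idle_init (&job, job_cb);
     ev_idle_start (loop, &job);

     ev_loop (loop, 0); /* returns once no watchers are active */
     return 0;
   }
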
…

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.
|
|

This "unloop state" will be cleared when entering C<ev_loop> again.

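A common pattern is to call C<ev_unloop> from within a watcher callback;
a minimal sketch using a one-shot timer (the callback name and the 1.5
second delay are purely illustrative):

   #include <ev.h>

   static void
   timeout_cb (EV_P_ ev_timer *w, int revents)
   {
     /* make the innermost ev_loop call return after this iteration */
     ev_unloop (EV_A_ EVUNLOOP_ONE);
   }

   int
   main (void)
   {
     struct ev_loop *loop = ev_default_loop (0);
     ev_timer timeout;

     ev_timer_init (&timeout, timeout_cb, 1.5, 0.);
     ev_timer_start (loop, &timeout);

     ev_loop (loop, 0); /* returns when timeout_cb fires */
     return 0;
   }
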
=item ev_ref (loop)

=item ev_unref (loop)

…

returning, ev_unref() after starting, and ev_ref() before stopping it. For
example, libev itself uses this for its internal signal pipe: It is not
visible to the libev user and should not keep C<ev_loop> from exiting if
no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>
(but only if the watcher wasn't active before, or was active before,
respectively).

Example: Create a signal watcher, but keep it from keeping C<ev_loop>
running when nothing else is active.

   struct ev_signal exitsig;