…
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident. Support for C<fork ()> is very bad (you
might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

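Testing for and selecting this backend could be sketched as follows (this
is only an illustration; error handling is reduced to falling back to the
default backend choice):

```c
#include <ev.h>

struct ev_loop *loop = 0;

/* only try kqueue if this libev build/kernel advertises it */
if (ev_supported_backends () & EVBACKEND_KQUEUE)
  loop = ev_loop_new (EVBACKEND_KQUEUE);

if (!loop) /* kqueue unavailable or refused - let libev choose */
  loop = ev_loop_new (EVFLAG_AUTO);
```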
While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
…
transition between them will be described in more detail - and while these
rules might look complicated, they usually do "the right thing".

=over 4

=item initialised

Before a watcher can be registered with the event loop it has to be
initialised. This can be done with a call to C<ev_TYPE_init>, or calls to
C<ev_init> followed by the watcher-specific C<ev_TYPE_set> function.

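For an C<ev_io> watcher, for example, the two forms are equivalent (a
sketch - C<my_cb> and the use of C<STDIN_FILENO> are just placeholders):

```c
#include <ev.h>

static void my_cb (EV_P_ ev_io *w, int revents) { /* ... */ }

void example (void)
{
  ev_io w;

  /* one-step initialisation */
  ev_io_init (&w, my_cb, /* fd */ 0, EV_READ);

  /* ...or generic init followed by the type-specific set */
  ev_init (&w, my_cb);
  ev_io_set (&w, /* fd */ 0, EV_READ);
}
```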
…

=head2 C<ev_stat> - did the file attributes just change?

This watches a file system path for attribute changes. That is, it calls
C<stat> on that path in regular intervals (or when the OS says it changed)
and sees if it changed compared to the last time, invoking the callback
if it did. Starting the watcher C<stat>'s the file, so only changes that
happen after the watcher has been started will be reported.

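A minimal sketch of such a watcher (the path and messages are illustrative
only; an interval of C<0.> lets libev pick a suitable default):

```c
#include <stdio.h>
#include <ev.h>

static void
passwd_cb (EV_P_ ev_stat *w, int revents)
{
  /* w->attr holds the most recent stat result */
  if (w->attr.st_nlink)
    printf ("%s changed\n", w->path);
  else
    printf ("%s cannot be stat'ed\n", w->path);
}

int main (void)
{
  struct ev_loop *loop = EV_DEFAULT;
  ev_stat passwd;

  ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
  ev_stat_start (loop, &passwd);

  ev_run (loop, 0);
}
```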
The path does not need to exist: changing from "path exists" to "path does
not exist" is a status change like any other. The condition "path does not
exist" (or more correctly "path cannot be stat'ed") is signified by the
C<st_nlink> field being zero (which is otherwise always forced to be at
…

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)

=item ev_embed_set (ev_embed *, struct ev_loop *embedded_loop)

Configures the watcher to embed the given loop, which must be
embeddable. If the callback is C<0>, then C<ev_embed_sweep> will be
invoked automatically, otherwise it is the responsibility of the callback
to invoke it (it will continue to be called until the sweep has been done,
…
already been invoked.

A common way around all these issues is to make sure that
C<start_new_request> I<always> returns before the callback is invoked. If
C<start_new_request> immediately knows the result, it can artificially
delay invoking the callback by using a C<prepare> or C<idle> watcher for
example, or more sneakily, by reusing an existing (stopped) watcher and
pushing it into the pending queue:

   ev_set_cb (watcher, callback);
   ev_feed_event (EV_A_ watcher, 0);

This way, C<start_new_request> can safely return before the callback is
…

This brings the problem of exiting - a callback might want to finish the
main C<ev_run> call, but not the nested one (e.g. user clicked "Quit", but
a modal "Are you sure?" dialog is still waiting), or just the nested one
and not the main one (e.g. user clicked "Ok" in a modal dialog), or some
other combination: In these cases, a simple C<ev_break> will not work.

The solution is to maintain a "break this loop" variable for each C<ev_run>
invocation, and use a loop around C<ev_run> until the condition is
triggered, using C<EVRUN_ONCE>:

…
different cpus (or different cpu cores). This reduces dependencies
and makes libev faster.

=item EV_NO_THREADS

If defined to be C<1>, libev will assume that it will never be called from
different threads (that includes signal handlers), which is a stronger
assumption than C<EV_NO_SMP>, above. This reduces dependencies and makes
libev faster.

=item EV_ATOMIC_T

Libev requires an integer type (suitable for storing C<0> or C<1>) whose
access is atomic with respect to other threads or signal contexts. No
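In a strictly single-threaded program that embeds libev via the
single-file distribution, this would typically be a small configuration
fragment defined before inclusion - a sketch:

```c
/* single-threaded program: drop libev's locking and memory fences */
#define EV_NO_THREADS 1
#include "ev.c"
```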