…
employing an additional generation counter and comparing that against the
events to filter out spurious ones, recreating the set when required. Last
but not least, it also refuses to work with some file descriptors which
work perfectly fine with C<select> (files, many character devices...).

Epoll is truly the train wreck analog among event poll mechanisms,
a frankenpoll, cobbled together in a hurry, no thought to design or
interaction with others.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such
incident (because the same I<file descriptor> could point to a different
I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed
…
On the positive side, this backend actually performed fully to
specification in all tests and is fully embeddable, which is a rare feat
among the OS-specific backends (I vastly prefer correctness over speed
hacks).

On the negative side, the interface is I<bizarre> - so bizarre that
even Sun itself gets it wrong in its code examples: the event polling
function sometimes returns events to the caller even though an error
occurred, but with no indication whether it has done so or not (yes, it's
even documented that way) - deadly for edge-triggered interfaces where
you absolutely have to know whether an event occurred or not because you
have to re-arm the watcher.

Fortunately libev seems to be able to work around these idiocies.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.
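
If you want to try this backend regardless, you can request it explicitly
when creating a loop - a sketch (the fallback backend chosen here is just
an example):

   struct ev_loop *loop = ev_loop_new (EVBACKEND_PORT);

   if (!loop) // backend unavailable or failed to initialise
     loop = ev_loop_new (EVBACKEND_POLL);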

=item C<EVBACKEND_ALL>
…
In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

If you cannot use non-blocking mode, then force the use of a
known-to-be-good backend (at the time of this writing, this includes only
C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
descriptors for which non-blocking operation makes no sense (such as
files) - libev doesn't guarantee any specific behaviour in that case.

Another thing you have to watch out for is that it is quite easy to
receive "spurious" readiness notifications, that is, your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
because there is no data. It is very easy to get into this situation even
with a relatively standard program structure. Thus it is best to always
use non-blocking I/O: An extra C<read>(2) returning C<EAGAIN> is far
preferable to a program hanging until some data arrives.

If you cannot run the fd in non-blocking mode (for example you should
not play around with an Xlib connection), then you have to separately
re-test whether a file descriptor is really ready with a known-to-be-good
interface such as poll (fortunately in the case of Xlib, it already does
this on its own, so it's quite safe to use). Some people additionally
use C<SIGALRM> and an interval timer, just to be sure you won't block
indefinitely.

But really, best use non-blocking mode.
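
As a concrete sketch of this advice - C<set_nonblocking> is a helper made
up for this example, not part of the libev API:

   #include <errno.h>
   #include <fcntl.h>
   #include <unistd.h>

   static int
   set_nonblocking (int fd)
   {
     int flags = fcntl (fd, F_GETFL, 0);
     return flags < 0 ? -1 : fcntl (fd, F_SETFL, flags | O_NONBLOCK);
   }

   static void
   read_cb (EV_P_ ev_io *w, int revents)
   {
     char buf [4096];
     ssize_t len = read (w->fd, buf, sizeof buf);

     if (len > 0)
       ; // process buf [0 .. len - 1]
     else if (len < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
       ; // spurious readiness - harmless, simply wait for the next event
     else
       ev_io_stop (EV_A_ w); // EOF or a real error
   }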
…

There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or to resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
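
If you need to resort to these backends, you can request them when
creating the loop - a minimal sketch (backend flags can simply be or'ed
into the flags argument):

   struct ev_loop *loop = ev_default_loop (EVBACKEND_SELECT | EVBACKEND_POLL);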
=head3 The special problem of files

Many people try to use C<select> (or libev) on file descriptors
representing files, and expect it to become ready when their program
doesn't block on disk accesses (which can take a long time on their own).

However, this cannot ever work in the "expected" way - you get a readiness
notification as soon as the kernel knows whether and how much data is
there, and in the case of open files, that's always the case, so you
always get a readiness notification instantly, and your read (or possibly
write) will still block on the disk I/O.

Another way to view it is that in the case of sockets, pipes, character
devices and so on, there is another party (the sender) that delivers data
on its own, but in the case of files, there is no such thing: the disk
will not send data on its own, simply because it doesn't know what you
wish to read - you would first have to request some data.

Since files are typically not-so-well supported by advanced notification
mechanisms, libev tries hard to emulate POSIX behaviour with respect
to files, even though you should not use it. The reason for this is
convenience: sometimes you want to watch STDIN or STDOUT, which is
usually a tty, often a pipe, but also sometimes files or special devices
(for example, C<epoll> on Linux works with F</dev/random> but not with
F</dev/urandom>), and even though the file might better be served with
asynchronous I/O instead of with non-blocking I/O, it is still useful when
it "just works" instead of freezing.

So avoid file descriptors pointing to files when you know it (e.g. use
libeio), but use them when it is convenient, e.g. for STDIN/STDOUT, or
when you rarely read from a file instead of from a socket, and want to
reuse the same code path.
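
For the convenient cases, the usual I/O watcher code works unchanged - a
sketch, with C<stdin_cb> standing in for whatever callback you use:

   ev_io stdin_w;
   ev_io_init (&stdin_w, stdin_cb, STDIN_FILENO, EV_READ);
   ev_io_start (EV_A_ &stdin_w);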
|
|
=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
useless behaviour. Libev fully supports fork, but needs to be told about
it in the child if you want to continue using it there.

To support fork in your child processes, you have to call C<ev_loop_fork
()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
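
A minimal sketch of the first option (the code around the C<fork> call is
only for illustration):

   pid_t pid = fork ();

   if (pid == 0)
     {
       // child: let libev reinitialise its kernel state (backend
       // file descriptors etc.) before the loop is used again
       ev_loop_fork (EV_DEFAULT);

       // ... start watchers and call ev_run as usual
     }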
|
|
=head3 The special problem of SIGPIPE

While not really specific to libev, it is easy to forget about C<SIGPIPE>:
when writing to a pipe whose other end has been closed, your program gets
…

This section explains some common idioms that are not immediately
obvious. Note that examples are sprinkled over the whole manual, and this
section only contains stuff that wouldn't fit anywhere else.

=head2 MODEL/NESTED EVENT LOOP INVOCATIONS AND EXIT CONDITIONS

Often (especially in GUI toolkits) there are places where you have
I<modal> interaction, which is most easily implemented by recursively
invoking C<ev_run>.

…
   // exit main program, after modal loop is finished
   exit_main_loop = 1;

   // exit both
   exit_main_loop = exit_nested_loop = 1;
|
|
=head2 THREAD LOCKING EXAMPLE

Here is a fictitious example of how to run an event loop in a different
thread from the one where callbacks are being invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (which is suited for many high-level
languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
     pthread_mutex_t lock; /* global loop lock */
     ev_async async_w;
     pthread_t tid;
     pthread_cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
     // for simplicity, we use a static userdata struct.
     static userdata u;

     ev_async_init (&u.async_w, async_cb);
     ev_async_start (EV_A_ &u.async_w);

     pthread_mutex_init (&u.lock, 0);
     pthread_cond_init (&u.invoke_cv, 0);

     // now associate this with the loop
     ev_set_userdata (EV_A_ &u);
     ev_set_invoke_pending_cb (EV_A_ l_invoke);
     ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

     // then create the thread running ev_run
     pthread_create (&u.tid, 0, l_run, EV_A);
   }

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
     // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
protecting the loop data, respectively.

   static void
   l_release (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_lock (&u->lock);
   }

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_run>:

   void *
   l_run (void *thr_arg)
   {
     struct ev_loop *loop = (struct ev_loop *)thr_arg;

     l_acquire (EV_A);
     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
     ev_run (EV_A_ 0);
     l_release (EV_A);

     return 0;
   }

Instead of invoking all pending watchers, the C<l_invoke> callback will
signal the main thread via some unspecified mechanism (signals? pipe
writes? C<Async::Interrupt>?) and then wait until all pending watchers
have been called (in a while loop because a) spurious wakeups are possible
and b) skipping inter-thread-communication when there are no pending
watchers is very beneficial):

   static void
   l_invoke (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     while (ev_pending_count (EV_A))
       {
         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
         pthread_cond_wait (&u->invoke_cv, &u->lock);
       }
   }

Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     pthread_mutex_lock (&u->lock);
     ev_invoke_pending (EV_A);
     pthread_cond_signal (&u->invoke_cv);
     pthread_mutex_unlock (&u->lock);
   }

Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel will have no knowledge
about the newly added timer. By waking up the loop it will pick up any new
watchers in the next event loop iteration.


=head1 LIBEVENT EMULATION
…
default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal - see the
sketch below.

=back

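As a minimal sketch of that last idiom - C<user_loop>, C<user_async> and
C<sig_cb> are made-up names, and the async watcher is assumed to have
been started on the loop that wants the signal:

   static struct ev_loop *user_loop; // the loop interested in the signal
   static ev_async user_async;      // started on user_loop

   static void
   sig_cb (EV_P_ ev_signal *w, int revents)
   {
     // runs in the default loop; ev_async_send is safe to call
     // from other threads, so it can cross the loop boundary
     ev_async_send (user_loop, &user_async);
   }

   // in the default loop: catch SIGINT and forward it
   ev_signal sig_w;
   ev_signal_init (&sig_w, sig_cb, SIGINT);
   ev_signal_start (EV_DEFAULT_ &sig_w);
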
See also L<THREAD LOCKING EXAMPLE>.

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different