…

more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback. This is useful, for example, when you want to invoke the
actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as the new
callback.
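
As a minimal illustrative sketch (not from the libev distribution itself),
a replacement callback that simply reports how many watchers are pending
and then invokes them could look like this - the name C<my_invoke_pending>
and the C<fprintf> call are made up for the example:

   static void
   my_invoke_pending (EV_P)
   {
     // purely illustrative: report the pending watchers, then run them
     fprintf (stderr, "invoking %d pending watchers\n", ev_pending_count (EV_A));
     ev_invoke_pending (EV_A);
   }

   ev_set_invoke_pending_cb (EV_A_ my_invoke_pending);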

=item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_loop> can run for an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the loop via
C<ev_unloop> and C<ev_async_send>, another way is to set these I<release>
and I<acquire> callbacks on the loop.

When set, C<release> will be called just before the thread is suspended
waiting for new events, and C<acquire> is called just afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of
C<release> and C<acquire> (that's their only purpose after all), no
modifications done will affect the event loop, i.e. adding watchers will
have no effect on the set of file descriptors being watched, or the time
waited. Use an C<ev_async> watcher to wake up C<ev_loop> when you want it
to take note of any changes you made.

In theory, threads executing C<ev_loop> will be async-cancel safe between
invocations of C<release> and C<acquire>.

See also the locking example in the C<THREADS> section later in this
document.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

Set and retrieve a single C<void *> associated with a loop. When
C<ev_set_userdata> has never been called, then C<ev_userdata> returns
C<0>.

These two functions can be used to associate arbitrary data with a loop,
and are intended solely for the C<invoke_pending_cb>, C<release> and
C<acquire> callbacks described above, but of course can be (ab-)used for
any other purpose as well.
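
For example (a sketch only - C<my_data> is a made-up application
structure, not part of libev):

   static struct my_data { int connection_count; } counters;

   ev_set_userdata (EV_A_ &counters);

   // later, e.g. inside a watcher callback:
   struct my_data *d = ev_userdata (EV_A);
   ++d->connection_count;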

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
through all internal structures and checks them for validity. If anything

…

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>.
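
For example (a sketch - C<do_lengthy_work> and C<w_timeout> are made up
for illustration), after blocking outside of libev for a long time you
could refresh the loop time before starting a relative timer:

   do_lengthy_work ();                 // long operation outside the loop
   ev_now_update (EV_A);               // resync ev_now () with the current time
   ev_timer_start (EV_A_ &w_timeout);  // relative timeout now measured from the fresh time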

=head3 The special problems of suspended animation

When you leave the server world it is quite customary to hit machines that
can suspend/hibernate - what happens to the clocks during such a suspend?

Some quick tests made with Linux 2.6.28 indicate that a suspend freezes
all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>) continue
to run until the system is suspended, but they will not advance while the
system is suspended. That means, on resume, it will be as if the program
was frozen for a few seconds, but the suspend time will not be counted
towards C<ev_timer> when a monotonic clock source is used. The real time
clock advances as expected, but if it is used as the sole clock source,
then a long suspend would be detected as a time jump by libev, and timers
would be adjusted accordingly.

I would not be surprised to see different behaviour between operating
systems, OS versions or even different hardware.

The other form of suspend (job control, or sending a SIGSTOP) will see a
time jump in the monotonic clocks and the realtime clock. If the program
is suspended for a very long time, and monotonic clock sources are in use,
then you can expect C<ev_timer>s to expire as the full suspension time
will be counted towards the timers. When no monotonic clock source is in
use, then libev will again assume a time jump and adjust accordingly.

It might be beneficial for this latter case to call C<ev_suspend>
and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
deterministic behaviour in this case (you can do nothing against
C<SIGSTOP>).
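
A minimal sketch of that idea (the callback name is made up, and it
assumes an C<ev_signal> watcher has been started for C<SIGTSTP>):

   static void
   sigtstp_cb (EV_P_ ev_signal *w, int revents)
   {
     ev_suspend (EV_A);           // stop charging wall-clock time to timers
     kill (getpid (), SIGSTOP);   // really stop - SIGSTOP cannot be caught
     ev_resume (EV_A);            // execution continues here after SIGCONT
   }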

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

…

defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed (but with the full API), define this symbol to C<1>. Currently this
is used to override some inlining decisions and saves roughly 30% code
size on amd64. It also selects a much smaller 2-heap for timer management
over the default 4-heap.

You can save even more by disabling watcher types you do not need
and setting C<EV_MAXPRI> == C<EV_MINPRI>. Also, disabling C<assert>
(C<-DNDEBUG>) will usually reduce code size a lot.

Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.
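
As an illustrative sketch (the chosen values are only an example, not a
recommendation), a size-conscious program that compiles libev by including
C<ev.c> directly might use a configuration along these lines:

   /* example settings - see the individual symbol descriptions */
   #define EV_MINIMAL 1
   #define EV_MAXPRI 0        /* together with EV_MINPRI: no priorities */
   #define EV_MINPRI 0
   #define EV_STAT_ENABLE 0   /* drop a watcher type this program does not use */
   #define NDEBUG             /* disable asserts inside ev.c */
   #include "ev.c"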

=item EV_PID_HASHSIZE

C<ev_child> watchers use a small hash table to distribute workload by
pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more

…

default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal.

=back

=head4 THREAD LOCKING EXAMPLE

Here is a fictitious example of how to run an event loop in a different
thread from the one where callbacks are invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (it is well suited to many high-level
languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
     pthread_mutex_t lock; /* global loop lock */
     ev_async async_w;
     pthread_t tid;
     pthread_cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
      // for simplicity, we use a static userdata struct.
      static userdata u;

      ev_async_init (&u.async_w, async_cb);
      ev_async_start (EV_A_ &u.async_w);

      pthread_mutex_init (&u.lock, 0);
      pthread_cond_init (&u.invoke_cv, 0);

      // now associate this with the loop
      ev_set_userdata (EV_A_ &u);
      ev_set_invoke_pending_cb (EV_A_ l_invoke);
      ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

      // then create the thread running ev_loop
      pthread_create (&u.tid, 0, l_run, EV_A);
   }

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
      // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock and lock the
mutex protecting the loop data, respectively:

   static void
   l_release (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_lock (&u->lock);
   }

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_loop>:

   void *
   l_run (void *thr_arg)
   {
     struct ev_loop *loop = (struct ev_loop *)thr_arg;

     l_acquire (EV_A);
     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
     ev_loop (EV_A_ 0);
     l_release (EV_A);

     return 0;
   }

Instead of invoking all pending watchers, the C<l_invoke> callback signals
the main thread via some unspecified mechanism (signals? pipe writes?
C<Async::Interrupt>?) and then waits until all pending watchers have been
called (in a while loop because a) spurious wakeups are possible and b)
skipping inter-thread communication when there are no pending watchers is
very beneficial):

   static void
   l_invoke (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     while (ev_pending_count (EV_A))
       {
         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
         pthread_cond_wait (&u->invoke_cv, &u->lock);
       }
   }

Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     pthread_mutex_lock (&u->lock);
     ev_invoke_pending (EV_A);
     pthread_cond_signal (&u->invoke_cv);
     pthread_mutex_unlock (&u->lock);
   }

Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel will have no knowledge
about the newly added timer. By waking up the loop it will pick up any new
watchers in the next event loop iteration.

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines, and switch freely between both coroutines running
the loop, as long as you don't confuse yourself). The only exception is
that you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow for coroutine switches as
they do not call any callbacks.