…

F<README.embed> in the libev distribution. If libev was configured without
support for multiple event loops, then all functions taking an initial
argument of name C<loop> (which is always of type C<struct ev_loop *>)
will not have this argument.

=head1 TIME REPRESENTATION

Libev represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970, details are complicated, don't ask). This type is
called C<ev_tstamp>, which is what you should use too. It usually aliases
to the double type in C.

=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the
library in any way.

=over 4

=item ev_tstamp ev_time ()

Returns the current time as libev would use it. Please note that the
C<ev_now> function is usually faster and also often returns the timestamp
you actually want to know.

=item int ev_version_major ()

=item int ev_version_minor ()

…

Usually, it's a good idea to terminate if the major versions mismatch,
as this indicates an incompatible change. Minor versions are usually
compatible with older versions, so a larger minor version alone is usually
not a problem.
|
|
=item unsigned int ev_supported_backends ()

Return the set of all backends (i.e. their corresponding C<EVBACKEND_*>
value) compiled into this binary of libev (independent of their
availability on the system you are running on). See C<ev_default_loop> for
a description of the set values.

=item unsigned int ev_recommended_backends ()

Return the set of all backends compiled into this binary of libev and also
recommended for this platform. This set is often smaller than the one
returned by C<ev_supported_backends>, as for example kqueue is broken on
most BSDs and will not be autodetected unless you explicitly request it
(assuming you know what you are doing). This is the set of backends that
C<EVFLAG_AUTO> will probe for.

=item ev_set_allocator (void *(*cb)(void *ptr, long size))

Sets the allocation function to use (the prototype is similar to the
realloc C function, the semantics are identical). It is used to allocate
…

An event loop is described by a C<struct ev_loop *>. The library knows two
types of such loops, the I<default> loop, which supports signals and child
events, and dynamically created loops which do not.

If you use threads, a common model is to run the default event loop
in your main thread (or in a separate thread) and for each thread you
create, you also create another event loop. Libev itself does no locking
whatsoever, so if you mix calls to the same event loop in different
threads, make sure you lock (this is usually a bad idea, though, even if
done correctly, because it's hideous and inefficient).

…

=item struct ev_loop *ev_default_loop (unsigned int flags)

This will initialise the default event loop if it hasn't been initialised
yet and return it. If the default loop could not be initialised, returns
false. If it already was initialised it simply returns it (and ignores the
flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

It supports the following flags:

=over 4

…

C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of fds.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a little slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds) (where total_fds is the total number of fds, or the highest
fd for select), epoll scales either O(1) or O(active_fds).

While stopping and starting an I/O watcher in the same iteration will
result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, dup()ed file descriptors might not work very
well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "autodetected" unless
you explicitly specify the flags (i.e. you don't use C<EVFLAG_AUTO>).

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While starting and stopping an I/O watcher does not cause an
extra syscall as with epoll, it still adds up to four event changes per
incident, so it's best to avoid that.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be).

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris ports can result in a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

=back

If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
specified, most compiled-in backends will be tried, usually in reverse
order of their flag values :)

=item struct ev_loop *ev_loop_new (unsigned int flags)

Similar to C<ev_default_loop>, but always creates a new event loop that is
always distinct from the default loop. Unlike the default loop, it cannot
…

This function reinitialises the kernel state for backends that have
one. Despite the name, you can call it anytime, but it makes most sense
after forking, in either the parent or child process (or both, but that
again makes little sense).

You I<must> call this function in the child process after forking if and
only if you want to use the event library in both processes. If you just
fork+exec, you don't have to call it.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function will fit in
quite nicely into a call to C<pthread_atfork>:

   pthread_atfork (0, 0, ev_default_fork);

At the moment, C<EVBACKEND_SELECT> and C<EVBACKEND_POLL> are safe to use
without calling this function, so if you force one of those backends you
do not need to care.

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after fork, and how you do this is entirely your own problem.

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.

=item ev_tstamp ev_now (loop)

Returns the current "event loop time", which is the time the event loop
…

This flags value could be used to implement alternative looping
constructs, but the C<prepare> and C<check> watchers provide a better and
more generic mechanism.

Here are the gory details of what ev_loop does:

   1. If there are no active watchers (reference count is zero), return.
   2. Queue and immediately call all prepare watchers.
   3. If we have been forked, recreate the kernel state.
   4. Update the kernel state with all outstanding changes.
   5. Update the "event loop time".
   6. Calculate for how long to block.
   7. Block the process, waiting for events.
   8. Update the "event loop time" and do time jump handling.
   9. Queue all outstanding timers.
  10. Queue all outstanding periodics.
  11. If no events are pending now, queue all idle watchers.
  12. Queue all check watchers.
  13. Call all queued watchers in reverse order (i.e. check watchers first).
  14. If ev_unloop has been called or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
      was used, return, otherwise continue with step #1.

=item ev_unloop (loop, how)

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.

=item ev_ref (loop)

=item ev_unref (loop)

…
*) >>), and you can stop watching for events at any time by calling the
corresponding stop function (C<< ev_<type>_stop (loop, watcher *) >>).

As long as your watcher is active (has been started but not stopped) you
must not touch the values stored in it. Most specifically you must never
reinitialise it or call its set macro.

You can check whether an event is active by calling the C<ev_is_active
(watcher *)> macro. To see whether an event is outstanding (but the
callback for it has not been called yet) you can use the C<ev_is_pending
(watcher *)> macro.

Each and every callback receives the event loop pointer as first, the
registered watcher structure as second, and a bitset of received events as
third argument.

The received events usually include a single bit per event type received
(you can receive multiple events at the same time). The possible bit masks
are:

=over 4

…

=back

=head2 ASSOCIATING CUSTOM DATA WITH A WATCHER

Each watcher has, by default, a member C<void *data> that you can change
and read at any time, libev will completely ignore it. This can be used
to associate arbitrary data with your watcher. If you need more data and
don't want to allocate memory and store a pointer to it in that data
member, you can also "subclass" the watcher type and provide your own
data:

…

=head1 WATCHER TYPES

This section describes each watcher in detail, but will not repeat
information given in the last section.

=head2 C<ev_io> - is this file descriptor readable or writable

I/O watchers check whether a file descriptor is readable or writable
in each iteration of the event loop (This behaviour is called
level-triggering because you keep receiving events as long as the
condition persists. Remember you can stop the watcher if you don't want to
act on the event and neither want to receive future events).

In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

You have to be careful with dup'ed file descriptors, though. Some backends
(the Linux epoll backend is a notable example) cannot handle dup'ed file
descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket etc. description (that is, they share
the same underlying "file open").

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).

=over 4

=item ev_io_init (ev_io *, callback, int fd, int events)

…

Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
receive events for and C<events> is either C<EV_READ>, C<EV_WRITE> or
C<EV_READ | EV_WRITE> to receive the given events.

Please note that most of the more scalable backend mechanisms (for example
epoll and Solaris ports) can result in spurious readiness notifications
for file descriptors, so you practically need to use non-blocking I/O (and
treat callback invocation as hint only), or retest separately with a safe
interface before doing I/O (XLib can do this), or force the use of either
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>, which don't suffer from this
problem. Also note that it is quite easy to have your callback invoked
when the readiness condition is no longer valid even when employing
typical ways of handling events, so it's a good idea to use non-blocking
I/O unconditionally.

=back

=head2 C<ev_timer> - relative and optionally recurring timeouts

Timer watchers are simple relative timers that generate an event after a
given time, and optionally repeating in regular intervals after that.

The timers are based on real time, that is, if you register an event that
times out after an hour and you reset your system clock to last year's
time, it will still time out after (roughly) an hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the
timeout on the current time, use something like this to adjust for this:

   ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);

The callback is guaranteed to be invoked only when its timeout has passed,
but if multiple timers become ready during the same loop iteration then
order of execution is undefined.

=over 4

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

…
later, again, and again, until stopped manually.

The timer itself will do a best-effort at avoiding drift, that is, if you
configure a timer to trigger every 10 seconds, then it will trigger at
exactly 10 second intervals. If, however, your program cannot keep up with
the timer (because it takes longer than those 10 seconds to do stuff) the
timer will not fire more than once per event loop iteration.
478 | |
593 | |
479 | =item ev_timer_again (loop) |
594 | =item ev_timer_again (loop) |
480 | |
595 | |
481 | This will act as if the timer timed out and restart it again if it is |
596 | This will act as if the timer timed out and restart it again if it is |
… | |
… | |
495 | state where you do not expect data to travel on the socket, you can stop |
610 | state where you do not expect data to travel on the socket, you can stop |
496 | the timer, and again will automatically restart it if need be. |
611 | the timer, and again will automatically restart it if need be. |
497 | |
612 | |
498 | =back |
613 | =back |
499 | |
614 | |
500 | =head2 C<ev_periodic> - to cron or not to cron it |
615 | =head2 C<ev_periodic> - to cron or not to cron |
501 | |
616 | |
502 | Periodic watchers are also timers of a kind, but they are very versatile |
617 | Periodic watchers are also timers of a kind, but they are very versatile |
503 | (and unfortunately a bit complex). |
618 | (and unfortunately a bit complex). |
504 | |
619 | |
505 | Unlike C<ev_timer>'s, they are not based on real time (or relative time) |
620 | Unlike C<ev_timer>'s, they are not based on real time (or relative time) |
… | |
… | |
512 | again). |
627 | again). |
513 | |
628 | |
514 | They can also be used to implement vastly more complex timers, such as |
629 | They can also be used to implement vastly more complex timers, such as |
515 | triggering an event on eahc midnight, local time. |
630 | triggering an event on eahc midnight, local time. |
516 | |
631 | |
|
|
632 | As with timers, the callback is guarenteed to be invoked only when the |
|
|
633 | time (C<at>) has been passed, but if multiple periodic timers become ready |
|
|
634 | during the same loop iteration then order of execution is undefined. |
|
|
635 | |
517 | =over 4 |
636 | =over 4 |
518 | |
637 | |
519 | =item ev_periodic_init (ev_periodic *, callback, ev_tstamp at, ev_tstamp interval, reschedule_cb) |
638 | =item ev_periodic_init (ev_periodic *, callback, ev_tstamp at, ev_tstamp interval, reschedule_cb) |
520 | |
639 | |
521 | =item ev_periodic_set (ev_periodic *, ev_tstamp after, ev_tstamp repeat, reschedule_cb) |
640 | =item ev_periodic_set (ev_periodic *, ev_tstamp after, ev_tstamp repeat, reschedule_cb) |
522 | |
641 | |
523 | Lots of arguments, lets sort it out... There are basically three modes of |
642 | Lots of arguments, lets sort it out... There are basically three modes of |
524 | operation, and we will explain them from simplest to complex: |
643 | operation, and we will explain them from simplest to complex: |
525 | |
|
|
526 | |
644 | |
527 | =over 4 |
645 | =over 4 |
528 | |
646 | |
529 | =item * absolute timer (interval = reschedule_cb = 0) |
647 | =item * absolute timer (interval = reschedule_cb = 0) |
530 | |
648 | |
… | |
… | |

   ev_periodic_set (&periodic, 0., 3600., 0);

This doesn't mean there will always be 3600 seconds in between triggers,
but only that the callback will be called when the system time shows a
full hour (UTC), or more correctly, when the system time is evenly divisible
by 3600.

Another way to think about it (for the mathematically inclined) is that
C<ev_periodic> will try to run the callback in this mode at the next possible
time where C<time = at (mod interval)>, regardless of any time jumps.

…

In this mode the values for C<interval> and C<at> are both being
ignored. Instead, each time the periodic watcher gets scheduled, the
reschedule callback will be called with the watcher as first, and the
current time as second argument.

NOTE: I<This callback MUST NOT stop or destroy any periodic watcher,
ever, or make any event loop modifications>. If you need to stop it,
return C<now + 1e30> (or so, fudge fudge) and stop it afterwards (e.g. by
starting a prepare watcher).

Its prototype is C<ev_tstamp (*reschedule_cb)(struct ev_periodic *w,
ev_tstamp now)>, e.g.:

   static ev_tstamp my_rescheduler (struct ev_periodic *w, ev_tstamp now)
   {
     return now + 60.;
   }

It must return the next time to trigger, based on the passed time value
(that is, the lowest time value larger than the second argument). It
will usually be called just before the callback will be triggered, but
might be called at other times, too.

NOTE: I<< This callback must always return a time that is later than the
passed C<now> value >>. Not even C<now> itself will do; it I<must> be larger.

This can be used to create very complex timers, such as a timer that
triggers on each midnight, local time. To do this, you would calculate the
next midnight after C<now> and return the timestamp value for this. How
you do this is, again, up to you (but it is not trivial, which is the main
reason I omitted it as an example).

=back

=item ev_periodic_again (loop, ev_periodic *)

…
Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
normal event processing, like any other event.

You can configure as many watchers as you like per signal. Only when the
first watcher gets started will libev actually register a signal watcher
with the kernel (thus it coexists with your own signal handlers as long
as you don't register any with libev). Similarly, when the last signal
watcher for a signal is stopped libev will reset the signal handler to
SIG_DFL (regardless of what it was set to before).

…

=item ev_child_set (ev_child *, int pid)

Configures the watcher to wait for status changes of process C<pid> (or
I<any> process if C<pid> is specified as C<0>). The callback can look
at the C<rstatus> member of the C<ev_child> watcher structure to see
the status word (use the macros from C<sys/wait.h> and see your system's
C<waitpid> documentation). The C<rpid> member contains the pid of the
process causing the status change.

=back

=head2 C<ev_idle> - when you've got nothing better to do

Idle watchers trigger events when there are no other events pending
(prepare, check and other idle watchers do not count). That is, as long
as your process is busy handling sockets or timeouts (or even signals,
imagine) it will not be triggered. But when your process is idle all idle
watchers are being called again and again, once per event loop iteration -
until stopped, that is, or your process receives more events and becomes
busy.

The most noteworthy effect is that as long as any idle watchers are
active, the process will not block when waiting for new events.

Apart from keeping your process non-blocking (which is a useful
…
kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
believe me.

=back

=head2 C<ev_prepare> and C<ev_check> - customise your event loop

Prepare and check watchers are usually (but not always) used in tandem:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

Their main purpose is to integrate other event mechanisms into libev. This
could be used, for example, to track variable changes, implement your own
watchers, integrate net-snmp or a coroutine library and lots more.

This is done by examining in each prepare call which file descriptors need
to be watched by the other library, registering C<ev_io> watchers for
them and starting an C<ev_timer> watcher for any timeouts (many libraries
provide just this functionality). Then, in the check watcher you check for
any events that occurred (by checking the pending status of all watchers
and stopping them) and call back into the library. The I/O and timer
callbacks will never actually be called (but must be valid nevertheless,
because you never know, you know?).
683 | As another example, the perl Coro module uses these hooks to integrate |
811 | As another example, the Perl Coro module uses these hooks to integrate |
684 | coroutines into libev programs, by yielding to other active coroutines |
812 | coroutines into libev programs, by yielding to other active coroutines |
685 | during each prepare and only letting the process block if no coroutines |
813 | during each prepare and only letting the process block if no coroutines |
686 | are ready to run. |
814 | are ready to run (it's actually more complicated: it only runs coroutines |
|
|
815 | with priority higher than or equal to the event loop and one coroutine |
|
|
816 | of lower priority, but only once, using idle watchers to keep the event |
|
|
817 | loop from blocking if lower-priority coroutines are active, thus mapping |
|
|
818 | low-priority coroutines to idle/background tasks). |
687 | |
819 | |
688 | =over 4 |
820 | =over 4 |
689 | |
821 | |
690 | =item ev_prepare_init (ev_prepare *, callback) |
822 | =item ev_prepare_init (ev_prepare *, callback) |
691 | |
823 | |
692 | =item ev_check_init (ev_check *, callback) |
824 | =item ev_check_init (ev_check *, callback) |
693 | |
825 | |
694 | Initialises and configures the prepare or check watcher - they have no |
826 | Initialises and configures the prepare or check watcher - they have no |
695 | parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set> |
827 | parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set> |
696 | macros, but using them is utterly, utterly pointless. |
828 | macros, but using them is utterly, utterly and completely pointless. |
697 | |
829 | |
698 | =back |
830 | =back |
699 | |
831 | |
=head1 OTHER FUNCTIONS

There are some other functions of possible interest. Described. Here. Now.

=over 4

=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback)

This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
watchers. This is useful if you want to wait for a single event on an fd
or timeout without having to allocate/configure/start/stop/free one or
more watchers yourself.

If C<fd> is less than 0, then no I/O watcher will be started and C<events>
is ignored. Otherwise, an C<ev_io> watcher for the given C<fd> and
C<events> set will be created and started.

If C<timeout> is less than 0, then no timeout watcher will be
started. Otherwise an C<ev_timer> watcher with after = C<timeout> (and
repeat = 0) will be started. While C<0> is a valid timeout, it is of
dubious value.

The callback has the type C<void (*cb)(int revents, void *arg)> and gets
passed an C<revents> set like normal event callbacks (a combination of
C<EV_ERROR>, C<EV_READ>, C<EV_WRITE> or C<EV_TIMEOUT>) and the C<arg>
value passed to C<ev_once>:

   static void stdin_ready (int revents, void *arg)
   {
     if (revents & EV_TIMEOUT)
       /* doh, nothing entered */;
     else if (revents & EV_READ)
       /* stdin might have data for us, joy! */;
   }

   ev_once (STDIN_FILENO, EV_READ, 10., stdin_ready, 0);

=item ev_feed_event (loop, watcher, int events)

Feeds the given event set into the event loop, as if the specified event
had happened for the specified watcher (which must be a pointer to an
initialised but not necessarily started event watcher).

=item ev_feed_fd_event (loop, int fd, int revents)

Feed an event on the given fd, as if a file descriptor backend detected
the given events.

=item ev_feed_signal_event (loop, int signum)

Feed an event as if the given signal occurred (loop must be the default loop!).

=back

=head1 LIBEVENT EMULATION

Libev offers a compatibility emulation layer for libevent. It cannot
emulate the internals of libevent, so here are some usage hints:

=over 4

=item * Use it by including <event.h>, as usual.

=item * The following members are fully supported: ev_base, ev_callback,
ev_arg, ev_fd, ev_res, ev_events.

=item * Avoid using ev_flags and the EVLIST_* macros; while they are
maintained by libev, they do not work exactly the same way as in libevent
(consider them a private API).

=item * Priorities are not currently supported. Initialising priorities
will fail and all watchers will have the same priority, even though there
is an ev_pri field.

=item * Other members are not supported.

=item * The libev emulation is I<not> ABI compatible to libevent, you need
to use the libev header file and library.

=back

=head1 C++ SUPPORT

TBD.

=head1 AUTHOR

Marc Lehmann <libev@schmorp.de>.