…
called C<ev_tstamp>, which is what you should use too. It usually aliases
to the double type in C.
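Because timestamps are plain C<double>s, ordinary floating-point arithmetic works on them directly. A minimal standalone sketch (the typedef mirrors the usual definition; the helper and its values are invented for illustration):

```c
/* ev_tstamp usually aliases double: seconds, with sub-second precision */
typedef double ev_tstamp;

/* hypothetical helper, not part of libev: the absolute point in time
   'seconds' after 'now' */
static ev_tstamp
deadline (ev_tstamp now, ev_tstamp seconds)
{
  return now + seconds;
}
```

With real libev, a deadline "10.5 seconds from now" would then simply be C<deadline (ev_time (), 10.5)>.
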

=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the
library in any way.

=over 4

=item ev_tstamp ev_time ()

Returns the current time as libev would use it. Please note that the
C<ev_now> function is usually faster and also often returns the timestamp
you actually want to know.

=item int ev_version_major ()

=item int ev_version_minor ()

…

Usually, it's a good idea to terminate if the major versions mismatch,
as this indicates an incompatible change. Minor versions are usually
compatible to older versions, so a larger minor version alone is usually
not a problem.
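The check described above can be sketched as a pure function; the constants below are invented stand-ins (real code would compare C<ev_version_major ()> against the header's compile-time version macro):

```c
/* hypothetical stand-ins for the compile-time version constants */
#define COMPILED_MAJOR 1
#define COMPILED_MINOR 1

/* ok when the major versions match and the runtime minor is at least
   the one compiled against */
static int
version_compatible (int runtime_major, int runtime_minor)
{
  return runtime_major == COMPILED_MAJOR
      && runtime_minor >= COMPILED_MINOR;
}
```
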

=item unsigned int ev_supported_backends ()

Return the set of all backends (i.e. their corresponding C<EVBACKEND_*>
value) compiled into this binary of libev (independent of their
availability on the system you are running on). See C<ev_default_loop> for
a description of the set values.

=item unsigned int ev_recommended_backends ()

Return the set of all backends compiled into this binary of libev and also
recommended for this platform. This set is often smaller than the one
returned by C<ev_supported_backends>, as for example kqueue is broken on
most BSDs and will not be autodetected unless you explicitly request it
(assuming you know what you are doing). This is the set of backends that
C<EVFLAG_AUTO> will probe for.

=item ev_set_allocator (void *(*cb)(void *ptr, long size))

Sets the allocation function to use (the prototype is similar to the
realloc C function, the semantics are identical). It is used to allocate
…
=item struct ev_loop *ev_default_loop (unsigned int flags)

This will initialise the default event loop if it hasn't been initialised
yet and return it. If the default loop could not be initialised, returns
false. If it already was initialised it simply returns it (and ignores the
flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or EVFLAG_AUTO).

It supports the following flags:

=over 4

…
C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of fds.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a little slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales to either O(1) or O(active_fds).

While stopping and starting an I/O watcher in the same iteration will
result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, dup()ed file descriptors might not work very
well if you register events for both fds.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "autodetected" unless
you explicitly specify the flags (i.e. you don't use EVFLAG_AUTO).

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While starting and stopping an I/O watcher does not cause an
extra syscall as with epoll, it still adds up to four event changes per
incident, so it's best to avoid that.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be).

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

=back

If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
specified, most compiled-in backends will be tried, usually in reverse
order of their flag values :)

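Since the backend constants form a bit mask (their values are listed above), flag sets combine with plain bitwise operators. The enum restates the documented values so this sketch compiles without <ev.h>, from which real code would get them:

```c
/* the documented EVBACKEND_* values, restated for a standalone example;
   real code gets them from <ev.h> */
enum
{
  EVBACKEND_SELECT  = 1,
  EVBACKEND_POLL    = 2,
  EVBACKEND_EPOLL   = 4,
  EVBACKEND_KQUEUE  = 8,
  EVBACKEND_DEVPOLL = 16,
  EVBACKEND_PORT    = 32,
  EVBACKEND_ALL     = 63  /* all of the above ored together */
};

/* everything except the frequently-broken kqueue backend */
static unsigned int
all_but_kqueue (void)
{
  return EVBACKEND_ALL & ~EVBACKEND_KQUEUE;
}
```

Passing such a mask as the C<flags> argument restricts C<ev_default_loop> to the named backends.
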
=item struct ev_loop *ev_loop_new (unsigned int flags)

Similar to C<ev_default_loop>, but always creates a new event loop that is
always distinct from the default loop. Unlike the default loop, it cannot
…
This function reinitialises the kernel state for backends that have
one. Despite the name, you can call it anytime, but it makes most sense
after forking, in either the parent or child process (or both, but that
again makes little sense).

You I<must> call this function in the child process after forking if and
only if you want to use the event library in both processes. If you just
fork+exec, you don't have to call it.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, it fits quite nicely
into a call to C<pthread_atfork>:

    pthread_atfork (0, 0, ev_default_fork);

At the moment, C<EVBACKEND_SELECT> and C<EVBACKEND_POLL> are safe to use
without calling this function, so if you force one of those backends you
do not need to care.

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after fork, and how you do this is entirely your own problem.

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.

=item ev_tstamp ev_now (loop)

Returns the current "event loop time", which is the time the event loop
…

This flags value could be used to implement alternative looping
constructs, but the C<prepare> and C<check> watchers provide a better and
more generic mechanism.

Here are the gory details of what C<ev_loop> does:

   1. If there are no active watchers (reference count is zero), return.
   2. Queue and immediately call all prepare watchers.
   3. If we have been forked, recreate the kernel state.
   4. Update the kernel state with all outstanding changes.
   5. Update the "event loop time".
   6. Calculate for how long to block.
   7. Block the process, waiting for events.
   8. Update the "event loop time" and do time jump handling.
   9. Queue all outstanding timers.
  10. Queue all outstanding periodics.
  11. If no events are pending now, queue all idle watchers.
  12. Queue all check watchers.
  13. Call all queued watchers in reverse order (i.e. check watchers first).
  14. If ev_unloop has been called or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
      was used, return, otherwise continue with step #1.

=item ev_unloop (loop, how)

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.

=item ev_ref (loop)

=item ev_unref (loop)
…
*) >>), and you can stop watching for events at any time by calling the
corresponding stop function (C<< ev_<type>_stop (loop, watcher *) >>).

As long as your watcher is active (has been started but not stopped) you
must not touch the values stored in it. Most specifically you must never
reinitialise it or call its set macro.

You can check whether an event is active by calling the C<ev_is_active
(watcher *)> macro. To see whether an event is outstanding (but the
callback for it has not been called yet) you can use the C<ev_is_pending
(watcher *)> macro.
…
in each iteration of the event loop (This behaviour is called
level-triggering because you keep receiving events as long as the
condition persists. Remember you can stop the watcher if you don't want to
act on the event and neither want to receive future events).

In general you can register as many read and/or write event watchers per
fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).

You have to be careful with dup'ed file descriptors, though. Some backends
(the Linux epoll backend is a notable example) cannot handle dup'ed file
descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket etc. description (that is, they share
the same underlying "file open").

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).

=over 4

=item ev_io_init (ev_io *, callback, int fd, int events)

…

Timer watchers are simple relative timers that generate an event after a
given time, and optionally repeating in regular intervals after that.

The timers are based on real time, that is, if you register an event that
times out after an hour and you reset your system clock to last year's
time, it will still time out after (roughly) an hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the timeout
on the current time, use something like this to adjust for this:

    ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);

The callback is guaranteed to be invoked only when its timeout has passed,
but if multiple timers become ready during the same loop iteration then
order of execution is undefined.

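The adjustment shown above simply subtracts the processing delay, C<ev_time () - ev_now ()>, from the requested timeout. A standalone numeric sketch (made-up timestamps, no libev calls involved):

```c
/* 'loop_now' plays the role of ev_now () (when the loop woke up) and
   'real_now' that of ev_time () (the current time), both in seconds */
static double
adjusted_after (double after, double loop_now, double real_now)
{
  /* shorten 'after' by however long event processing has already been
     delayed, so the timeout ends up relative to the real current time */
  return after + loop_now - real_now;
}
```
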
=over 4

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

…
later, again, and again, until stopped manually.

The timer itself will do a best-effort at avoiding drift, that is, if you
configure a timer to trigger every 10 seconds, then it will trigger at
exactly 10 second intervals. If, however, your program cannot keep up with
the timer (because it takes longer than those 10 seconds to do stuff) the
timer will not fire more than once per event loop iteration.

=item ev_timer_again (loop)

This will act as if the timer timed out and restart it again if it is
…
again).

They can also be used to implement vastly more complex timers, such as
triggering an event on each midnight, local time.

As with timers, the callback is guaranteed to be invoked only when the
time (C<at>) has been passed, but if multiple periodic timers become ready
during the same loop iteration then order of execution is undefined.

=over 4

=item ev_periodic_init (ev_periodic *, callback, ev_tstamp at, ev_tstamp interval, reschedule_cb)

=item ev_periodic_set (ev_periodic *, ev_tstamp after, ev_tstamp repeat, reschedule_cb)

Lots of arguments, let's sort it out... There are basically three modes of
operation, and we will explain them from simplest to complex:

=over 4

=item * absolute timer (interval = reschedule_cb = 0)

…

In this mode the values for C<interval> and C<at> are both being
ignored. Instead, each time the periodic watcher gets scheduled, the
reschedule callback will be called with the watcher as first, and the
current time as second argument.

NOTE: I<This callback MUST NOT stop or destroy any periodic watcher,
ever, or make any event loop modifications>. If you need to stop it,
return C<now + 1e30> (or so, fudge fudge) and stop it afterwards (e.g. by
starting a prepare watcher).

Its prototype is C<ev_tstamp (*reschedule_cb)(struct ev_periodic *w,
ev_tstamp now)>, e.g.:

    static ev_tstamp my_rescheduler (struct ev_periodic *w, ev_tstamp now)
…

It must return the next time to trigger, based on the passed time value
(that is, the lowest time value larger than the second argument). It
will usually be called just before the callback will be triggered, but
might be called at other times, too.

NOTE: I<< This callback must always return a time that is later than the
passed C<now> value >>. Not even C<now> itself will do, it I<must> be larger.

This can be used to create very complex timers, such as a timer that
triggers on each midnight, local time. To do this, you would calculate the
next midnight after C<now> and return the timestamp value for this. How
you do this is, again, up to you (but it is not trivial, which is the main
reason I omitted it as an example).

=back

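To illustrate the manual reschedule mode and its "strictly later than C<now>" rule, here is a hypothetical reschedule callback that fires on every full minute; the empty struct declaration merely stands in for libev's own watcher type so the sketch compiles without <ev.h>:

```c
typedef double ev_tstamp;

struct ev_periodic; /* stand-in declaration for the real watcher type */

/* trigger at the next full minute - strictly after 'now', even when
   'now' lies exactly on a minute boundary */
static ev_tstamp
next_minute (struct ev_periodic *w, ev_tstamp now)
{
  (void)w; /* the watcher is not needed for this schedule */
  return ((long long)(now / 60.) + 1) * 60.;
}
```
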
=item ev_periodic_again (loop, ev_periodic *)

…

=back

=head2 C<ev_prepare> and C<ev_check> - customise your event loop

Prepare and check watchers are usually (but not always) used in tandem:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

Their main purpose is to integrate other event mechanisms into libev. This
could be used, for example, to track variable changes, implement your own
watchers, integrate net-snmp or a coroutine library and lots more.
…
to be watched by the other library, registering C<ev_io> watchers for
them and starting an C<ev_timer> watcher for any timeouts (many libraries
provide just this functionality). Then, in the check watcher you check for
any events that occurred (by checking the pending status of all watchers
and stopping them) and call back into the library. The I/O and timer
callbacks will never actually be called (but must be valid nevertheless,
because you never know, you know?).

As another example, the Perl Coro module uses these hooks to integrate
coroutines into libev programs, by yielding to other active coroutines
during each prepare and only letting the process block if no coroutines
are ready to run (it's actually more complicated: it only runs coroutines
with priority higher than or equal to the event loop and one coroutine
of lower priority, but only once, using idle watchers to keep the event
loop from blocking if lower-priority coroutines are active, thus mapping
low-priority coroutines to idle/background tasks).

=over 4

=item ev_prepare_init (ev_prepare *, callback)

…

=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback)

This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
watchers. This is useful if you want to wait for a single event on an fd
or timeout without having to allocate/configure/start/stop/free one or
more watchers yourself.

If C<fd> is less than 0, then no I/O watcher will be started and C<events>
will be ignored. Otherwise, an C<ev_io> watcher for the given C<fd> and
C<events> set will be created and started.
…
started. Otherwise an C<ev_timer> watcher with after = C<timeout> (and
repeat = 0) will be started. While C<0> is a valid timeout, it is of
dubious value.

The callback has the type C<void (*cb)(int revents, void *arg)> and gets
passed an C<revents> set like normal event callbacks (a combination of
C<EV_ERROR>, C<EV_READ>, C<EV_WRITE> or C<EV_TIMEOUT>) and the C<arg>
value passed to C<ev_once>:

    static void stdin_ready (int revents, void *arg)
    {
…

Feed an event as if the given signal occurred (loop must be the default
loop!).

=back

=head1 LIBEVENT EMULATION

Libev offers a compatibility emulation layer for libevent. It cannot
emulate the internals of libevent, so here are some usage hints:

=over 4

=item * Use it by including <event.h>, as usual.

=item * The following members are fully supported: ev_base, ev_callback,
ev_arg, ev_fd, ev_res, ev_events.

=item * Avoid using ev_flags and the EVLIST_*-macros. While they are
maintained by libev, they do not work exactly the same way as in libevent
(consider them a private API).

=item * Priorities are not currently supported. Initialising priorities
will fail and all watchers will have the same priority, even though there
is an ev_pri field.

=item * Other members are not supported.

=item * The libev emulation is I<not> ABI compatible to libevent, you need
to use the libev header file and library.

=back

=head1 C++ SUPPORT

TBD.

=head1 AUTHOR

Marc Lehmann <libev@schmorp.de>.
