…

Libev represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970, details are complicated, don't ask). This type is
called C<ev_tstamp>, which is what you should use too. It usually aliases
to the C<double> type in C, and when you need to do any calculations on
it, you should treat it as such.
|
|
=head1 GLOBAL FUNCTIONS

These functions can be called anytime, even before initialising the
library in any way.
…
Usually, it's a good idea to terminate if the major versions mismatch,
as this indicates an incompatible change. Minor versions are usually
compatible to older versions, so a larger minor version alone is usually
not a problem.
|
|
Example: make sure we haven't accidentally been linked against the wrong
version:

   assert (("libev version mismatch",
            ev_version_major () == EV_VERSION_MAJOR
            && ev_version_minor () >= EV_VERSION_MINOR));

=item unsigned int ev_supported_backends ()

Return the set of all backends (i.e. their corresponding C<EV_BACKEND_*>
value) compiled into this binary of libev (independent of their
availability on the system you are running on). See C<ev_default_loop> for
a description of the set values.

Example: make sure we have the epoll method, because yeah this is cool and
a must have and can we have a torrent of it please!!!11

   assert (("sorry, no epoll, no sex",
            ev_supported_backends () & EVBACKEND_EPOLL));

=item unsigned int ev_recommended_backends ()

Return the set of all backends compiled into this binary of libev and also
recommended for this platform. This set is often smaller than the one
returned by C<ev_supported_backends>, as for example kqueue is broken on
most BSDs and will not be autodetected unless you explicitly request it
(assuming you know what you are doing). This is the set of backends that
libev will probe for if you specify no backends explicitly.

=item unsigned int ev_embeddable_backends ()

Returns the set of backends that are embeddable in other event loops. This
is the theoretical, all-platform, value. To find which backends
might be supported on the current system, you would need to look at
C<ev_embeddable_backends () & ev_supported_backends ()>, likewise for
recommended ones.

See the description of C<ev_embed> watchers for more info.
|
|
120 | |
=item ev_set_allocator (void *(*cb)(void *ptr, long size))

Sets the allocation function to use (the prototype is similar to the
realloc C function, the semantics are identical). It is used to allocate
and free memory (no surprises here). If it returns zero when memory
…
destructive action. The default is your system realloc function.

You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.
|
|
132 | |
|
|
Example: replace the libev allocator with one that waits a bit and then
retries (example requires a standards-compliant C<realloc>):

   static void *
   persistent_realloc (void *ptr, long size)
   {
     for (;;)
       {
         void *newptr = realloc (ptr, size);

         if (newptr)
           return newptr;

         sleep (60);
       }
   }

   ...
   ev_set_allocator (persistent_realloc);
91 | |
152 | |
=item ev_set_syserr_cb (void (*cb)(const char *msg));

Set the callback function to call on a retryable syscall error (such
as failed select, poll, epoll_wait). The message is a printable string
…
callback is set, then libev will expect it to remedy the situation, no
matter what, when it returns. That is, libev will generally retry the
requested operation, or, if the condition doesn't go away, do bad stuff
(such as abort).
101 | |
162 | |
|
|
Example: do the same thing as libev does internally:

   static void
   fatal_error (const char *msg)
   {
     perror (msg);
     abort ();
   }

   ...
   ev_set_syserr_cb (fatal_error);
=back

=head1 FUNCTIONS CONTROLLING THE EVENT LOOP

An event loop is described by a C<struct ev_loop *>. The library knows two
…
=item struct ev_loop *ev_default_loop (unsigned int flags)

This will initialise the default event loop if it hasn't been initialised
yet and return it. If the default loop could not be initialised, returns
false. If it already was initialised it simply returns it (and ignores the
flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

The following flags are supported:

=over 4

=item C<EVFLAG_AUTO>

…
C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.
149 | |
222 | |
=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of fds.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds) (total_fds being the total number of fds or the highest fd),
epoll scales either O(1) or O(active_fds).
…
result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, dup()ed file descriptors might not work very
well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.
|
|
254 | |
=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "autodetected"
unless you explicitly request it in the flags (i.e. using
C<EVBACKEND_KQUEUE>).

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While starting and stopping an I/O watcher does not cause an
extra syscall as with epoll, it still adds up to four event changes per
incident, so it's best to avoid that.
191 | |
269 | |
=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be).

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).
200 | |
278 | |
|
|
Please note that Solaris ports can result in a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.
206 | |
288 | |
=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
specified, most compiled-in backends will be tried, usually in reverse
order of their flag values :)
213 | |
295 | |
|
|
The most typical usage is like this:

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

Restrict libev to the select and poll backends, and do not allow
environment settings to be taken into account:

   ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);

Use whatever libev has to offer, but make sure that kqueue is used if
available (warning, breaks stuff, best use only with your own private
event loop and only if you know the OS supports your types of fds):

   ev_default_loop (ev_recommended_backends () | EVBACKEND_KQUEUE);
|
|
311 | |
=item struct ev_loop *ev_loop_new (unsigned int flags)

Similar to C<ev_default_loop>, but always creates a new event loop that is
always distinct from the default loop. Unlike the default loop, it cannot
handle signal and child watchers, and attempts to do so will be greeted by
undefined behaviour (or a failed assertion if assertions are enabled).
220 | |
318 | |
|
|
Example: try to create an event loop that uses epoll and nothing else.

   struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
   if (!epoller)
     fatal ("no epoll found here, maybe it hides under your chair");
|
|
324 | |
=item ev_default_destroy ()

Destroys the default loop again (frees all memory and kernel state
etc.). This stops all registered event watchers (by not touching them in
any way whatsoever, although you cannot rely on this :).
…
it just in case after a fork. To make this easy, the function will fit in
quite nicely into a call to C<pthread_atfork>:

   pthread_atfork (0, 0, ev_default_fork);
248 | |
352 | |
|
|
At the moment, C<EVBACKEND_SELECT> and C<EVBACKEND_POLL> are safe to use
without calling this function, so if you force one of those backends you
do not need to care.
|
|
356 | |
=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after fork, and how you do this is entirely your own problem.

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.
259 | |
367 | |
=item ev_tstamp ev_now (loop)

Returns the current "event loop time", which is the time the event loop
received events and started processing them. This timestamp does not
change as long as callbacks are being processed, and this is also the base
time used for relative timers. You can treat it as the timestamp of the
event occurring (or more correctly, libev finding out about it).
267 | |
375 | |
=item ev_loop (loop, int flags)

Finally, this is it, the event handler. This function usually is called
after you initialised all your watchers and you want to start handling
events.

If the flags argument is specified as C<0>, it will not return until
either no event watchers are active anymore or C<ev_unloop> was called.
|
|
384 | |
|
|
Please note that an explicit C<ev_unloop> is usually better than
relying on all watchers to be stopped when deciding when a program has
finished (especially in interactive programs), but having a program that
automatically loops as long as it has to and no longer by virtue of
relying on its watchers stopping correctly is a thing of beauty.
276 | |
390 | |
A flags value of C<EVLOOP_NONBLOCK> will look for new events, will handle
those events and any outstanding ones, but will not block your process in
case there are no events and will return after one iteration of the loop.

A flags value of C<EVLOOP_ONESHOT> will look for new events (waiting if
necessary) and will handle those and any outstanding ones. It will block
your process until at least one new event arrives, and will return after
one iteration of the loop. This is useful if you are waiting for some
external event in conjunction with something not expressible using other
libev watchers. However, a pair of C<ev_prepare>/C<ev_check> watchers is
usually a better approach for this kind of thing.
285 | |
402 | |
|
|
Here are the gory details of what C<ev_loop> does:

   * If there are no active watchers (reference count is zero), return.
   - Queue prepare watchers and then call all outstanding watchers.
   - If we have been forked, recreate the kernel state.
   - Update the kernel state with all outstanding changes.
   - Update the "event loop time".
   - Calculate for how long to block.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" and do time jump handling.
   - Queue all outstanding timers.
   - Queue all outstanding periodics.
   - If no events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
   - If ev_unloop has been called or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
     were used, return, otherwise continue with step *.
|
|
423 | |
|
|
Example: queue some jobs and then loop until no events are outstanding
anymore.

   ... queue jobs here, make sure they register event watchers as long
   ... as they still have work to do (even an idle watcher will do..)
   ev_loop (my_loop, 0);
   ... jobs done. yeah!
307 | |
431 | |
=item ev_unloop (loop, how)

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
…
visible to the libev user and should not keep C<ev_loop> from exiting if
no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>.
329 | |
453 | |
|
|
Example: create a signal watcher, but keep it from keeping C<ev_loop>
running when nothing else is active.

   struct ev_signal exitsig;
   ev_signal_init (&exitsig, sig_cb, SIGINT);
   ev_signal_start (myloop, &exitsig);
   ev_unref (myloop);

Example: for some weird reason, unregister the above signal handler again.

   ev_ref (myloop);
   ev_signal_stop (myloop, &exitsig);
|
|
466 | |
=back

=head1 ANATOMY OF A WATCHER

A watcher is a structure that you create and register to record your
…
*) >>), and you can stop watching for events at any time by calling the
corresponding stop function (C<< ev_<type>_stop (loop, watcher *) >>).

As long as your watcher is active (has been started but not stopped) you
must not touch the values stored in it. Most specifically you must never
reinitialise it or call its C<set> macro.
Each and every callback receives the event loop pointer as first, the
registered watcher structure as second, and a bitset of received events as
third argument.
383 | |
515 | |
…
with the error from read() or write(). This will not work in multithreaded
programs, though, so beware.

=back

|
|
577 | =head2 SUMMARY OF GENERIC WATCHER FUNCTIONS |
|
|
578 | |
|
|
In the following description, C<TYPE> stands for the watcher type,
e.g. C<timer> for C<ev_timer> watchers and C<io> for C<ev_io> watchers.

=over 4

=item C<ev_init> (ev_TYPE *watcher, callback)

This macro initialises the generic portion of a watcher. The contents
of the watcher object can be arbitrary (so C<malloc> will do). Only
the generic parts of the watcher are initialised; you I<need> to call
the type-specific C<ev_TYPE_set> macro afterwards to initialise the
type-specific parts. For each type there is also a C<ev_TYPE_init> macro
which rolls both calls into one.

You can reinitialise a watcher at any time as long as it has been stopped
(or never started) and there are no pending events outstanding.

The callback is always of type C<void (*)(ev_loop *loop, ev_TYPE *watcher,
int revents)>.

=item C<ev_TYPE_set> (ev_TYPE *, [args])

This macro initialises the type-specific parts of a watcher. You need to
call C<ev_init> at least once before you call this macro, but you can
call C<ev_TYPE_set> any number of times. You must not, however, call this
macro on a watcher that is active (it can be pending, however, which is a
difference from the C<ev_init> macro).

Although some watcher types do not have type-specific arguments
(e.g. C<ev_prepare>) you still need to call their C<set> macro.

=item C<ev_TYPE_init> (ev_TYPE *watcher, callback, [args])

This convenience macro rolls both the C<ev_init> and C<ev_TYPE_set> macro
calls into a single call. This is the most convenient method to initialise
a watcher. The same limitations apply, of course.

=item C<ev_TYPE_start> (loop *, ev_TYPE *watcher)

Starts (activates) the given watcher. Only active watchers will receive
events. If the watcher is already active nothing will happen.

=item C<ev_TYPE_stop> (loop *, ev_TYPE *watcher)

Stops the given watcher again (if active) and clears the pending
status. It is possible that stopped watchers are pending (for example,
non-repeating timers are being stopped when they become pending), but
C<ev_TYPE_stop> ensures that the watcher is neither active nor pending. If
you want to free or reuse the memory used by the watcher it is therefore a
good idea to always call its C<ev_TYPE_stop> function.

=item bool ev_is_active (ev_TYPE *watcher)

Returns a true value iff the watcher is active (i.e. it has been started
and not yet been stopped). As long as a watcher is active you must not
modify it.

=item bool ev_is_pending (ev_TYPE *watcher)

Returns a true value iff the watcher is pending (i.e. it has outstanding
events but its callback has not yet been invoked). As long as a watcher
is pending (but not active) you must not call an init function on it (but
C<ev_TYPE_set> is safe) and you must make sure the watcher is available to
libev (e.g. you cannot C<free ()> it).

=item callback = ev_cb (ev_TYPE *watcher)

Returns the callback currently set on the watcher.

=item ev_cb_set (ev_TYPE *watcher, callback)

Change the callback. You can change the callback at virtually any time
(modulo threads).

=back

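Example (a sketch; C<my_cb> is a placeholder callback name): initialise an
C<ev_timer> the explicit two-step way, which is equivalent to the single
convenience-macro call:

   struct ev_timer t;

   ev_init (&t, my_cb);        /* generic part */
   ev_timer_set (&t, 60., 0.); /* type-specific part */

   /* the above two calls are equivalent to this one: */
   ev_timer_init (&t, my_cb, 60., 0.);
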
=head2 ASSOCIATING CUSTOM DATA WITH A WATCHER

Each watcher has, by default, a member C<void *data> that you can change
and read at any time; libev will completely ignore it. This can be used
to associate arbitrary data with your watcher. If you need more data and
…
=head1 WATCHER TYPES

This section describes each watcher in detail, but will not repeat
information given in the last section.


=head2 C<ev_io> - is this file descriptor readable or writable

I/O watchers check whether a file descriptor is readable or writable
in each iteration of the event loop (this behaviour is called
level-triggering because you keep receiving events as long as the
…
descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket etc. description (that is, they share
the same underlying "file open").

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).

=over 4

=item ev_io_init (ev_io *, callback, int fd, int events)

…

Configures an C<ev_io> watcher. The fd is the file descriptor to receive
events for and events is either C<EV_READ>, C<EV_WRITE> or C<EV_READ |
EV_WRITE> to receive the given events.

Please note that most of the more scalable backend mechanisms (for example
epoll and Solaris ports) can result in spurious readiness notifications
for file descriptors, so you practically need to use non-blocking I/O (and
treat callback invocation as a hint only), or retest separately with a safe
interface before doing I/O (XLib can do this), or force the use of either
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>, which don't suffer from this
problem. Also note that it is quite easy to have your callback invoked
when the readiness condition is no longer valid even when employing
typical ways of handling events, so it's a good idea to use non-blocking
I/O unconditionally.

=back

Example: call C<stdin_readable_cb> when STDIN_FILENO has become, well,
readable, but only once. Since it is likely line-buffered, you could
attempt to read a whole line in the callback:

   static void
   stdin_readable_cb (struct ev_loop *loop, struct ev_io *w, int revents)
   {
     ev_io_stop (loop, w);
     .. read from stdin here (or from w->fd) and handle any I/O errors
   }

   ...
   struct ev_loop *loop = ev_default_init (0);
   struct ev_io stdin_readable;
   ev_io_init (&stdin_readable, stdin_readable_cb, STDIN_FILENO, EV_READ);
   ev_io_start (loop, &stdin_readable);
   ev_loop (loop, 0);


=head2 C<ev_timer> - relative and optionally recurring timeouts

Timer watchers are simple relative timers that generate an event after a
given time, and optionally repeat in regular intervals after that.
…
state where you do not expect data to travel on the socket, you can stop
the timer, and again will automatically restart it if need be.

=back

Example: create a timer that fires after 60 seconds.

   static void
   one_minute_cb (struct ev_loop *loop, struct ev_timer *w, int revents)
   {
     .. one minute over, w is actually stopped right here
   }

   struct ev_timer mytimer;
   ev_timer_init (&mytimer, one_minute_cb, 60., 0.);
   ev_timer_start (loop, &mytimer);

Example: create a timeout timer that times out after 10 seconds of
inactivity.

   static void
   timeout_cb (struct ev_loop *loop, struct ev_timer *w, int revents)
   {
     .. ten seconds without any activity
   }

   struct ev_timer mytimer;
   ev_timer_init (&mytimer, timeout_cb, 0., 10.); /* note, only repeat used */
   ev_timer_again (loop, &mytimer); /* start timer */
   ev_loop (loop, 0);

   // and in some piece of code that gets executed on any "activity":
   // reset the timeout to start ticking again at 10 seconds
   ev_timer_again (loop, &mytimer);


=head2 C<ev_periodic> - to cron or not to cron

Periodic watchers are also timers of a kind, but they are very versatile
(and unfortunately a bit complex).

…
a different time than the last time it was called (e.g. in a crond-like
program when the crontabs have changed).

=back

Example: call a callback every hour, or, more precisely, whenever the
system clock is divisible by 3600. The callback invocation times have
potentially a lot of jitter, but good long-term stability.

   static void
   clock_cb (struct ev_loop *loop, struct ev_periodic *w, int revents)
   {
     ... it's now a full hour (UTC, or TAI or whatever your clock follows)
   }

   struct ev_periodic hourly_tick;
   ev_periodic_init (&hourly_tick, clock_cb, 0., 3600., 0);
   ev_periodic_start (loop, &hourly_tick);

Example: the same as above, but use a reschedule callback to do it:

   #include <math.h>

   static ev_tstamp
   my_scheduler_cb (struct ev_periodic *w, ev_tstamp now)
   {
     return now + (3600. - fmod (now, 3600.));
   }

   ev_periodic_init (&hourly_tick, clock_cb, 0., 0., my_scheduler_cb);

Example: call a callback every hour, starting now:

   struct ev_periodic hourly_tick;
   ev_periodic_init (&hourly_tick, clock_cb,
                     fmod (ev_now (loop), 3600.), 3600., 0);
   ev_periodic_start (loop, &hourly_tick);


=head2 C<ev_signal> - signal me when a signal gets signalled

Signal watchers will trigger an event when the process receives a specific
signal one or more times. Even though signals are very asynchronous, libev
will try its best to deliver signals synchronously, i.e. as part of the
…
Configures the watcher to trigger on the given signal number (usually one
of the C<SIGxxx> constants).

=back

=head2 C<ev_child> - wait for pid status changes

Child watchers trigger when your process receives a SIGCHLD in response to
some child status changes (most typically when a child of yours dies).

…
the status word (use the macros from C<sys/wait.h> and see your system's
C<waitpid> documentation). The C<rpid> member contains the pid of the
process causing the status change.

=back
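Example (a sketch; it assumes a C<pid> obtained from an earlier C<fork>
and a running C<loop>, and that the status word is available in the
C<rstatus> member): wait for a specific child to exit:

   static void
   child_cb (struct ev_loop *loop, struct ev_child *w, int revents)
   {
     /* w->rpid is the pid, w->rstatus the waitpid-style status word */
     ev_child_stop (loop, w);
   }

   struct ev_child cw;
   ev_child_init (&cw, child_cb, pid); /* pid of the child to watch */
   ev_child_start (loop, &cw);
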
|
|
Example: try to exit cleanly on SIGINT and SIGTERM.

   static void
   sigint_cb (struct ev_loop *loop, struct ev_signal *w, int revents)
   {
     ev_unloop (loop, EVUNLOOP_ALL);
   }

   struct ev_signal signal_watcher;
   ev_signal_init (&signal_watcher, sigint_cb, SIGINT);
   ev_signal_start (loop, &signal_watcher);


=head2 C<ev_idle> - when you've got nothing better to do

Idle watchers trigger events when no other events are pending
(prepare, check and other idle watchers do not count). That is, as long
…
kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
believe me.

=back

Example: dynamically allocate an C<ev_idle> watcher, start it, and in the
callback, free it. Also, use no error checking, as usual.

   static void
   idle_cb (struct ev_loop *loop, struct ev_idle *w, int revents)
   {
     free (w);
     // now do something you wanted to do when the program has
     // no longer anything immediate to do.
   }

   struct ev_idle *idle_watcher = malloc (sizeof (struct ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop

Prepare and check watchers are usually (but not always) used in tandem:
prepare watchers get invoked before the process blocks and check watchers
afterwards.

Their main purpose is to integrate other event mechanisms into libev and
their use is somewhat advanced. They could be used, for example, to track
variable changes, implement your own watchers, integrate net-snmp or a
coroutine library and lots more.

This is done by examining in each prepare call which file descriptors need
to be watched by the other library, registering C<ev_io> watchers for
them and starting an C<ev_timer> watcher for any timeouts (many libraries
provide just this functionality). Then, in the check watcher you check for
…
parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
macros, but using them is utterly, utterly and completely pointless.

=back

Example: *TODO*.
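In the meantime, here is a minimal sketch (the watcher and callback names
are made up for illustration): take a timestamp in a prepare watcher and
compare it in a check watcher to see roughly how long the loop blocked:

   static ev_tstamp before_block;

   static void
   prepare_cb (struct ev_loop *loop, struct ev_prepare *w, int revents)
   {
     before_block = ev_time ();
   }

   static void
   check_cb (struct ev_loop *loop, struct ev_check *w, int revents)
   {
     ev_tstamp blocked = ev_time () - before_block;
     // "blocked" is roughly how long the loop waited for events
   }

   struct ev_prepare prep;
   struct ev_check chk;
   ev_prepare_init (&prep, prepare_cb);
   ev_check_init (&chk, check_cb);
   ev_prepare_start (loop, &prep);
   ev_check_start (loop, &chk);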
|
|
|
|
|
|
=head2 C<ev_embed> - when one backend isn't enough

This is a rather advanced watcher type that lets you embed one event loop
into another (currently only C<ev_io> events are supported in the embedded
loop, other types of watchers might be handled in a delayed or incorrect
fashion and must not be used).

There are primarily two reasons you would want that: to work around bugs
and to prioritise I/O.

As an example for a bug workaround, the kqueue backend might only support
sockets on some platform, so it is unusable as a generic backend, but you
still want to make use of it because you have many sockets and it scales
so nicely. In this case, you would create a kqueue-based loop and embed it
into your default loop (which might use e.g. poll). Overall operation will
be a bit slower because first libev has to poll and then call kevent, but
at least you can use both backends for what they are best at.

As for prioritising I/O: you rarely have the case where some fds have
to be watched and handled very quickly (with low latency), and even
priorities and idle watchers might have too much overhead. In this case
you would put all the high-priority stuff in one loop and all the rest in
a second one, and embed the second one in the first.

As long as the watcher is active, the callback will be invoked every time
there might be events pending in the embedded loop. The callback must then
call C<ev_embed_sweep (mainloop, watcher)> to make a single sweep and invoke
their callbacks (you could also start an idle watcher to give the embedded
loop strictly lower priority for example). You can also set the callback
to C<0>, in which case the embed watcher will automatically execute the
embedded loop sweep.

As long as the watcher is started it will automatically handle events. The
callback will be invoked whenever some events have been handled. You can
set the callback to C<0> to avoid having to specify one if you are not
interested in that.

Also, no special provisions have currently been made for forking: when
you fork, you not only have to call C<ev_loop_fork> on both loops, but
you will also have to stop and restart any C<ev_embed> watchers yourself.

Unfortunately, not all backends are embeddable, only the ones returned by
C<ev_embeddable_backends> are, which, unfortunately, does not include any
portable one.

So when you want to use this feature you will always have to be prepared
for the case that you cannot get an embeddable loop. The recommended way
to get around this is to have a separate variable for your embeddable
loop, try to create it, and if that fails, use the normal loop for
everything:

   struct ev_loop *loop_hi = ev_default_init (0);
   struct ev_loop *loop_lo = 0;
   struct ev_embed embed;

   // see if there is a chance of getting one that works
   // (remember that a flags value of 0 means autodetection)
   loop_lo = ev_embeddable_backends () & ev_recommended_backends ()
     ? ev_loop_new (ev_embeddable_backends () & ev_recommended_backends ())
     : 0;

   // if we got one, then embed it, otherwise default to loop_hi
   if (loop_lo)
     {
       ev_embed_init (&embed, 0, loop_lo);
       ev_embed_start (loop_hi, &embed);
     }
   else
     loop_lo = loop_hi;

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)

=item ev_embed_set (ev_embed *, callback, struct ev_loop *embedded_loop)

Configures the watcher to embed the given loop, which must be
embeddable. If the callback is C<0>, then C<ev_embed_sweep> will be
invoked automatically, otherwise it is the responsibility of the callback
to invoke it (it will continue to be called until the sweep has been done,
if you do not want that, you need to temporarily stop the embed watcher).

=item ev_embed_sweep (loop, ev_embed *)

Make a single, non-blocking sweep over the embedded loop. This works
similarly to C<ev_loop (embedded_loop, EVLOOP_NONBLOCK)>, but in the most
appropriate way for embedded loops.

=back

=head1 OTHER FUNCTIONS

There are some other functions of possible interest. Described. Here. Now.

=over 4

…
       /* stdin might have data for us, joy! */;
   }

   ev_once (STDIN_FILENO, EV_READ, 10., stdin_ready, 0);

=item ev_feed_event (ev_loop *, watcher *, int revents)

Feeds the given event set into the event loop, as if the specified event
had happened for the specified watcher (which must be a pointer to an
initialised but not necessarily started event watcher).

=item ev_feed_fd_event (ev_loop *, int fd, int revents)

Feed an event on the given fd, as if a file descriptor backend detected
the given events on it.

=item ev_feed_signal_event (ev_loop *loop, int signum)

Feed an event as if the given signal occurred (C<loop> must be the default
loop!).

=back
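Example (a sketch, e.g. for a test harness; it assumes a running default
C<loop>): pretend that stdin became readable and that a C<SIGTERM>
arrived, waking any watchers registered for them:

   // pretend stdin became readable, as if a backend had detected it
   ev_feed_fd_event (loop, STDIN_FILENO, EV_READ);

   // pretend a SIGTERM arrived (loop must be the default loop)
   ev_feed_signal_event (loop, SIGTERM);
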
=head1 LIBEVENT EMULATION

Libev offers a compatibility emulation layer for libevent. It cannot
emulate the internals of libevent, so here are some usage hints: