Usually, it's a good idea to terminate if the major versions mismatch,
as this indicates an incompatible change. Minor versions are usually
compatible with older versions, so a larger minor version alone is usually
not a problem.

=item unsigned int ev_supported_backends ()

Return the set of all backends (i.e. their corresponding C<EVBACKEND_*>
value) compiled into this binary of libev (independent of their
availability on the system you are running on). See C<ev_default_loop> for
a description of the set values.

=item unsigned int ev_recommended_backends ()

Return the set of all backends compiled into this binary of libev that are
also recommended for this platform. This set is often smaller than the one
returned by C<ev_supported_backends>, as, for example, kqueue is broken on
most BSDs and will not be autodetected unless you explicitly request it
(assuming you know what you are doing). This is the set of backends that
libev will probe for if you specify no backends explicitly.

=item ev_set_allocator (void *(*cb)(void *ptr, long size))

Sets the allocation function to use (the prototype is similar to the
realloc C function, the semantics are identical). It is used to allocate
and free memory (no surprises here). If it returns zero when memory

=item struct ev_loop *ev_default_loop (unsigned int flags)

This will initialise the default event loop if it hasn't been initialised
yet and return it. If the default loop could not be initialised, returns
false. If it already was initialised it simply returns it (and ignores the
flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

The following flags are supported:

=over 4

=item C<EVFLAG_AUTO>

C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.

=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of fds.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on Windows)

And this is your standard poll(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds) (the total number of fds, or the highest fd), epoll scales
either O(1) or O(active_fds).

While stopping and starting an I/O watcher in the same iteration will
result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, dup()ed file descriptors might not work very
well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "autodetected" unless
you explicitly specify it in the flags (i.e. using C<EVBACKEND_KQUEUE>).

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While starting and stopping an I/O watcher does not cause an
extra syscall as with epoll, it still adds up to four event changes per
incident, so it's best to avoid that.

=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be).

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris ports can result in a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
specified, most compiled-in backends will be tried, usually in reverse
order of their flag values :)

The most typical usage is like this:

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");

Restrict libev to the select and poll backends, and do not allow
environment settings to be taken into account:

   ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);

Use whatever libev has to offer, but make sure that kqueue is used if
available (warning, breaks stuff, best use only with your own private
event loop and only if you know the OS supports your types of fds):

   ev_default_loop (ev_recommended_backends () | EVBACKEND_KQUEUE);

=item struct ev_loop *ev_loop_new (unsigned int flags)

Similar to C<ev_default_loop>, but always creates a new event loop that is
distinct from the default loop. Unlike the default loop, it cannot

This function reinitialises the kernel state for backends that have
one. Despite the name, you can call it anytime, but it makes most sense
after forking, in either the parent or child process (or both, but that
again makes little sense).

You I<must> call this function in the child process after forking if and
only if you want to use the event library in both processes. If you just
fork+exec, you don't have to call it.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function will fit
quite nicely into a call to C<pthread_atfork>:

   pthread_atfork (0, 0, ev_default_fork);

At the moment, C<EVBACKEND_SELECT> and C<EVBACKEND_POLL> are safe to use
without calling this function, so if you force one of those backends you
do not need to care.

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
after a fork, and how you do this is entirely your own problem.

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.

=item ev_tstamp ev_now (loop)

Returns the current "event loop time", which is the time the event loop

Finally, this is it, the event handler. This function usually is called
after you initialised all your watchers and you want to start handling
events.

If the flags argument is specified as C<0>, it will not return until
either no event watchers are active anymore or C<ev_unloop> was called.

A flags value of C<EVLOOP_NONBLOCK> will look for new events, will handle
those events and any outstanding ones, but will not block your process in
case there are no events and will return after one iteration of the loop.

A flags value of C<EVLOOP_ONESHOT> will look for new events (waiting if
necessary) and will handle those and any outstanding ones. It will block
your process until at least one new event arrives, and will return after
one iteration of the loop. This is useful if you are waiting for some
external event in conjunction with something not expressible using other
libev watchers. However, a pair of C<ev_prepare>/C<ev_check> watchers is
usually a better approach for this kind of thing.

Here are the gory details of what C<ev_loop> does:


   * If there are no active watchers (reference count is zero), return.
   - Queue prepare watchers and then call all outstanding watchers.
   - If we have been forked, recreate the kernel state.
   - Update the kernel state with all outstanding changes.
   - Update the "event loop time".
   - Calculate for how long to block.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" and do time jump handling.
   - Queue all outstanding timers.
   - Queue all outstanding periodics.
   - If no events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
   - If ev_unloop has been called or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
     were used, return, otherwise continue with step *.

=item ev_unloop (loop, how)

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either

*) >>), and you can stop watching for events at any time by calling the
corresponding stop function (C<< ev_<type>_stop (loop, watcher *) >>).

As long as your watcher is active (has been started but not stopped) you
must not touch the values stored in it. Most specifically you must never
reinitialise it or call its set macro.

You can check whether an event is active by calling the C<ev_is_active
(watcher *)> macro. To see whether an event is outstanding (but the
callback for it has not been called yet) you can use the C<ev_is_pending
(watcher *)> macro.

descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket etc. description (that is, they share
the same underlying "file open").

If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only C<EVBACKEND_SELECT> and
C<EVBACKEND_POLL>).


=over 4

=item ev_io_init (ev_io *, callback, int fd, int events)

=item ev_io_set (ev_io *, int fd, int events)

Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
receive events for, and C<events> is either C<EV_READ>, C<EV_WRITE> or
C<EV_READ | EV_WRITE> to receive the given events.


Please note that most of the more scalable backend mechanisms (for example
epoll and Solaris ports) can result in spurious readiness notifications
for file descriptors, so you practically need to use non-blocking I/O (and
treat callback invocation as a hint only), or retest separately with a safe
interface before doing I/O (XLib can do this), or force the use of either
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>, which don't suffer from this
problem. Also note that it is quite easy to have your callback invoked
when the readiness condition is no longer valid even when employing
typical ways of handling events, so it's a good idea to use non-blocking
I/O unconditionally.

=back

=head2 C<ev_timer> - relative and optionally recurring timeouts
