…
.\}
.rm #[ #] #H #V #F C
.\" ========================================================================
.\"
.IX Title ""<STANDARD INPUT>" 1"
.TH "<STANDARD INPUT>" 1 "2007-11-23" "perl v5.8.8" "User Contributed Perl Documentation"
.SH "NAME"
libev \- a high performance full\-featured event loop written in C
.SH "SYNOPSIS"
.IX Header "SYNOPSIS"
.Vb 1
…
.Sp
Usually, it's a good idea to terminate if the major versions mismatch,
as this indicates an incompatible change. Minor versions are usually
compatible with older versions, so a larger minor version alone is usually
not a problem.
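.Sp
For example, such a version check might be sketched like this (using the
\&\f(CW\*(C`ev_version_major\*(C'\fR function and \f(CW\*(C`EV_VERSION_MAJOR\*(C'\fR macro):
.Sp
.Vb 2
\&   if (ev_version_major () != EV_VERSION_MAJOR)
\&     abort (); /* compiled against an incompatible libev major version */
.Ve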
.IP "unsigned int ev_supported_backends ()" 4
.IX Item "unsigned int ev_supported_backends ()"
Return the set of all backends (i.e. their corresponding \f(CW\*(C`EVBACKEND_*\*(C'\fR
value) compiled into this binary of libev (independent of their
availability on the system you are running on). See \f(CW\*(C`ev_default_loop\*(C'\fR for
a description of the set values.
.IP "unsigned int ev_recommended_backends ()" 4
.IX Item "unsigned int ev_recommended_backends ()"
Return the set of all backends compiled into this binary of libev and also
recommended for this platform. This set is often smaller than the one
returned by \f(CW\*(C`ev_supported_backends\*(C'\fR, as for example kqueue is broken on
most BSDs and will not be autodetected unless you explicitly request it
(assuming you know what you are doing). This is the set of backends that
libev will probe for if you specify no backends explicitly.
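.Sp
For example, one might probe for a specific backend like this (a sketch
only; whether the bit is set depends on how this libev binary was compiled):
.Sp
.Vb 2
\&   if (ev_supported_backends () & EVBACKEND_EPOLL)
\&     printf ("epoll support compiled into this libev binary\en");
.Ve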
.IP "ev_set_allocator (void *(*cb)(void *ptr, long size))" 4
.IX Item "ev_set_allocator (void *(*cb)(void *ptr, long size))"
Sets the allocation function to use (the prototype is similar to the
realloc C function, the semantics are identical). It is used to allocate
and free memory (no surprises here). If it returns zero when memory
…
.IP "struct ev_loop *ev_default_loop (unsigned int flags)" 4
.IX Item "struct ev_loop *ev_default_loop (unsigned int flags)"
This will initialise the default event loop if it hasn't been initialised
yet and return it. If the default loop could not be initialised, returns
false. If it already was initialised it simply returns it (and ignores the
flags. If that is troubling you, check \f(CW\*(C`ev_backend ()\*(C'\fR afterwards).
.Sp
If you don't know what event loop to use, use the one returned from this
function.
.Sp
The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as \f(CW0\fR (or \f(CW\*(C`EVFLAG_AUTO\*(C'\fR).
.Sp
The following flags are supported:
.RS 4
.ie n .IP """EVFLAG_AUTO""" 4
.el .IP "\f(CWEVFLAG_AUTO\fR" 4
.IX Item "EVFLAG_AUTO"
The default flags value. Use this if you have no clue (it's the right
…
or setgid) then libev will \fInot\fR look at the environment variable
\&\f(CW\*(C`LIBEV_FLAGS\*(C'\fR. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.
.ie n .IP """EVBACKEND_SELECT"" (value 1, portable select backend)" 4
.el .IP "\f(CWEVBACKEND_SELECT\fR (value 1, portable select backend)" 4
.IX Item "EVBACKEND_SELECT (value 1, portable select backend)"
This is your standard \fIselect\fR\|(2) backend. Not \fIcompletely\fR standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's usually
the fastest backend for a low number of fds.
.ie n .IP """EVBACKEND_POLL"" (value 2, poll backend, available everywhere except on Windows)" 4
.el .IP "\f(CWEVBACKEND_POLL\fR (value 2, poll backend, available everywhere except on Windows)" 4
.IX Item "EVBACKEND_POLL (value 2, poll backend, available everywhere except on Windows)"
And this is your standard \fIpoll\fR\|(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).
.ie n .IP """EVBACKEND_EPOLL"" (value 4, Linux)" 4
.el .IP "\f(CWEVBACKEND_EPOLL\fR (value 4, Linux)" 4
.IX Item "EVBACKEND_EPOLL (value 4, Linux)"
For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds).
.Sp
While stopping and starting an I/O watcher in the same iteration will
result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, \fIdup()\fRed file descriptors might not work very
well if you register events for both fds.
.Sp
Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.
.ie n .IP """EVBACKEND_KQUEUE"" (value 8, most \s-1BSD\s0 clones)" 4
.el .IP "\f(CWEVBACKEND_KQUEUE\fR (value 8, most \s-1BSD\s0 clones)" 4
.IX Item "EVBACKEND_KQUEUE (value 8, most BSD clones)"
Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being \*(L"autodetected\*(R"
unless you explicitly specify it in the flags (i.e. using
\&\f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR).
.Sp
It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While starting and stopping an I/O watcher does not cause an
extra syscall as with epoll, it still adds up to four event changes per
incident, so it's best to avoid that.
.ie n .IP """EVBACKEND_DEVPOLL"" (value 16, Solaris 8)" 4
.el .IP "\f(CWEVBACKEND_DEVPOLL\fR (value 16, Solaris 8)" 4
.IX Item "EVBACKEND_DEVPOLL (value 16, Solaris 8)"
This is not implemented yet (and might never be).
.ie n .IP """EVBACKEND_PORT"" (value 32, Solaris 10)" 4
.el .IP "\f(CWEVBACKEND_PORT\fR (value 32, Solaris 10)" 4
.IX Item "EVBACKEND_PORT (value 32, Solaris 10)"
This uses the Solaris 10 port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).
.Sp
Please note that Solaris ports can result in a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.
.ie n .IP """EVBACKEND_ALL""" 4
.el .IP "\f(CWEVBACKEND_ALL\fR" 4
.IX Item "EVBACKEND_ALL"
Try all backends (even potentially broken ones that wouldn't be tried
with \f(CW\*(C`EVFLAG_AUTO\*(C'\fR). Since this is a mask, you can do stuff such as
\&\f(CW\*(C`EVBACKEND_ALL & ~EVBACKEND_KQUEUE\*(C'\fR.
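.Sp
For instance, to try every backend except kqueue (an illustrative sketch):
.Sp
.Vb 1
\&   ev_default_loop (EVBACKEND_ALL & ~EVBACKEND_KQUEUE);
.Ve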
.RE
.RS 4
.Sp
If one or more of these are ored into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
specified, most compiled-in backends will be tried, usually in reverse
order of their flag values :)
.Sp
The most typical usage is like this:
.Sp
.Vb 2
\&   if (!ev_default_loop (0))
\&     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
.Ve
.Sp
Restrict libev to the select and poll backends, and do not allow
environment settings to be taken into account:
.Sp
.Vb 1
\&   ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);
.Ve
.Sp
Use whatever libev has to offer, but make sure that kqueue is used if
available (warning, breaks stuff, best use only with your own private
event loop and only if you know the \s-1OS\s0 supports your types of fds):
.Sp
.Vb 1
\&   ev_default_loop (ev_recommended_backends () | EVBACKEND_KQUEUE);
.Ve
.RE
.IP "struct ev_loop *ev_loop_new (unsigned int flags)" 4
.IX Item "struct ev_loop *ev_loop_new (unsigned int flags)"
Similar to \f(CW\*(C`ev_default_loop\*(C'\fR, but always creates a new event loop that is
distinct from the default loop. Unlike the default loop, it cannot
…
This function reinitialises the kernel state for backends that have
one. Despite the name, you can call it anytime, but it makes most sense
after forking, in either the parent or child process (or both, but that
again makes little sense).
.Sp
You \fImust\fR call this function in the child process after forking if and
only if you want to use the event library in both processes. If you just
fork+exec, you don't have to call it.
.Sp
The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function will fit
quite nicely into a call to \f(CW\*(C`pthread_atfork\*(C'\fR:
.Sp
.Vb 1
\&   pthread_atfork (0, 0, ev_default_fork);
.Ve
.Sp
At the moment, \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR and \f(CW\*(C`EVBACKEND_POLL\*(C'\fR are safe to use
without calling this function, so if you force one of those backends you
do not need to care.
.IP "ev_loop_fork (loop)" 4
.IX Item "ev_loop_fork (loop)"
Like \f(CW\*(C`ev_default_fork\*(C'\fR, but acts on an event loop created by
\&\f(CW\*(C`ev_loop_new\*(C'\fR. Yes, you have to call this on every allocated event loop
after fork, and how you do this is entirely your own problem.
.IP "unsigned int ev_backend (loop)" 4
.IX Item "unsigned int ev_backend (loop)"
Returns one of the \f(CW\*(C`EVBACKEND_*\*(C'\fR flags indicating the event backend in
use.
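.Sp
For example (a sketch only; which backend is reported depends on the
platform and flags):
.Sp
.Vb 3
\&   struct ev_loop *loop = ev_default_loop (0);
\&   if (loop && ev_backend (loop) == EVBACKEND_EPOLL)
\&     printf ("using epoll\en");
.Ve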
.IP "ev_tstamp ev_now (loop)" 4
.IX Item "ev_tstamp ev_now (loop)"
Returns the current \*(L"event loop time\*(R", which is the time the event loop
got events and started processing them. This timestamp does not change
…
.IX Item "ev_loop (loop, int flags)"
Finally, this is it, the event handler. This function usually is called
after you initialised all your watchers and you want to start handling
events.
.Sp
If the flags argument is specified as \f(CW0\fR, it will not return until
either no event watchers are active anymore or \f(CW\*(C`ev_unloop\*(C'\fR was called.
.Sp
A flags value of \f(CW\*(C`EVLOOP_NONBLOCK\*(C'\fR will look for new events, will handle
those events and any outstanding ones, but will not block your process in
case there are no events and will return after one iteration of the loop.
.Sp
A flags value of \f(CW\*(C`EVLOOP_ONESHOT\*(C'\fR will look for new events (waiting if
necessary) and will handle those and any outstanding ones. It will block
your process until at least one new event arrives, and will return after
one iteration of the loop. This is useful if you are waiting for some
external event in conjunction with something not expressible using other
libev watchers. However, a pair of \f(CW\*(C`ev_prepare\*(C'\fR/\f(CW\*(C`ev_check\*(C'\fR watchers is
usually a better approach for this kind of thing.
.Sp
Here are the gory details of what \f(CW\*(C`ev_loop\*(C'\fR does:
.Sp
.Vb 18
\&   * If there are no active watchers (reference count is zero), return.
\&   - Queue prepare watchers and then call all outstanding watchers.
\&   - If we have been forked, recreate the kernel state.
\&   - Update the kernel state with all outstanding changes.
\&   - Update the "event loop time".
\&   - Calculate for how long to block.
\&   - Block the process, waiting for any events.
\&   - Queue all outstanding I/O (fd) events.
\&   - Update the "event loop time" and do time jump handling.
\&   - Queue all outstanding timers.
\&   - Queue all outstanding periodics.
\&   - If no events are pending now, queue all idle watchers.
\&   - Queue all check watchers.
\&   - Call all queued watchers in reverse order (i.e. check watchers first).
\&     Signals and child watchers are implemented as I/O watchers, and will
\&     be handled here by queueing them when their watcher gets executed.
\&   - If ev_unloop has been called or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
\&     were used, return, otherwise continue with step *.
.Ve
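.Sp
A minimal invocation, assuming watchers have already been started on the
loop, might look like this:
.Sp
.Vb 3
\&   struct ev_loop *loop = ev_default_loop (0);
\&   /* ... start watchers here ... */
\&   ev_loop (loop, 0); /* returns when no watchers are active or ev_unloop was called */
.Ve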
.IP "ev_unloop (loop, how)" 4
.IX Item "ev_unloop (loop, how)"
Can be used to make a call to \f(CW\*(C`ev_loop\*(C'\fR return early (but only after it
has processed all outstanding events). The \f(CW\*(C`how\*(C'\fR argument must be either
…
*)\*(C'\fR), and you can stop watching for events at any time by calling the
corresponding stop function (\f(CW\*(C`ev_<type>_stop (loop, watcher *)\*(C'\fR).
.PP
As long as your watcher is active (has been started but not stopped) you
must not touch the values stored in it. Most specifically you must never
reinitialise it or call its set macro.
.PP
You can check whether an event is active by calling the \f(CW\*(C`ev_is_active
(watcher *)\*(C'\fR macro. To see whether an event is outstanding (but the
callback for it has not been called yet) you can use the \f(CW\*(C`ev_is_pending
(watcher *)\*(C'\fR macro.
…
descriptors correctly if you register interest in two or more fds pointing
to the same underlying file/socket etc. description (that is, they share
the same underlying \*(L"file open\*(R").
.PP
If you must do this, then force the use of a known-to-be-good backend
(at the time of this writing, this includes only \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR and
\&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR).
.IP "ev_io_init (ev_io *, callback, int fd, int events)" 4
.IX Item "ev_io_init (ev_io *, callback, int fd, int events)"
.PD 0
.IP "ev_io_set (ev_io *, int fd, int events)" 4
.IX Item "ev_io_set (ev_io *, int fd, int events)"
.PD
Configures an \f(CW\*(C`ev_io\*(C'\fR watcher. The fd is the file descriptor to receive
events for and events is either \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR or \f(CW\*(C`EV_READ |
EV_WRITE\*(C'\fR to receive the given events.
.Sp
Please note that most of the more scalable backend mechanisms (for example
epoll and Solaris ports) can result in spurious readiness notifications
for file descriptors, so you practically need to use non-blocking I/O (and
treat callback invocation as a hint only), or retest separately with a safe
interface before doing I/O (XLib can do this), or force the use of either
\&\f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR, which don't suffer from this
problem. Also note that it is quite easy to have your callback invoked
when the readiness condition is no longer valid even when employing
typical ways of handling events, so it's a good idea to use non-blocking
I/O unconditionally.
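.Sp
Putting the above together, a watcher for standard input might be set up as
follows (\f(CW\*(C`my_cb\*(C'\fR is a hypothetical callback of your own; remember to
make the fd non-blocking, as advised above):
.Sp
.Vb 3
\&   ev_io stdin_watcher;
\&   ev_io_init (&stdin_watcher, my_cb, /* fd */ 0, EV_READ);
\&   ev_io_start (loop, &stdin_watcher);
.Ve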
.ie n .Sh """ev_timer"" \- relative and optionally recurring timeouts"
.el .Sh "\f(CWev_timer\fP \- relative and optionally recurring timeouts"
.IX Subsection "ev_timer - relative and optionally recurring timeouts"
Timer watchers are simple relative timers that generate an event after a
given time, and optionally repeat at regular intervals after that.