…
flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

Note that this function is I<not> thread-safe, so if you want to use it
from multiple threads, you have to lock (note also that this is unlikely,
as loops cannot be shared easily between threads anyway).

The default loop is the only loop that can handle C<ev_signal> and
C<ev_child> watchers, and to do this, it always registers a handler
for C<SIGCHLD>. If this is a problem for your app you can either
create a dynamic loop with C<ev_loop_new> that doesn't do that, or you
can simply overwrite the C<SIGCHLD> signal handler I<after> calling
…
For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds) where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds). The epoll design has
a number of shortcomings, such as silently dropping events in some
hard-to-detect cases and requiring a syscall per fd change, no fork
support and bad support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so its
…

Similar to C<ev_default_loop>, but always creates a new event loop that is
always distinct from the default loop. Unlike the default loop, it cannot
handle signal and child watchers, and attempts to do so will be greeted by
undefined behaviour (or a failed assertion if assertions are enabled).

Note that this function I<is> thread-safe, and the recommended way to use
libev with threads is indeed to create one loop per thread, and using the
default loop in the "main" or "initial" thread.

Example: Try to create an event loop that uses epoll and nothing else.

  struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
  if (!epoller)
…

This call incurs the overhead of a syscall only once per loop iteration,
so while the overhead might be noticeable, it doesn't apply to repeated
calls to C<ev_async_send>.

=item bool = ev_async_pending (ev_async *)

Returns a non-zero value when C<ev_async_send> has been called on the
watcher but the event has not yet been processed (or even noted) by the
event loop.

C<ev_async_send> sets a flag in the watcher and wakes up the loop. When
the loop iterates next and checks for the watcher to have become active,
it will reset the flag again. C<ev_async_pending> can be used to very
quickly check whether invoking the loop might be a good idea.

Note that this does I<not> check whether the watcher itself is pending,
only whether it has been requested to make this watcher pending.

=back


=head1 OTHER FUNCTIONS
