…
=item C<EVBACKEND_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.

=item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

=item C<EVBACKEND_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select,
but it scales phenomenally better. While poll and select usually scale
like O(total_fds), where total_fds is the total number of fds (or the
highest fd), epoll scales either O(1) or O(active_fds). The epoll design
has a number of shortcomings, such as silently dropping events in some
hard-to-detect cases and requiring a syscall per fd change, no fork
support and bad support for dup.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, C<dup ()>'ed file descriptors might not work
very well if you register events for both fds.

Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible, i.e.
keep at least one watcher active per fd at all times.

While nominally embeddable in other event loops, this feature is broken in
all kernel versions tested so far.

=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work reliably
…
course). While stopping, setting and starting an I/O watcher never
causes an extra syscall as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident, support for C<fork ()> is very bad and it
drops fds silently in similarly hard-to-detect cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
almost everywhere, you should only use it when you have a lot of sockets
(for which it usually works), by embedding it into another event loop
(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
sockets.

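The embedding approach described above looks roughly like this with libev's C<ev_embed> watcher (a non-compilable sketch; the callback argument is left as C<0> for brevity, and the kqueue-capability check is an assumption about how you would guard it):

```c
ev_embed embed;
struct ev_loop *loop_hi = ev_default_init (0); /* e.g. poll/select */
struct ev_loop *loop_lo = 0;

/* create a kqueue loop only if it is supported but not recommended */
if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
  loop_lo = ev_loop_new (EVBACKEND_KQUEUE);

if (loop_lo)
  {
    /* embed the kqueue loop; register only sockets in loop_lo */
    ev_embed_init (&embed, 0, loop_lo);
    ev_embed_start (loop_hi, &embed);
  }
else
  loop_lo = loop_hi; /* no kqueue: fall back to the outer loop */
```

Socket watchers then go into C<loop_lo>, everything else into C<loop_hi>.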
=item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be, unless you send me an
implementation). According to reports, C</dev/poll> only supports sockets
and is not embeddable, which would limit the usefulness of this backend
immensely.

=item C<EVBACKEND_PORT> (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.

It is definitely not recommended to use this flag.

=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as given here). If none are
…
optimisations to libev.

=head3 The special problem of dup'ed file descriptors

Some backends (e.g. epoll) cannot register events for file descriptors,
but only events for the underlying file descriptions. That means when you
have C<dup ()>'ed file descriptors and register events for them, only one
file descriptor might actually receive events.

There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or resorting to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

=head3 The special problem of fork

Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
…
than enough. If you need to manage thousands of children you might want to
increase this value (I<must> be a power of two).

=item EV_INOTIFY_HASHSIZE

C<ev_stat> watchers use a small hash table to distribute workload by
inotify watch id. The default size is C<16> (or C<1> with C<EV_MINIMAL>),
usually more than enough. If you need to manage thousands of C<ev_stat>
watchers you might want to increase this value (I<must> be a power of
two).

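For example, to raise the table size you would define the macro when compiling libev (a hypothetical build fragment; C<64> is an arbitrary power of two):

```c
/* e.g. on the compiler command line: cc -DEV_INOTIFY_HASHSIZE=64 -c ev.c
 * or, when embedding ev.c directly, before including it: */
#define EV_INOTIFY_HASHSIZE 64 /* must be a power of two */
#include "ev.c"
```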