libev/ev.pod - excerpts from revision 1.372 by root, Sat Jun 4 05:44:16
2011 UTC (incorporating the changes made since revision 1.369, Mon May 30
18:34:28 2011 UTC).

[...]

you actually want to know. Also interesting is the combination of
C<ev_update_now> and C<ev_now>.

=item ev_sleep (ev_tstamp interval)

Sleep for the given interval: The current thread will be blocked until
either it is interrupted or the given time interval has passed
(approximately - it might return a bit earlier even if not interrupted).
Returns immediately if C<< interval <= 0 >>.

Basically this is a sub-second-resolution C<sleep ()>.

The range of the C<interval> is limited - libev only guarantees to work
with sleep times of up to one day (C<< interval <= 86400 >>).
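
A minimal sketch of how a longer wait could be built on top of this (the
C<long_sleep> helper is hypothetical, not part of the libev API):

   // sleep for "delay" seconds, splitting the wait into day-sized
   // chunks to stay within the documented one-day guarantee
   static void
   long_sleep (ev_tstamp delay)
   {
     ev_tstamp deadline = ev_time () + delay;

     for (;;)
       {
         ev_tstamp remaining = deadline - ev_time ();

         if (remaining <= 0.)
           break;

         // ev_sleep may return slightly early even when not
         // interrupted, so re-check the deadline each iteration
         ev_sleep (remaining < 86400. ? remaining : 86400.);
       }
   }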

=item int ev_version_major ()

=item int ev_version_minor ()
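
These return the major and minor version of the libev library your
program is linked against. A typical sanity-check sketch, comparing
against the version macros from F<ev.h>:

   assert (("libev version mismatch",
            ev_version_major () == EV_VERSION_MAJOR
            && ev_version_minor () >= EV_VERSION_MINOR));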

[...]

0.1ms) and so on. The biggest issue is fork races, however - if a program
forks then I<both> parent and child process have to recreate the epoll
set, which can take considerable time (one syscall per file descriptor)
and is of course hard to detect.
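
As one hedged sketch of what this means in practice (error handling
omitted): the child must ask libev to rebuild the kernel state via
C<ev_loop_fork>, which is where the per-descriptor cost is paid:

   pid_t pid = fork ();

   if (pid == 0)
     {
       // child: the inherited epoll set cannot be reused, so have
       // libev recreate it - one syscall per watched file descriptor
       ev_loop_fork (EV_DEFAULT);
       ev_run (EV_DEFAULT, 0);
     }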

Epoll is also notoriously buggy - embedding epoll fds I<should> work,
but of course I<doesn't>, and epoll just loves to report events for
totally I<different> file descriptors (even already closed ones, so
one cannot even remove them from the set) than those registered in the
set (especially on SMP systems). Libev tries to counter these spurious
notifications by employing an additional generation counter and comparing
that against the events to filter out spurious ones, recreating the set
when required. Epoll also erroneously rounds down timeouts, but gives you
no way to know when and by how much, so sometimes you have to busy-wait
because epoll returns immediately despite a nonzero timeout. And last but
not least, it also refuses to work with some file descriptors which work
perfectly fine with C<select> (files, many character devices...).

Epoll is truly the train wreck among event poll mechanisms, a frankenpoll,
cobbled together in a hurry, no thought to design or interaction with
others. Oh, the pain, will it ever stop...

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such
incident (because the same I<file descriptor> could point to a different
I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed

[...]

overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations. The
sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average.
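
As a sketch (the C<0.01> is an illustrative value, not a recommendation
made by this text):

   // let libev gather I/O events for up to 10ms per iteration, trading
   // a little latency for fewer poll syscalls under heavy load
   ev_set_io_collect_interval (EV_DEFAULT, 0.01);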

Likewise, by setting a higher I<timeout collect interval> you allow libev
[...]
