Comparing libev/ev.pod (file contents):
Revision 1.24 by root, Mon Nov 12 19:20:05 2007 UTC vs.
Revision 1.30 by root, Fri Nov 23 04:36:03 2007 UTC

=over 4

=item ev_tstamp ev_time ()

Returns the current time as libev would use it. Please note that the
C<ev_now> function is usually faster and also often returns the timestamp
you actually want to know.
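
To illustrate the difference, here is a small sketch (not part of the
original text, and assuming a timer callback registered on some C<loop>):
C<ev_time ()> asks the system for the current time on every call, while
C<ev_now ()> returns the loop's cached timestamp.

   #include <stdio.h>
   #include <ev.h>

   static void
   timeout_cb (struct ev_loop *loop, ev_timer *w, int revents)
   {
     /* ev_now () is the cached time of this loop iteration, */
     /* ev_time () queries the current time again right now. */
     printf ("ev_now: %f  ev_time: %f\n",
             (double)ev_now (loop), (double)ev_time ());
   }
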
=item int ev_version_major ()

=item int ev_version_minor ()

C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will
override the flags completely if it is found in the environment. This is
useful to try out specific backends to test their performance, or to work
around bugs.

=item C<EVMETHOD_SELECT> (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of fds.

=item C<EVMETHOD_POLL> (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated than
select, but handles sparse fds better and has no artificial limit on the
number of fds you can use (except it will slow down considerably with a
lot of inactive fds). It scales similarly to select, i.e. O(total_fds).

=item C<EVMETHOD_EPOLL> (value 4, Linux)

For few fds, this backend is a bit slower than poll and select, but it
scales phenomenally better. While poll and select usually scale like
O(total_fds), where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds).

While stopping and starting an I/O watcher in the same iteration will
result in some caching, there is still a syscall per such incident
(because the fd could point to a different file description now), so it's
best to avoid that. Also, dup()ed file descriptors might not work very
well if you register events for both fds.

=item C<EVMETHOD_KQUEUE> (value 8, most BSD clones)

Kqueue deserves special mention, as at the time of this writing, it
was broken on all BSDs except NetBSD (usually it doesn't work with
anything but sockets and pipes, except on Darwin, where of course it's
completely useless). For this reason it's not being "autodetected" unless
you explicitly specify the flags (i.e. you don't use EVFLAG_AUTO).

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While starting and stopping an I/O watcher does not cause an
extra syscall as with epoll, it still adds up to four event changes per
incident, so it's best to avoid that.

=item C<EVMETHOD_DEVPOLL> (value 16, Solaris 8)

This is not implemented yet (and might never be).

=item C<EVMETHOD_PORT> (value 32, Solaris 10)

This uses the Solaris 10 port mechanism. As with everything on Solaris,
it's really slow, but it still scales very well (O(active_fds)).

=item C<EVMETHOD_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVMETHOD_ALL & ~EVMETHOD_KQUEUE>.

=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse of the order given here). If none
are specified, most compiled-in backends will be tried, usually in
reverse order of their flag values :)
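
As a hedged illustration (not part of the manual text) of how these
values are meant to be combined, the following sketch restricts the
default loop to two specific backends and falls back to the mask idiom
mentioned above:

   #include <ev.h>

   int
   main (void)
   {
     struct ev_loop *loop;

     /* only consider the epoll and select backends */
     loop = ev_default_loop (EVMETHOD_EPOLL | EVMETHOD_SELECT);

     /* if that failed, try every compiled-in backend except kqueue */
     if (!loop)
       loop = ev_default_loop (EVMETHOD_ALL & ~EVMETHOD_KQUEUE);

     return loop ? 0 : 1;
   }
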
=item struct ev_loop *ev_loop_new (unsigned int flags)

Similar to C<ev_default_loop>, but always creates a new event loop that
is distinct from the default loop. Unlike the default loop, it cannot

This function reinitialises the kernel state for backends that have
one. Despite the name, you can call it anytime, but it makes most sense
after forking, in either the parent or child process (or both, but that
again makes little sense).

You I<must> call this function in the child process after forking if and
only if you want to use the event library in both processes. If you just
fork+exec, you don't have to call it.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function will fit
quite nicely into a call to C<pthread_atfork>:
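
The concrete call does not appear in this excerpt. Assuming the function
described above is C<ev_default_fork> (its C<=item> heading is outside
the compared region), a minimal sketch would be:

   #include <pthread.h>
   #include <ev.h>

   /* somewhere in your initialisation code: run ev_default_fork in the */
   /* child after every fork, so the default loop's kernel state gets   */
   /* reinitialised automatically (the function name is an assumption,  */
   /* as the item heading is elided from this excerpt)                  */
   pthread_atfork (0, 0, ev_default_fork);
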
This flags value could be used to implement alternative looping
constructs, but the C<prepare> and C<check> watchers provide a better and
more generic mechanism.

Here are the gory details of what ev_loop does:

   1. If there are no active watchers (reference count is zero), return.
   2. Queue and immediately call all prepare watchers.
   3. If we have been forked, recreate the kernel state.
   4. Update the kernel state with all outstanding changes.
   5. Update the "event loop time".
   6. Calculate for how long to block.
   7. Block the process, waiting for events.
   8. Update the "event loop time" and do time jump handling.
   9. Queue all outstanding timers.
   10. Queue all outstanding periodics.
   11. If no events are pending now, queue all idle watchers.
   12. Queue all check watchers.
   13. Call all queued watchers in reverse order (i.e. check watchers first).
   14. If ev_unloop has been called or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
       was used, return, otherwise continue with step #1.
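
Assuming C<loop> is an initialised event loop, the three calling patterns
implied by these flags might look like this (a sketch, not text from the
manual):

   /* the usual case: block and dispatch events until ev_unloop () */
   /* is called or no active watchers remain                       */
   ev_loop (loop, 0);

   /* wait for and handle one batch of events, then return */
   ev_loop (loop, EVLOOP_ONESHOT);

   /* handle any pending events, but never block */
   ev_loop (loop, EVLOOP_NONBLOCK);
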
=item ev_unloop (loop, how)

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.
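
For instance, a watcher callback that wants to shut the loop down could
call it like this (a sketch; C<shutdown_cb> is an illustrative name, not
from the text):

   static void
   shutdown_cb (struct ev_loop *loop, ev_timer *w, int revents)
   {
     /* make the innermost ev_loop () call return */
     ev_unloop (loop, EVUNLOOP_ONE);
   }
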
=item ev_ref (loop)

=item ev_unref (loop)

given time, and optionally repeating in regular intervals after that.

The timers are based on real time, that is, if you register an event that
times out after an hour and you reset your system clock to last year's
time, it will still time out after (roughly) an hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The relative timeouts are calculated relative to the C<ev_now ()>
time. This is usually the right thing as this timestamp refers to the time
of the event triggering whatever timeout you are modifying/starting. If
you suspect event processing to be delayed and you I<need> to base the
timeout on the current time, use something like this to adjust for it:

   ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);

The callback is guaranteed to be invoked only when its timeout has passed,
but if multiple timers become ready during the same loop iteration then
order of execution is undefined.
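
Putting this together, a hedged sketch of a one-shot timer that fires
once, roughly 5.5 seconds after being started (C<my_timeout_cb> and
C<my_timer> are illustrative names, C<loop> is assumed to be an
initialised loop, and C<ev_timer_start> is the usual companion call not
shown in this excerpt):

   static void
   my_timeout_cb (struct ev_loop *loop, ev_timer *w, int revents)
   {
     /* invoked once, about 5.5 seconds after ev_timer_start () */
   }

   /* in your setup code */
   ev_timer my_timer;
   ev_timer_init (&my_timer, my_timeout_cb, 5.5, 0.); /* after, repeat */
   ev_timer_start (loop, &my_timer);
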
=over 4

=item ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)

again).

They can also be used to implement vastly more complex timers, such as
triggering an event on each midnight, local time.
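
As a sketch of the simpler case (not part of the original text;
C<clock_cb> and C<hourly_tick> are illustrative names, C<loop> is assumed
to be an initialised loop, and C<ev_periodic_start> is the usual
companion call), a periodic watcher that triggers on every full hour of
wall-clock time, no matter how long the process has been running:

   static void
   clock_cb (struct ev_loop *loop, ev_periodic *w, int revents)
   {
     /* called whenever the system time is (roughly) a multiple of 3600 */
   }

   /* at = 0., interval = 3600., no reschedule callback */
   ev_periodic hourly_tick;
   ev_periodic_init (&hourly_tick, clock_cb, 0., 3600., 0);
   ev_periodic_start (loop, &hourly_tick);
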
As with timers, the callback is guaranteed to be invoked only when the
time (C<at>) has been passed, but if multiple periodic timers become ready
during the same loop iteration then order of execution is undefined.

=over 4

=item ev_periodic_init (ev_periodic *, callback, ev_tstamp at, ev_tstamp interval, reschedule_cb)

=item ev_periodic_set (ev_periodic *, ev_tstamp after, ev_tstamp repeat, reschedule_cb)

Lots of arguments, let's sort it out... There are basically three modes of
operation, and we will explain them from simplest to most complex:

=over 4

=item * absolute timer (interval = reschedule_cb = 0)
