
Comparing libev/ev.pod (file contents):
Revision 1.352 by root, Mon Jan 10 14:30:15 2011 UTC vs.
Revision 1.381 by root, Sat Aug 13 17:41:14 2011 UTC

58 ev_timer_start (loop, &timeout_watcher); 58 ev_timer_start (loop, &timeout_watcher);
59 59
60 // now wait for events to arrive 60 // now wait for events to arrive
61 ev_run (loop, 0); 61 ev_run (loop, 0);
62 62
63 // unloop was called, so exit 63 // break was called, so exit
64 return 0; 64 return 0;
65 } 65 }
66 66
67=head1 ABOUT THIS DOCUMENT 67=head1 ABOUT THIS DOCUMENT
68 68
178you actually want to know. Also interesting is the combination of 178you actually want to know. Also interesting is the combination of
179C<ev_now_update> and C<ev_now>. 179C<ev_now_update> and C<ev_now>.
180 180
181=item ev_sleep (ev_tstamp interval) 181=item ev_sleep (ev_tstamp interval)
182 182
183Sleep for the given interval: The current thread will be blocked until 183Sleep for the given interval: The current thread will be blocked
184either it is interrupted or the given time interval has passed. Basically 184until either it is interrupted or the given time interval has
185passed (approximately - it might return a bit earlier even if not
186interrupted). Returns immediately if C<< interval <= 0 >>.
187
185this is a sub-second-resolution C<sleep ()>. 188Basically this is a sub-second-resolution C<sleep ()>.
189
190The range of the C<interval> is limited - libev only guarantees to work
191with sleep times of up to one day (C<< interval <= 86400 >>).
186 192
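For illustration, a minimal sketch (the retry loop and C<try_connect> are
made-up placeholders, not part of libev):

   /* throttle a hypothetical retry loop to roughly ten attempts per second */
   while (!try_connect ())
     ev_sleep (0.1);
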
187=item int ev_version_major () 193=item int ev_version_major ()
188 194
189=item int ev_version_minor () 195=item int ev_version_minor ()
190 196
435example) that can't properly initialise their signal masks. 441example) that can't properly initialise their signal masks.
436 442
437=item C<EVFLAG_NOSIGMASK> 443=item C<EVFLAG_NOSIGMASK>
438 444
439When this flag is specified, then libev will avoid modifying the signal 445When this flag is specified, then libev will avoid modifying the signal
440mask. Specifically, this means you ahve to make sure signals are unblocked 446mask. Specifically, this means you have to make sure signals are unblocked
441when you want to receive them. 447when you want to receive them.
442 448
443This behaviour is useful when you want to do your own signal handling, or 449This behaviour is useful when you want to do your own signal handling, or
444want to handle signals only in specific threads and want to avoid libev 450want to handle signals only in specific threads and want to avoid libev
445unblocking the signals. 451unblocking the signals.
452
453It's also required by POSIX in a threaded program, as libev calls
454C<sigprocmask>, whose behaviour is officially unspecified.
446 455
447This flag's behaviour will become the default in future versions of libev. 456This flag's behaviour will become the default in future versions of libev.
448 457
449=item C<EVBACKEND_SELECT> (value 1, portable select backend) 458=item C<EVBACKEND_SELECT> (value 1, portable select backend)
450 459
480=item C<EVBACKEND_EPOLL> (value 4, Linux) 489=item C<EVBACKEND_EPOLL> (value 4, Linux)
481 490
482Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9 491Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9
483kernels). 492kernels).
484 493
485For few fds, this backend is a bit slower than poll and select, 494For few fds, this backend is a bit slower than poll and select, but
486but it scales phenomenally better. While poll and select usually scale 495it scales phenomenally better. While poll and select usually scale like
487like O(total_fds) where n is the total number of fds (or the highest fd), 496O(total_fds) where total_fds is the total number of fds (or the highest
488epoll scales either O(1) or O(active_fds). 497fd), epoll scales either O(1) or O(active_fds).
489 498
490The epoll mechanism deserves honorable mention as the most misdesigned 499The epoll mechanism deserves honorable mention as the most misdesigned
491of the more advanced event mechanisms: mere annoyances include silently 500of the more advanced event mechanisms: mere annoyances include silently
492dropping file descriptors, requiring a system call per change per file 501dropping file descriptors, requiring a system call per change per file
493descriptor (and unnecessary guessing of parameters), problems with dup, 502descriptor (and unnecessary guessing of parameters), problems with dup,
4960.1ms) and so on. The biggest issue is fork races, however - if a program 5050.1ms) and so on. The biggest issue is fork races, however - if a program
497forks then I<both> parent and child process have to recreate the epoll 506forks then I<both> parent and child process have to recreate the epoll
498set, which can take considerable time (one syscall per file descriptor) 507set, which can take considerable time (one syscall per file descriptor)
499and is of course hard to detect. 508and is of course hard to detect.
500 509
501Epoll is also notoriously buggy - embedding epoll fds I<should> work, but 510Epoll is also notoriously buggy - embedding epoll fds I<should> work,
502of course I<doesn't>, and epoll just loves to report events for totally 511but of course I<doesn't>, and epoll just loves to report events for
503I<different> file descriptors (even already closed ones, so one cannot 512totally I<different> file descriptors (even already closed ones, so
504even remove them from the set) than registered in the set (especially 513one cannot even remove them from the set) than registered in the set
505on SMP systems). Libev tries to counter these spurious notifications by 514(especially on SMP systems). Libev tries to counter these spurious
506employing an additional generation counter and comparing that against the 515notifications by employing an additional generation counter and comparing
507events to filter out spurious ones, recreating the set when required. Last 516that against the events to filter out spurious ones, recreating the set
517when required. Epoll also erroneously rounds down timeouts, but gives you
518no way to know when and by how much, so sometimes you have to busy-wait
519because epoll returns immediately despite a nonzero timeout. And last
508not least, it also refuses to work with some file descriptors which work 520not least, it also refuses to work with some file descriptors which work
509perfectly fine with C<select> (files, many character devices...). 521perfectly fine with C<select> (files, many character devices...).
510 522
511Epoll is truly the train wreck analog among event poll mechanisms. 523Epoll is truly the train wreck among event poll mechanisms, a frankenpoll,
524cobbled together in a hurry, no thought to design or interaction with
525others. Oh, the pain, will it ever stop...
512 526
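If these quirks are a problem for a particular program, one way out is
simply to exclude epoll from the backend set when creating a loop - a
sketch using C<ev_loop_new>:

   /* prefer the other recommended backends over epoll */
   struct ev_loop *loop
     = ev_loop_new (ev_recommended_backends () & ~EVBACKEND_EPOLL);
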
513While stopping, setting and starting an I/O watcher in the same iteration 527While stopping, setting and starting an I/O watcher in the same iteration
514will result in some caching, there is still a system call per such 528will result in some caching, there is still a system call per such
515incident (because the same I<file descriptor> could point to a different 529incident (because the same I<file descriptor> could point to a different
516I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed 530I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed
594among the OS-specific backends (I vastly prefer correctness over speed 608among the OS-specific backends (I vastly prefer correctness over speed
595hacks). 609hacks).
596 610
597On the negative side, the interface is I<bizarre> - so bizarre that 611On the negative side, the interface is I<bizarre> - so bizarre that
598even sun itself gets it wrong in their code examples: The event polling 612even sun itself gets it wrong in their code examples: The event polling
599function sometimes returning events to the caller even though an error 613function sometimes returns events to the caller even though an error
600occured, but with no indication whether it has done so or not (yes, it's 614occurred, but with no indication whether it has done so or not (yes, it's
601even documented that way) - deadly for edge-triggered interfaces where 615even documented that way) - deadly for edge-triggered interfaces where you
602you absolutely have to know whether an event occured or not because you 616absolutely have to know whether an event occurred or not because you have
603have to re-arm the watcher. 617to re-arm the watcher.
604 618
605Fortunately libev seems to be able to work around these idiocies. 619Fortunately libev seems to be able to work around these idiocies.
606 620
607This backend maps C<EV_READ> and C<EV_WRITE> in the same way as 621This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
608C<EVBACKEND_POLL>. 622C<EVBACKEND_POLL>.
820This is useful if you are waiting for some external event in conjunction 834This is useful if you are waiting for some external event in conjunction
821with something not expressible using other libev watchers (i.e. "roll your 835with something not expressible using other libev watchers (i.e. "roll your
822own C<ev_run>"). However, a pair of C<ev_prepare>/C<ev_check> watchers is 836own C<ev_run>"). However, a pair of C<ev_prepare>/C<ev_check> watchers is
823usually a better approach for this kind of thing. 837usually a better approach for this kind of thing.
824 838
825Here are the gory details of what C<ev_run> does: 839Here are the gory details of what C<ev_run> does (this is for your
840understanding, not a guarantee that things will work exactly like this in
841future versions):
826 842
827 - Increment loop depth. 843 - Increment loop depth.
828 - Reset the ev_break status. 844 - Reset the ev_break status.
829 - Before the first iteration, call any pending watchers. 845 - Before the first iteration, call any pending watchers.
830 LOOP: 846 LOOP:
863anymore. 879anymore.
864 880
865 ... queue jobs here, make sure they register event watchers as long 881 ... queue jobs here, make sure they register event watchers as long
866 ... as they still have work to do (even an idle watcher will do..) 882 ... as they still have work to do (even an idle watcher will do..)
867 ev_run (my_loop, 0); 883 ev_run (my_loop, 0);
868 ... jobs done or somebody called unloop. yeah! 884 ... jobs done or somebody called break. yeah!
869 885
870=item ev_break (loop, how) 886=item ev_break (loop, how)
871 887
872Can be used to make a call to C<ev_run> return early (but only after it 888Can be used to make a call to C<ev_run> return early (but only after it
873has processed all outstanding events). The C<how> argument must be either 889has processed all outstanding events). The C<how> argument must be either
936overhead for the actual polling but can deliver many events at once. 952overhead for the actual polling but can deliver many events at once.
937 953
938By setting a higher I<io collect interval> you allow libev to spend more 954By setting a higher I<io collect interval> you allow libev to spend more
939time collecting I/O events, so you can handle more events per iteration, 955time collecting I/O events, so you can handle more events per iteration,
940at the cost of increasing latency. Timeouts (both C<ev_periodic> and 956at the cost of increasing latency. Timeouts (both C<ev_periodic> and
941C<ev_timer>) will be not affected. Setting this to a non-null value will 957C<ev_timer>) will not be affected. Setting this to a non-null value will
942introduce an additional C<ev_sleep ()> call into most loop iterations. The 958introduce an additional C<ev_sleep ()> call into most loop iterations. The
943sleep time ensures that libev will not poll for I/O events more often than 959sleep time ensures that libev will not poll for I/O events more often than
944once per this interval, on average. 960once per this interval, on average (as long as the host time resolution is
961good enough).
945 962
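For example, a server that prefers throughput over latency might trade up
to ten milliseconds of extra latency for larger event batches (a sketch,
the value is arbitrary):

   /* allow libev to wait up to 10ms to collect more I/O events per iteration */
   ev_set_io_collect_interval (loop, 0.01);
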
946Likewise, by setting a higher I<timeout collect interval> you allow libev 963Likewise, by setting a higher I<timeout collect interval> you allow libev
947to spend more time collecting timeouts, at the expense of increased 964to spend more time collecting timeouts, at the expense of increased
948latency/jitter/inexactness (the watcher callback will be called 965latency/jitter/inexactness (the watcher callback will be called
949later). C<ev_io> watchers will not be affected. Setting this to a non-null 966later). C<ev_io> watchers will not be affected. Setting this to a non-null
1355See also C<ev_feed_fd_event> and C<ev_feed_signal_event> for related 1372See also C<ev_feed_fd_event> and C<ev_feed_signal_event> for related
1356functions that do not need a watcher. 1373functions that do not need a watcher.
1357 1374
1358=back 1375=back
1359 1376
1360=head2 ASSOCIATING CUSTOM DATA WITH A WATCHER 1377See also the L<ASSOCIATING CUSTOM DATA WITH A WATCHER> and L<BUILDING YOUR
1361 1378OWN COMPOSITE WATCHERS> idioms.
1362Each watcher has, by default, a member C<void *data> that you can change
1363and read at any time: libev will completely ignore it. This can be used
1364to associate arbitrary data with your watcher. If you need more data and
1365don't want to allocate memory and store a pointer to it in that data
1366member, you can also "subclass" the watcher type and provide your own
1367data:
1368
1369 struct my_io
1370 {
1371 ev_io io;
1372 int otherfd;
1373 void *somedata;
1374 struct whatever *mostinteresting;
1375 };
1376
1377 ...
1378 struct my_io w;
1379 ev_io_init (&w.io, my_cb, fd, EV_READ);
1380
1381And since your callback will be called with a pointer to the watcher, you
1382can cast it back to your own type:
1383
1384 static void my_cb (struct ev_loop *loop, ev_io *w_, int revents)
1385 {
1386 struct my_io *w = (struct my_io *)w_;
1387 ...
1388 }
1389
1390More interesting and less C-conformant ways of casting your callback type
1391instead have been omitted.
1392
1393Another common scenario is to use some data structure with multiple
1394embedded watchers:
1395
1396 struct my_biggy
1397 {
1398 int some_data;
1399 ev_timer t1;
1400 ev_timer t2;
1401 }
1402
1403In this case getting the pointer to C<my_biggy> is a bit more
1404complicated: Either you store the address of your C<my_biggy> struct
1405in the C<data> member of the watcher (for woozies), or you need to use
1406some pointer arithmetic using C<offsetof> inside your watchers (for real
1407programmers):
1408
1409 #include <stddef.h>
1410
1411 static void
1412 t1_cb (EV_P_ ev_timer *w, int revents)
1413 {
1414 struct my_biggy big = (struct my_biggy *)
1415 (((char *)w) - offsetof (struct my_biggy, t1));
1416 }
1417
1418 static void
1419 t2_cb (EV_P_ ev_timer *w, int revents)
1420 {
1421 struct my_biggy big = (struct my_biggy *)
1422 (((char *)w) - offsetof (struct my_biggy, t2));
1423 }
1424 1379
1425=head2 WATCHER STATES 1380=head2 WATCHER STATES
1426 1381
1427There are various watcher states mentioned throughout this manual - 1382There are various watcher states mentioned throughout this manual -
1428active, pending and so on. In this section these states and the rules to 1383active, pending and so on. In this section these states and the rules to
1431 1386
1432=over 4 1387=over 4
1433 1388
1434=item initialised 1389=item initialised
1435 1390
1436Before a watcher can be registered with the event looop it has to be 1391Before a watcher can be registered with the event loop it has to be
1437initialised. This can be done with a call to C<ev_TYPE_init>, or calls to 1392initialised. This can be done with a call to C<ev_TYPE_init>, or calls to
1438C<ev_init> followed by the watcher-specific C<ev_TYPE_set> function. 1393C<ev_init> followed by the watcher-specific C<ev_TYPE_set> function.
1439 1394
1440In this state it is simply some block of memory that is suitable for use 1395In this state it is simply some block of memory that is suitable for
1441in an event loop. It can be moved around, freed, reused etc. at will. 1396use in an event loop. It can be moved around, freed, reused etc. at
1397will - as long as you either keep the memory contents intact, or call
1398C<ev_TYPE_init> again.
1442 1399
1443=item started/running/active 1400=item started/running/active
1444 1401
1445Once a watcher has been started with a call to C<ev_TYPE_start> it becomes 1402Once a watcher has been started with a call to C<ev_TYPE_start> it becomes
1446property of the event loop, and is actively waiting for events. While in 1403property of the event loop, and is actively waiting for events. While in
1474latter will clear any pending state the watcher might be in, regardless 1431latter will clear any pending state the watcher might be in, regardless
1475of whether it was active or not, so stopping a watcher explicitly before 1432of whether it was active or not, so stopping a watcher explicitly before
1476freeing it is often a good idea. 1433freeing it is often a good idea.
1477 1434
1478While stopped (and not pending) the watcher is essentially in the 1435While stopped (and not pending) the watcher is essentially in the
1479initialised state, that is it can be reused, moved, modified in any way 1436initialised state, that is, it can be reused, moved, modified in any way
1480you wish. 1437you wish (but when you trash the memory block, you need to C<ev_TYPE_init>
1438it again).
1481 1439
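Putting the states together, a typical watcher lifecycle looks like this
(a sketch, C<loop> and C<my_cb> are placeholders):

   ev_timer w;
   ev_timer_init (&w, my_cb, 5., 0.);   /* initialised, not yet known to the loop */
   ev_timer_start (loop, &w);           /* started/active - now owned by the loop */
   ...
   ev_timer_stop (loop, &w);            /* stopped - may be reused or freed again */
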
1482=back 1440=back
1483 1441
1484=head2 WATCHER PRIORITY MODELS 1442=head2 WATCHER PRIORITY MODELS
1485 1443
1614In general you can register as many read and/or write event watchers per 1572In general you can register as many read and/or write event watchers per
1615fd as you want (as long as you don't confuse yourself). Setting all file 1573fd as you want (as long as you don't confuse yourself). Setting all file
1616descriptors to non-blocking mode is also usually a good idea (but not 1574descriptors to non-blocking mode is also usually a good idea (but not
1617required if you know what you are doing). 1575required if you know what you are doing).
1618 1576
1619If you cannot use non-blocking mode, then force the use of a
1620known-to-be-good backend (at the time of this writing, this includes only
1621C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
1622descriptors for which non-blocking operation makes no sense (such as
1623files) - libev doesn't guarantee any specific behaviour in that case.
1624
1625Another thing you have to watch out for is that it is quite easy to 1577Another thing you have to watch out for is that it is quite easy to
1626receive "spurious" readiness notifications, that is your callback might 1578receive "spurious" readiness notifications, that is, your callback might
1627be called with C<EV_READ> but a subsequent C<read>(2) will actually block 1579be called with C<EV_READ> but a subsequent C<read>(2) will actually block
1628because there is no data. Not only are some backends known to create a 1580because there is no data. It is very easy to get into this situation even
1629lot of those (for example Solaris ports), it is very easy to get into 1581with a relatively standard program structure. Thus it is best to always
1630this situation even with a relatively standard program structure. Thus 1582use non-blocking I/O: An extra C<read>(2) returning C<EAGAIN> is far
1631it is best to always use non-blocking I/O: An extra C<read>(2) returning
1632C<EAGAIN> is far preferable to a program hanging until some data arrives. 1583preferable to a program hanging until some data arrives.
1633 1584
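Putting a descriptor into non-blocking mode is a one-liner - a sketch
(assumes F<fcntl.h> is included and C<fd> is the descriptor in question):

   /* put the fd into non-blocking mode before registering it with libev */
   fcntl (fd, F_SETFL, fcntl (fd, F_GETFL, 0) | O_NONBLOCK);
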
1634If you cannot run the fd in non-blocking mode (for example you should 1585If you cannot run the fd in non-blocking mode (for example you should
1635not play around with an Xlib connection), then you have to separately 1586not play around with an Xlib connection), then you have to separately
1636re-test whether a file descriptor is really ready with a known-to-be good 1587re-test whether a file descriptor is really ready with a known-to-be good
1637interface such as poll (fortunately in our Xlib example, Xlib already 1588interface such as poll (fortunately in the case of Xlib, it already does
1638does this on its own, so it's quite safe to use). Some people additionally 1589this on its own, so it's quite safe to use). Some people additionally
1639use C<SIGALRM> and an interval timer, just to be sure you won't block 1590use C<SIGALRM> and an interval timer, just to be sure you won't block
1640indefinitely. 1591indefinitely.
1641 1592
1642But really, best use non-blocking mode. 1593But really, best use non-blocking mode.
1643 1594
1671 1622
1672There is no workaround possible except not registering events 1623There is no workaround possible except not registering events
1673for potentially C<dup ()>'ed file descriptors, or to resort to 1624for potentially C<dup ()>'ed file descriptors, or to resort to
1674C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>. 1625C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
1675 1626
1627=head3 The special problem of files
1628
1629Many people try to use C<select> (or libev) on file descriptors
1630representing files, and expect it to become ready when their program
1631doesn't block on disk accesses (which can take a long time on their own).
1632
1633However, this cannot ever work in the "expected" way - you get a readiness
1634notification as soon as the kernel knows whether and how much data is
1635there, and in the case of open files, that's always the case, so you
1636always get a readiness notification instantly, and your read (or possibly
1637write) will still block on the disk I/O.
1638
1639Another way to view it is that in the case of sockets, pipes, character
1640devices and so on, there is another party (the sender) that delivers data
1641on its own, but in the case of files, there is no such thing: the disk
1642will not send data on its own, simply because it doesn't know what you
1643wish to read - you would first have to request some data.
1644
1645Since files are typically not-so-well supported by advanced notification
1646mechanisms, libev tries hard to emulate POSIX behaviour with respect
1647to files, even though you should not use it. The reason for this is
1648convenience: sometimes you want to watch STDIN or STDOUT, which is
1649usually a tty, often a pipe, but also sometimes files or special devices
1650(for example, C<epoll> on Linux works with F</dev/random> but not with
1651F</dev/urandom>), and even though the file might better be served with
1652asynchronous I/O instead of with non-blocking I/O, it is still useful when
1653it "just works" instead of freezing.
1654
1655So avoid file descriptors pointing to files when you know it (e.g. use
1656libeio), but use them when it is convenient, e.g. for STDIN/STDOUT, or
1657when you rarely read from a file instead of from a socket, and want to
1658reuse the same code path.
1659
1676=head3 The special problem of fork 1660=head3 The special problem of fork
1677 1661
1678Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit 1662Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
1679useless behaviour. Libev fully supports fork, but needs to be told about 1663useless behaviour. Libev fully supports fork, but needs to be told about
1680it in the child. 1664it in the child if you want to continue to use it in the child.
1681 1665
1682To support fork in your programs, you either have to call 1666To support fork in your child processes, you have to call C<ev_loop_fork
1683C<ev_default_fork ()> or C<ev_loop_fork ()> after a fork in the child, 1667()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
1684enable C<EVFLAG_FORKCHECK>, or resort to C<EVBACKEND_SELECT> or 1668C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
1685C<EVBACKEND_POLL>.
1686 1669
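A minimal sketch of the first variant (assuming C<loop> is the loop the
child keeps using):

   pid_t pid = fork ();

   if (pid == 0)
     {
       /* in the child: tell libev about the fork before using the loop again */
       ev_loop_fork (loop);
     }
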
1687=head3 The special problem of SIGPIPE 1670=head3 The special problem of SIGPIPE
1688 1671
1689While not really specific to libev, it is easy to forget about C<SIGPIPE>: 1672While not really specific to libev, it is easy to forget about C<SIGPIPE>:
1690when writing to a pipe whose other end has been closed, your program gets 1673when writing to a pipe whose other end has been closed, your program gets
1788detecting time jumps is hard, and some inaccuracies are unavoidable (the 1771detecting time jumps is hard, and some inaccuracies are unavoidable (the
1789monotonic clock option helps a lot here). 1772monotonic clock option helps a lot here).
1790 1773
1791The callback is guaranteed to be invoked only I<after> its timeout has 1774The callback is guaranteed to be invoked only I<after> its timeout has
1792passed (not I<at>, so on systems with very low-resolution clocks this 1775passed (not I<at>, so on systems with very low-resolution clocks this
1793might introduce a small delay). If multiple timers become ready during the 1776might introduce a small delay, see "the special problem of being too
1777early", below). If multiple timers become ready during the same loop
1794same loop iteration then the ones with earlier time-out values are invoked 1778iteration then the ones with earlier time-out values are invoked before
1795before ones of the same priority with later time-out values (but this is 1779ones of the same priority with later time-out values (but this is no
1796no longer true when a callback calls C<ev_run> recursively). 1780longer true when a callback calls C<ev_run> recursively).
1797 1781
1798=head3 Be smart about timeouts 1782=head3 Be smart about timeouts
1799 1783
1800Many real-world problems involve some kind of timeout, usually for error 1784Many real-world problems involve some kind of timeout, usually for error
1801recovery. A typical example is an HTTP request - if the other side hangs, 1785recovery. A typical example is an HTTP request - if the other side hangs,
1968Method #1 is almost always a bad idea, and buys you nothing. Method #4 is 1952Method #1 is almost always a bad idea, and buys you nothing. Method #4 is
1969rather complicated, but extremely efficient, something that really pays 1953rather complicated, but extremely efficient, something that really pays
1970off after the first million or so of active timers, i.e. it's usually 1954off after the first million or so of active timers, i.e. it's usually
1971overkill :) 1955overkill :)
1972 1956
1957=head3 The special problem of being too early
1958
1959If you ask a timer to call your callback after three seconds, then
1960you expect it to be invoked after three seconds - but of course, this
1961cannot be guaranteed to infinite precision. Less obviously, it cannot be
1962guaranteed to any precision by libev - imagine somebody suspending the
1963process with a STOP signal for a few hours, for example.
1964
1965So, libev tries to invoke your callback as soon as possible I<after> the
1966delay has occurred, but cannot guarantee this.
1967
1968A less obvious failure mode is calling your callback too early: many event
1969loops compare timestamps using "elapsed delay >= requested delay", but
1970this can cause your callback to be invoked much earlier than you would
1971expect.
1972
1973To see why, imagine a system with a clock that only offers full second
1974resolution (think windows if you can't come up with a broken enough OS
1975yourself). If you schedule a one-second timer at the time 500.9, then the
1976event loop will schedule your timeout to elapse at a system time of 500
1977(500.9 truncated to the resolution) + 1, or 501.
1978
1979If an event library looks at the timeout 0.1s later, it will see "501 >=
1980501" and invoke the callback 0.1s after it was started, even though a
1981one-second delay was requested - this is being "too early", despite best
1982intentions.
1983
1984This is the reason why libev will never invoke the callback if the elapsed
1985delay equals the requested delay, but only when the elapsed delay is
1986larger than the requested delay. In the example above, libev would only invoke
1987the callback at system time 502, or 1.1s after the timer was started.
1988
1989So, while libev cannot guarantee that your callback will be invoked
1990exactly when requested, it I<can> and I<does> guarantee that the requested
1991delay has actually elapsed, or in other words, it always errs on the "too
1992late" side of things.
1993
1973=head3 The special problem of time updates 1994=head3 The special problem of time updates
1974 1995
1975Establishing the current time is a costly operation (it usually takes at 1996Establishing the current time is a costly operation (it usually takes at
1976least two system calls): EV therefore updates its idea of the current 1997least two system calls): EV therefore updates its idea of the current
1977time only before and after C<ev_run> collects new events, which causes a 1998time only before and after C<ev_run> collects new events, which causes a
1987 ev_timer_set (&timer, after + ev_now () - ev_time (), 0.); 2008 ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);
1988 2009
1989If the event loop is suspended for a long time, you can also force an 2010If the event loop is suspended for a long time, you can also force an
1990update of the time returned by C<ev_now ()> by calling C<ev_now_update 2011update of the time returned by C<ev_now ()> by calling C<ev_now_update
1991()>. 2012()>.
2013
2014=head3 The special problem of unsynchronised clocks
2015
2016Modern systems have a variety of clocks - libev itself uses the normal
2017"wall clock" clock and, if available, the monotonic clock (to avoid time
2018jumps).
2019
2020Neither of these clocks is synchronised with each other or any other clock
2021on the system, so C<ev_time ()> might return a considerably different time
2022than C<gettimeofday ()> or C<time ()>. On a GNU/Linux system, for example,
2023a call to C<gettimeofday> might return a second count that is one higher
2024than a directly following call to C<time>.
2025
2026The moral of this is to only compare libev-related timestamps with
2027C<ev_time ()> and C<ev_now ()>, at least if you want better precision than
2028a second or so.
2029
2030One more problem arises due to this lack of synchronisation: if libev uses
2031the system monotonic clock and you compare timestamps from C<ev_time>
2032or C<ev_now> from when you started your timer and when your callback is
2033invoked, you will find that sometimes the callback is a bit "early".
2034
2035This is because C<ev_timer>s work in real time, not wall clock time, so
2036libev makes sure your callback is not invoked before the delay happened,
2037I<measured according to the real time>, not the system clock.
2038
2039If your timeouts are based on a physical timescale (e.g. "time out this
2040connection after 100 seconds") then this shouldn't bother you as it is
2041exactly the right behaviour.
2042
2043If you want to compare wall clock/system timestamps to your timers, then
2044you need to use C<ev_periodic>s, as these are based on the wall clock
2045time, where your comparisons will always generate correct results.
1992 2046
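In practice this simply means measuring durations with libev's own clock,
for example (a sketch):

   /* compare like with like: use ev_now () for both ends of the measurement */
   ev_tstamp start = ev_now (loop);
   ...
   ev_tstamp elapsed = ev_now (loop) - start;
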
1993=head3 The special problems of suspended animation 2047=head3 The special problems of suspended animation
1994 2048
1995When you leave the server world it is quite customary to hit machines that 2049When you leave the server world it is quite customary to hit machines that
1996can suspend/hibernate - what happens to the clocks during such a suspend? 2050can suspend/hibernate - what happens to the clocks during such a suspend?
2040keep up with the timer (because it takes longer than those 10 seconds to 2094keep up with the timer (because it takes longer than those 10 seconds to
2041do stuff) the timer will not fire more than once per event loop iteration. 2095do stuff) the timer will not fire more than once per event loop iteration.
2042 2096
2043=item ev_timer_again (loop, ev_timer *) 2097=item ev_timer_again (loop, ev_timer *)
2044 2098
2045This will act as if the timer timed out and restart it again if it is 2099This will act as if the timer timed out and restarts it again if it is
2046repeating. The exact semantics are: 2100repeating. The exact semantics are:
2047 2101
2048If the timer is pending, its pending status is cleared. 2102If the timer is pending, its pending status is cleared.
2049 2103
2050If the timer is started but non-repeating, stop it (as if it timed out). 2104If the timer is started but non-repeating, stop it (as if it timed out).
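
This makes C<ev_timer_again> convenient for the usual inactivity-timeout
pattern - a sketch (C<timeout_cb> is a placeholder):

   ev_timer timeout;
   ev_timer_init (&timeout, timeout_cb, 0., 60.);
   ev_timer_again (loop, &timeout);   /* arm a 60 second inactivity timeout */
   ...
   /* on every bit of activity, simply push the timeout into the future: */
   ev_timer_again (loop, &timeout);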
2180 2234
2181Another way to think about it (for the mathematically inclined) is that 2235Another way to think about it (for the mathematically inclined) is that
2182C<ev_periodic> will try to run the callback in this mode at the next possible 2236C<ev_periodic> will try to run the callback in this mode at the next possible
2183time where C<time = offset (mod interval)>, regardless of any time jumps. 2237time where C<time = offset (mod interval)>, regardless of any time jumps.
2184 2238
2185For numerical stability it is preferable that the C<offset> value is near 2239The C<interval> I<MUST> be positive, and for numerical stability, the
2186C<ev_now ()> (the current time), but there is no range requirement for 2240interval value should be higher than C<1/8192> (which is around 100
2187this value, and in fact is often specified as zero. 2241microseconds) and C<offset> should be higher than C<0> and should have
2242at most a similar magnitude as the current time (say, within a factor of
2243ten). Typical values for offset are, in fact, C<0> or something between
2244C<0> and C<interval>, which is also the recommended range.
2188 2245
2189Note also that there is an upper limit to how often a timer can fire (CPU 2246Note also that there is an upper limit to how often a timer can fire (CPU
2190speed for example), so if C<interval> is very small then timing stability 2247speed for example), so if C<interval> is very small then timing stability
2191will of course deteriorate. Libev itself tries to be exact to be about one 2248will of course deteriorate. Libev itself tries to be exact to be about one
2192millisecond (if the OS supports it and the machine is fast enough). 2249millisecond (if the OS supports it and the machine is fast enough).
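
For instance, to run a callback whenever the wall clock reaches a full
hour (a sketch, C<clock_cb> is a placeholder):

   ev_periodic hourly;
   ev_periodic_init (&hourly, clock_cb, 0., 3600., 0);
   ev_periodic_start (loop, &hourly);
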
2335=head3 The special problem of inheritance over fork/execve/pthread_create 2392=head3 The special problem of inheritance over fork/execve/pthread_create
2336 2393
2337Both the signal mask (C<sigprocmask>) and the signal disposition 2394Both the signal mask (C<sigprocmask>) and the signal disposition
2338(C<sigaction>) are unspecified after starting a signal watcher (and after 2395(C<sigaction>) are unspecified after starting a signal watcher (and after
2339stopping it again), that is, libev might or might not block the signal, 2396stopping it again), that is, libev might or might not block the signal,
2340and might or might not set or restore the installed signal handler. 2397and might or might not set or restore the installed signal handler (but
2398see C<EVFLAG_NOSIGMASK>).
2341 2399
2342While this does not matter for the signal disposition (libev never 2400While this does not matter for the signal disposition (libev never
2343sets signals to C<SIG_IGN>, so handlers will be reset to C<SIG_DFL> on 2401sets signals to C<SIG_IGN>, so handlers will be reset to C<SIG_DFL> on
2344C<execve>), this matters for the signal mask: many programs do not expect 2402C<execve>), this matters for the signal mask: many programs do not expect
2345certain signals to be blocked. 2403certain signals to be blocked.
3216 atexit (program_exits); 3274 atexit (program_exits);
3217 3275
3218 3276
3219=head2 C<ev_async> - how to wake up an event loop 3277=head2 C<ev_async> - how to wake up an event loop
3220 3278
3221In general, you cannot use an C<ev_run> from multiple threads or other 3279In general, you cannot use an C<ev_loop> from multiple threads or other
3222asynchronous sources such as signal handlers (as opposed to multiple event 3280asynchronous sources such as signal handlers (as opposed to multiple event
3223loops - those are of course safe to use in different threads). 3281loops - those are of course safe to use in different threads).
3224 3282
3225Sometimes, however, you need to wake up an event loop you do not control, 3283Sometimes, however, you need to wake up an event loop you do not control,
3226for example because it belongs to another thread. This is what C<ev_async> 3284for example because it belongs to another thread. This is what C<ev_async>
3233C<ev_async_sent> calls). In fact, you could use signal watchers as a kind 3291C<ev_async_sent> calls). In fact, you could use signal watchers as a kind
3234of "global async watchers" by using a watcher on an otherwise unused 3292of "global async watchers" by using a watcher on an otherwise unused
3235signal, and C<ev_feed_signal> to signal this watcher from another thread, 3293signal, and C<ev_feed_signal> to signal this watcher from another thread,
3236even without knowing which loop owns the signal. 3294even without knowing which loop owns the signal.
3237 3295
3238Unlike C<ev_signal> watchers, C<ev_async> works with any event loop, not
3239just the default loop.
3240
3241=head3 Queueing 3296=head3 Queueing
3242 3297
3243C<ev_async> does not support queueing of data in any way. The reason 3298C<ev_async> does not support queueing of data in any way. The reason
3244is that the author does not know of a simple (or any) algorithm for a 3299is that the author does not know of a simple (or any) algorithm for a
3245multiple-writer-single-reader queue that works in all cases and doesn't 3300multiple-writer-single-reader queue that works in all cases and doesn't
3336trust me. 3391trust me.
3337 3392
3338=item ev_async_send (loop, ev_async *) 3393=item ev_async_send (loop, ev_async *)
3339 3394
3340Sends/signals/activates the given C<ev_async> watcher, that is, feeds 3395Sends/signals/activates the given C<ev_async> watcher, that is, feeds
3341an C<EV_ASYNC> event on the watcher into the event loop. Unlike 3396an C<EV_ASYNC> event on the watcher into the event loop, and instantly
3397returns.
3398
3342C<ev_feed_event>, this call is safe to do from other threads, signal or 3399Unlike C<ev_feed_event>, this call is safe to do from other threads,
3343similar contexts (see the discussion of C<EV_ATOMIC_T> in the embedding 3400signal or similar contexts (see the discussion of C<EV_ATOMIC_T> in the
3344section below on what exactly this means). 3401embedding section below on what exactly this means).
3345 3402
3346Note that, as with other watchers in libev, multiple events might get 3403Note that, as with other watchers in libev, multiple events might get
3347compressed into a single callback invocation (another way to look at this 3404compressed into a single callback invocation (another way to look at
3348is that C<ev_async> watchers are level-triggered, set on C<ev_async_send>, 3405this is that C<ev_async> watchers are level-triggered: they are set on
3349reset when the event loop detects that). 3406C<ev_async_send>, reset when the event loop detects that).
3350 3407
3351This call incurs the overhead of a system call only once per event loop 3408This call incurs the overhead of at most one extra system call per event
3352iteration, so while the overhead might be noticeable, it doesn't apply to 3409loop iteration, if the event loop is blocked, and no syscall at all if
3353repeated calls to C<ev_async_send> for the same event loop. 3410the event loop (or your program) is processing events. That means that
3411repeated calls are basically free (there is no need to avoid calls for
3412performance reasons) and that the overhead becomes smaller (typically
3413zero) under load.
3354 3414
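The usual pattern is therefore to start the watcher once in the loop's
thread and to signal it freely from elsewhere - a sketch (C<async_w> and
C<async_cb> are placeholders):

   /* in the thread running the loop, once: */
   ev_async_init (&async_w, async_cb);
   ev_async_start (loop, &async_w);

   /* from any other thread (or a signal handler): */
   ev_async_send (loop, &async_w);
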
3355=item bool = ev_async_pending (ev_async *) 3415=item bool = ev_async_pending (ev_async *)
3356 3416
3357Returns a non-zero value when C<ev_async_send> has been called on the 3417Returns a non-zero value when C<ev_async_send> has been called on the
3358watcher but the event has not yet been processed (or even noted) by the 3418watcher but the event has not yet been processed (or even noted) by the
3429 3489
3430This section explains some common idioms that are not immediately 3490This section explains some common idioms that are not immediately
3431obvious. Note that examples are sprinkled over the whole manual, and this 3491obvious. Note that examples are sprinkled over the whole manual, and this
3432section only contains stuff that wouldn't fit anywhere else. 3492section only contains stuff that wouldn't fit anywhere else.
3433 3493
3434=over 4 3494=head2 ASSOCIATING CUSTOM DATA WITH A WATCHER
3435 3495
3436=item Model/nested event loop invocations and exit conditions. 3496Each watcher has, by default, a C<void *data> member that you can read
3497or modify at any time: libev will completely ignore it. This can be used
3498to associate arbitrary data with your watcher. If you need more data and
3499don't want to allocate memory separately and store a pointer to it in that
3500data member, you can also "subclass" the watcher type and provide your own
3501data:
3502
3503 struct my_io
3504 {
3505 ev_io io;
3506 int otherfd;
3507 void *somedata;
3508 struct whatever *mostinteresting;
3509 };
3510
3511 ...
3512 struct my_io w;
3513 ev_io_init (&w.io, my_cb, fd, EV_READ);
3514
3515And since your callback will be called with a pointer to the watcher, you
3516can cast it back to your own type:
3517
3518 static void my_cb (struct ev_loop *loop, ev_io *w_, int revents)
3519 {
3520 struct my_io *w = (struct my_io *)w_;
3521 ...
3522 }
3523
3524More interesting and less C-conformant ways of casting your callback
3525function type instead have been omitted.
3526
3527=head2 BUILDING YOUR OWN COMPOSITE WATCHERS
3528
3529Another common scenario is to use some data structure with multiple
3530embedded watchers, in effect creating your own watcher that combines
3531multiple libev event sources into one "super-watcher":
3532
3533 struct my_biggy
3534 {
3535 int some_data;
3536 ev_timer t1;
3537 ev_timer t2;
3538 };
3539
3540In this case getting the pointer to C<my_biggy> is a bit more
3541complicated: Either you store the address of your C<my_biggy> struct in
3542the C<data> member of the watcher (for woozies or C++ coders), or you need
3543to use some pointer arithmetic using C<offsetof> inside your watchers (for
3544real programmers):
3545
3546 #include <stddef.h>
3547
3548 static void
3549 t1_cb (EV_P_ ev_timer *w, int revents)
3550 {
3551 struct my_biggy *big = (struct my_biggy *)
3552 (((char *)w) - offsetof (struct my_biggy, t1));
3553 }
3554
3555 static void
3556 t2_cb (EV_P_ ev_timer *w, int revents)
3557 {
3558 struct my_biggy *big = (struct my_biggy *)
3559 (((char *)w) - offsetof (struct my_biggy, t2));
3560 }
3561
3562=head2 MODEL/NESTED EVENT LOOP INVOCATIONS AND EXIT CONDITIONS
3437 3563
3438Often (especially in GUI toolkits) there are places where you have 3564Often (especially in GUI toolkits) there are places where you have
3439I<modal> interaction, which is most easily implemented by recursively 3565I<modal> interaction, which is most easily implemented by recursively
3440invoking C<ev_run>. 3566invoking C<ev_run>.
3441 3567
3470 exit_main_loop = 1; 3596 exit_main_loop = 1;
3471 3597
3472 // exit both 3598 // exit both
3473 exit_main_loop = exit_nested_loop = 1; 3599 exit_main_loop = exit_nested_loop = 1;
3474 3600
3475=back 3601=head2 THREAD LOCKING EXAMPLE
3602
3603Here is a fictitious example of how to run an event loop in a different
3604thread from where callbacks are being invoked and watchers are
3605created/added/removed.
3606
3607For a real-world example, see the C<EV::Loop::Async> perl module,
3608which uses exactly this technique (which is suited for many high-level
3609languages).
3610
3611The example uses a pthread mutex to protect the loop data, a condition
3612variable to wait for callback invocations, an async watcher to notify the
3613event loop thread and an unspecified mechanism to wake up the main thread.
3614
3615First, you need to associate some data with the event loop:
3616
3617 typedef struct {
3618 mutex_t lock; /* global loop lock */
3619 ev_async async_w;
3620 thread_t tid;
3621 cond_t invoke_cv;
3622 } userdata;
3623
3624 void prepare_loop (EV_P)
3625 {
3626 // for simplicity, we use a static userdata struct.
3627 static userdata u;
3628
3629 ev_async_init (&u.async_w, async_cb);
3630 ev_async_start (EV_A_ &u.async_w);
3631
3632 pthread_mutex_init (&u.lock, 0);
3633 pthread_cond_init (&u.invoke_cv, 0);
3634
3635 // now associate this with the loop
3636 ev_set_userdata (EV_A_ &u);
3637 ev_set_invoke_pending_cb (EV_A_ l_invoke);
3638 ev_set_loop_release_cb (EV_A_ l_release, l_acquire);
3639
3640 // then create the thread running ev_run
3641 pthread_create (&u.tid, 0, l_run, EV_A);
3642 }
3643
3644The callback for the C<ev_async> watcher does nothing: the watcher is used
3645solely to wake up the event loop so it takes notice of any new watchers
3646that might have been added:
3647
3648 static void
3649 async_cb (EV_P_ ev_async *w, int revents)
3650 {
3651 // just used for the side effects
3652 }
3653
3654The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
3655protecting the loop data, respectively.
3656
3657 static void
3658 l_release (EV_P)
3659 {
3660 userdata *u = ev_userdata (EV_A);
3661 pthread_mutex_unlock (&u->lock);
3662 }
3663
3664 static void
3665 l_acquire (EV_P)
3666 {
3667 userdata *u = ev_userdata (EV_A);
3668 pthread_mutex_lock (&u->lock);
3669 }
3670
3671The event loop thread first acquires the mutex, and then jumps straight
3672into C<ev_run>:
3673
3674 void *
3675 l_run (void *thr_arg)
3676 {
3677 struct ev_loop *loop = (struct ev_loop *)thr_arg;
3678
3679 l_acquire (EV_A);
3680 pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
3681 ev_run (EV_A_ 0);
3682 l_release (EV_A);
3683
3684 return 0;
3685 }
3686
3687Instead of invoking all pending watchers, the C<l_invoke> callback will
3688signal the main thread via some unspecified mechanism (signals? pipe
3689writes? C<Async::Interrupt>?) and then waits until all pending watchers
3690have been called (in a while loop because a) spurious wakeups are possible
3691and b) skipping inter-thread-communication when there are no pending
3692watchers is very beneficial):
3693
3694 static void
3695 l_invoke (EV_P)
3696 {
3697 userdata *u = ev_userdata (EV_A);
3698
3699 while (ev_pending_count (EV_A))
3700 {
3701 wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
3702 pthread_cond_wait (&u->invoke_cv, &u->lock);
3703 }
3704 }
3705
3706Now, whenever the main thread gets told to invoke pending watchers, it
3707will grab the lock, call C<ev_invoke_pending> and then signal the loop
3708thread to continue:
3709
3710 static void
3711 real_invoke_pending (EV_P)
3712 {
3713 userdata *u = ev_userdata (EV_A);
3714
3715 pthread_mutex_lock (&u->lock);
3716 ev_invoke_pending (EV_A);
3717 pthread_cond_signal (&u->invoke_cv);
3718 pthread_mutex_unlock (&u->lock);
3719 }
3720
3721Whenever you want to start/stop a watcher or do other modifications to an
3722event loop, you will now have to lock:
3723
3724 ev_timer timeout_watcher;
3725 userdata *u = ev_userdata (EV_A);
3726
3727 ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
3728
3729 pthread_mutex_lock (&u->lock);
3730 ev_timer_start (EV_A_ &timeout_watcher);
3731 ev_async_send (EV_A_ &u->async_w);
3732 pthread_mutex_unlock (&u->lock);
3733
3734Note that sending the C<ev_async> watcher is required because otherwise
3735an event loop currently blocking in the kernel will have no knowledge
3736about the newly added timer. By waking up the loop it will pick up any new
3737watchers in the next event loop iteration.
3738
3739=head2 THREADS, COROUTINES, CONTINUATIONS, QUEUES... INSTEAD OF CALLBACKS
3740
3741While the overhead of a callback that e.g. schedules a thread is small, it
3742is still an overhead. If you embed libev, and your main usage is with some
3743kind of threads or coroutines, you might want to customise libev so that
3744it doesn't need callbacks anymore.
3745
3746Imagine you have coroutines that you can switch to using a function
3747C<switch_to (coro)>, that libev runs in a coroutine called C<libev_coro>
3748and that due to some magic, the currently active coroutine is stored in a
3749global called C<current_coro>. Then you can build your own "wait for libev
3750event" primitive by changing C<EV_CB_DECLARE> and C<EV_CB_INVOKE> (note
3751the differing C<;> conventions):
3752
3753 #define EV_CB_DECLARE(type) struct my_coro *cb;
3754 #define EV_CB_INVOKE(watcher) switch_to ((watcher)->cb)
3755
3756That means instead of having a C callback function, you store the
3757coroutine to switch to in each watcher, and instead of having libev call
3758your callback, you instead have it switch to that coroutine.
3759
3760A coroutine might now wait for an event with a function called
3761C<wait_for_event> (the watcher needs to be started, as always, but it doesn't
3762matter when, or whether the watcher is active or not when this function is
3763called):
3764
3765 void
3766 wait_for_event (ev_watcher *w)
3767 {
3768 ev_cb_set (w) = current_coro;
3769 switch_to (libev_coro);
3770 }
3771
3772That basically suspends the coroutine inside C<wait_for_event> and
3773continues the libev coroutine, which, when appropriate, switches back to
3774this or any other coroutine. I am sure you can work out the details on your own :)
3775
3776You can do similar tricks if you have, say, threads with an event queue -
3777instead of storing a coroutine, you store the queue object and instead of
3778switching to a coroutine, you push the watcher onto the queue and notify
3779any waiters.
3780
3781To embed libev, see L<EMBEDDING>, but in short, it's easiest to create two
3782files, F<my_ev.h> and F<my_ev.c> that include the respective libev files:
3783
3784 // my_ev.h
3785 #define EV_CB_DECLARE(type) struct my_coro *cb;
3786 #define EV_CB_INVOKE(watcher) switch_to ((watcher)->cb);
3787 #include "../libev/ev.h"
3788
3789 // my_ev.c
3790 #define EV_H "my_ev.h"
3791 #include "../libev/ev.c"
3792
3793And then use F<my_ev.h> when you would normally use F<ev.h>, and compile
3794F<my_ev.c> into your project. When properly specifying include paths, you
3795can even use F<ev.h> as header file name directly.
3476 3796
3477 3797
3478=head1 LIBEVENT EMULATION 3798=head1 LIBEVENT EMULATION
3479 3799
3480Libev offers a compatibility emulation layer for libevent. It cannot 3800Libev offers a compatibility emulation layer for libevent. It cannot
3695watchers in the constructor. 4015watchers in the constructor.
3696 4016
3697 class myclass 4017 class myclass
3698 { 4018 {
3699 ev::io io ; void io_cb (ev::io &w, int revents); 4019 ev::io io ; void io_cb (ev::io &w, int revents);
3700 ev::io2 io2 ; void io2_cb (ev::io &w, int revents); 4020 ev::io io2 ; void io2_cb (ev::io &w, int revents);
3701 ev::idle idle; void idle_cb (ev::idle &w, int revents); 4021 ev::idle idle; void idle_cb (ev::idle &w, int revents);
3702 4022
3703 myclass (int fd) 4023 myclass (int fd)
3704 { 4024 {
3705 io .set <myclass, &myclass::io_cb > (this); 4025 io .set <myclass, &myclass::io_cb > (this);
3756L<http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hlibev>. 4076L<http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hlibev>.
3757 4077
3758=item D 4078=item D
3759 4079
3760Leandro Lucarella has written a D language binding (F<ev.d>) for libev, to 4080Leandro Lucarella has written a D language binding (F<ev.d>) for libev, to
3761be found at L<http://proj.llucax.com.ar/wiki/evd>. 4081be found at L<http://www.llucax.com.ar/proj/ev.d/index.html>.
3762 4082
3763=item Ocaml 4083=item Ocaml
3764 4084
3765Erkki Seppala has written Ocaml bindings for libev, to be found at 4085Erkki Seppala has written Ocaml bindings for libev, to be found at
3766L<http://modeemi.cs.tut.fi/~flux/software/ocaml-ev/>. 4086L<http://modeemi.cs.tut.fi/~flux/software/ocaml-ev/>.
3814suitable for use with C<EV_A>. 4134suitable for use with C<EV_A>.
3815 4135
3816=item C<EV_DEFAULT>, C<EV_DEFAULT_> 4136=item C<EV_DEFAULT>, C<EV_DEFAULT_>
3817 4137
3818Similar to the other two macros, this gives you the value of the default 4138Similar to the other two macros, this gives you the value of the default
3819loop, if multiple loops are supported ("ev loop default"). 4139loop, if multiple loops are supported ("ev loop default"). The default loop
4140will be initialised if it isn't already initialised.
4141
4142For non-multiplicity builds, these macros do nothing, so you always have
4143to initialise the loop somewhere.
3820 4144
3821=item C<EV_DEFAULT_UC>, C<EV_DEFAULT_UC_> 4145=item C<EV_DEFAULT_UC>, C<EV_DEFAULT_UC_>
3822 4146
3823Usage identical to C<EV_DEFAULT> and C<EV_DEFAULT_>, but requires that the 4147Usage identical to C<EV_DEFAULT> and C<EV_DEFAULT_>, but requires that the
3824default loop has been initialised (C<UC> == unchecked). Their behaviour 4148default loop has been initialised (C<UC> == unchecked). Their behaviour
3969supported). It will also not define any of the structs usually found in 4293supported). It will also not define any of the structs usually found in
3970F<event.h> that are not directly supported by the libev core alone. 4294F<event.h> that are not directly supported by the libev core alone.
3971 4295
3972In standalone mode, libev will still try to automatically deduce the 4296In standalone mode, libev will still try to automatically deduce the
3973configuration, but has to be more conservative. 4297configuration, but has to be more conservative.
4298
4299=item EV_USE_FLOOR
4300
4301If defined to be C<1>, libev will use the C<floor ()> function for its
4302periodic reschedule calculations, otherwise libev will fall back on a
4303portable (slower) implementation. If you enable this, you usually have to
4304link against libm or something equivalent. Enabling this when the C<floor>
4305function is not available will fail, so the safe default is to not enable
4306this.
3974 4307
3975=item EV_USE_MONOTONIC 4308=item EV_USE_MONOTONIC
3976 4309
3977If defined to be C<1>, libev will try to detect the availability of the 4310If defined to be C<1>, libev will try to detect the availability of the
3978monotonic clock option at both compile time and runtime. Otherwise no 4311monotonic clock option at both compile time and runtime. Otherwise no
4111indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled. 4444indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.
4112 4445
4113=item EV_ATOMIC_T 4446=item EV_ATOMIC_T
4114 4447
4115Libev requires an integer type (suitable for storing C<0> or C<1>) whose 4448Libev requires an integer type (suitable for storing C<0> or C<1>) whose
4116access is atomic with respect to other threads or signal contexts. No such 4449access is atomic and serialised with respect to other threads or signal
4117type is easily found in the C language, so you can provide your own type 4450contexts. No such type is easily found in the C language, so you can
4118that you know is safe for your purposes. It is used both for signal handler "locking" 4451provide your own type that you know is safe for your purposes. It is used
4119as well as for signal and thread safety in C<ev_async> watchers. 4452both for signal handler "locking" as well as for signal and thread safety
4453in C<ev_async> watchers.
4120 4454
4121In the absence of this define, libev will use C<sig_atomic_t volatile> 4455In the absence of this define, libev will use C<sig_atomic_t volatile>
4122(from F<signal.h>), which is usually good enough on most platforms. 4456(from F<signal.h>), which is usually good enough on most platforms,
4457although strictly speaking using a type that also implies a memory fence
4458is required.
4123 4459
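A sketch of overriding it, assuming a C11 compiler with C<< <stdatomic.h> >>
(whether this is appropriate depends on your platform and compiler):

   /* in your configuration header, before including ev.h */
   #include <stdatomic.h>
   #define EV_ATOMIC_T atomic_int
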
4124=item EV_H (h) 4460=item EV_H (h)
4125 4461
4126The name of the F<ev.h> header file used to include it. The default if 4462The name of the F<ev.h> header file used to include it. The default if
4127undefined is C<"ev.h"> in F<event.h>, F<ev.c> and F<ev++.h>. This can be 4463undefined is C<"ev.h"> in F<event.h>, F<ev.c> and F<ev++.h>. This can be
4150If undefined or defined to C<1>, then all event-loop-specific functions 4486If undefined or defined to C<1>, then all event-loop-specific functions
4151will have the C<struct ev_loop *> as first argument, and you can create 4487will have the C<struct ev_loop *> as first argument, and you can create
4152additional independent event loops. Otherwise there will be no support 4488additional independent event loops. Otherwise there will be no support
4153for multiple event loops and there is no first event loop pointer 4489for multiple event loops and there is no first event loop pointer
4154argument. Instead, all functions act on the single default loop. 4490argument. Instead, all functions act on the single default loop.
4491
4492Note that C<EV_DEFAULT> and C<EV_DEFAULT_> will no longer provide a
4493default loop when multiplicity is switched off - you always have to
4494initialise the loop manually in this case.
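
Example: a minimal sketch of this manual initialisation with multiplicity
switched off (this assumes F<ev.c> is compiled with the same setting;
error handling and actual watchers are omitted):

   #define EV_MULTIPLICITY 0
   #include "ev.h"

   int
   main (void)
   {
     /* no loop argument anywhere - this initialises the only loop */
     if (!ev_default_loop (0))
       return 1;

     /* ... start watchers here ... */

     ev_run (0); /* takes just the flags, no loop pointer */
     return 0;
   }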
4155 4495
4156=item EV_MINPRI 4496=item EV_MINPRI
4157 4497
4158=item EV_MAXPRI 4498=item EV_MAXPRI
4159 4499
4410And a F<ev_cpp.C> implementation file that contains libev proper and is compiled: 4750And a F<ev_cpp.C> implementation file that contains libev proper and is compiled:
4411 4751
4412 #include "ev_cpp.h" 4752 #include "ev_cpp.h"
4413 #include "ev.c" 4753 #include "ev.c"
4414 4754
4415=head1 INTERACTION WITH OTHER PROGRAMS OR LIBRARIES 4755=head1 INTERACTION WITH OTHER PROGRAMS, LIBRARIES OR THE ENVIRONMENT
4416 4756
4417=head2 THREADS AND COROUTINES 4757=head2 THREADS AND COROUTINES
4418 4758
4419=head3 THREADS 4759=head3 THREADS
4420 4760
4471default loop and triggering an C<ev_async> watcher from the default loop 4811default loop and triggering an C<ev_async> watcher from the default loop
4472watcher callback into the event loop interested in the signal (a sketch follows this list). 4812watcher callback into the event loop interested in the signal (a sketch follows this list).
4473 4813
4474=back 4814=back
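
Example: a rough sketch of the signal-forwarding technique mentioned above
(the names C<loop_b> and C<sig_forward_w> are made up; the async watcher is
assumed to be initialised and started in the other loop):

   #include <signal.h>
   #include "ev.h"

   static struct ev_loop *loop_b;   /* the loop interested in the signal */
   static ev_async sig_forward_w;   /* started in loop_b elsewhere */

   static void
   sigint_cb (EV_P_ ev_signal *w, int revents)
   {
     /* runs in the default loop: just forward the event to loop_b */
     ev_async_send (loop_b, &sig_forward_w);
   }

   static void
   catch_sigint_in_default_loop (void)
   {
     static ev_signal sigint_w;

     ev_signal_init (&sigint_w, sigint_cb, SIGINT);
     ev_signal_start (EV_DEFAULT_ &sigint_w);
   }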
4475 4815
4476=head4 THREAD LOCKING EXAMPLE 4816See also L<THREAD LOCKING EXAMPLE>.
4477
4478Here is a fictitious example of how to run an event loop in a different
4479thread than where callbacks are being invoked and watchers are
4480created/added/removed.
4481
4482For a real-world example, see the C<EV::Loop::Async> perl module,
4483which uses exactly this technique (which is suited for many high-level
4484languages).
4485
4486The example uses a pthread mutex to protect the loop data, a condition
4487variable to wait for callback invocations, an async watcher to notify the
4488event loop thread and an unspecified mechanism to wake up the main thread.
4489
4490First, you need to associate some data with the event loop:
4491
4492 typedef struct {
4493 pthread_mutex_t lock; /* global loop lock */
4494 ev_async async_w;
4495 pthread_t tid;
4496 pthread_cond_t invoke_cv;
4497 } userdata;
4498
4499 void prepare_loop (EV_P)
4500 {
4501 // for simplicity, we use a static userdata struct.
4502 static userdata u;
4503
4504 ev_async_init (&u.async_w, async_cb);
4505 ev_async_start (EV_A_ &u.async_w);
4506
4507 pthread_mutex_init (&u.lock, 0);
4508 pthread_cond_init (&u.invoke_cv, 0);
4509
4510 // now associate this with the loop
4511 ev_set_userdata (EV_A_ &u);
4512 ev_set_invoke_pending_cb (EV_A_ l_invoke);
4513 ev_set_loop_release_cb (EV_A_ l_release, l_acquire);
4514
4515 // then create the thread running ev_run
4516 pthread_create (&u.tid, 0, l_run, EV_A);
4517 }
4518
4519The callback for the C<ev_async> watcher does nothing: the watcher is used
4520solely to wake up the event loop so it takes notice of any new watchers
4521that might have been added:
4522
4523 static void
4524 async_cb (EV_P_ ev_async *w, int revents)
4525 {
4526 // just used for the side effects
4527 }
4528
4529The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
4530protecting the loop data, respectively.
4531
4532 static void
4533 l_release (EV_P)
4534 {
4535 userdata *u = ev_userdata (EV_A);
4536 pthread_mutex_unlock (&u->lock);
4537 }
4538
4539 static void
4540 l_acquire (EV_P)
4541 {
4542 userdata *u = ev_userdata (EV_A);
4543 pthread_mutex_lock (&u->lock);
4544 }
4545
4546The event loop thread first acquires the mutex, and then jumps straight
4547into C<ev_run>:
4548
4549 void *
4550 l_run (void *thr_arg)
4551 {
4552 struct ev_loop *loop = (struct ev_loop *)thr_arg;
4553
4554 l_acquire (EV_A);
4555 pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
4556 ev_run (EV_A_ 0);
4557 l_release (EV_A);
4558
4559 return 0;
4560 }
4561
4562Instead of invoking all pending watchers, the C<l_invoke> callback will
4563signal the main thread via some unspecified mechanism (signals? pipe
4564writes? C<Async::Interrupt>?) and then wait until all pending watchers
4565have been called (in a while loop because a) spurious wakeups are possible
4566and b) skipping inter-thread-communication when there are no pending
4567watchers is very beneficial):
4568
4569 static void
4570 l_invoke (EV_P)
4571 {
4572 userdata *u = ev_userdata (EV_A);
4573
4574 while (ev_pending_count (EV_A))
4575 {
4576 wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
4577 pthread_cond_wait (&u->invoke_cv, &u->lock);
4578 }
4579 }
4580
4581Now, whenever the main thread gets told to invoke pending watchers, it
4582will grab the lock, call C<ev_invoke_pending> and then signal the loop
4583thread to continue:
4584
4585 static void
4586 real_invoke_pending (EV_P)
4587 {
4588 userdata *u = ev_userdata (EV_A);
4589
4590 pthread_mutex_lock (&u->lock);
4591 ev_invoke_pending (EV_A);
4592 pthread_cond_signal (&u->invoke_cv);
4593 pthread_mutex_unlock (&u->lock);
4594 }
4595
4596Whenever you want to start/stop a watcher or do other modifications to an
4597event loop, you will now have to lock:
4598
4599 ev_timer timeout_watcher;
4600 userdata *u = ev_userdata (EV_A);
4601
4602 ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
4603
4604 pthread_mutex_lock (&u->lock);
4605 ev_timer_start (EV_A_ &timeout_watcher);
4606 ev_async_send (EV_A_ &u->async_w);
4607 pthread_mutex_unlock (&u->lock);
4608
4609Note that sending the C<ev_async> watcher is required because otherwise
4610an event loop currently blocking in the kernel will have no knowledge
4611about the newly added timer. By waking up the loop it will pick up any new
4612watchers in the next event loop iteration.
4613 4817
4614=head3 COROUTINES 4818=head3 COROUTINES
4615 4819
4616Libev is very accommodating to coroutines ("cooperative threads"): 4820Libev is very accommodating to coroutines ("cooperative threads"):
4617libev fully supports nesting calls to its functions from different 4821libev fully supports nesting calls to its functions from different
4782requires, and its I/O model is fundamentally incompatible with the POSIX 4986requires, and its I/O model is fundamentally incompatible with the POSIX
4783model. Libev still offers limited functionality on this platform in 4987model. Libev still offers limited functionality on this platform in
4784the form of the C<EVBACKEND_SELECT> backend, and only supports socket 4988the form of the C<EVBACKEND_SELECT> backend, and only supports socket
4785descriptors. This only applies when using Win32 natively, not when using 4989descriptors. This only applies when using Win32 natively, not when using
4786e.g. cygwin. Actually, it only applies to Microsoft's own compilers, 4990e.g. cygwin. Actually, it only applies to Microsoft's own compilers,
4787as every compielr comes with a slightly differently broken/incompatible 4991as every compiler comes with a slightly differently broken/incompatible
4788environment. 4992environment.
4789 4993
4790Lifting these limitations would basically require the full 4994Lifting these limitations would basically require the full
4791re-implementation of the I/O system. If you are into this kind of thing, 4995re-implementation of the I/O system. If you are into this kind of thing,
4792then note that glib does exactly that for you in a very portable way (note 4996then note that glib does exactly that for you in a very portable way (note
4925 5129
4926The type C<double> is used to represent timestamps. It is required to 5130The type C<double> is used to represent timestamps. It is required to
4927have at least 51 bits of mantissa (and 9 bits of exponent), which is 5131have at least 51 bits of mantissa (and 9 bits of exponent), which is
4928good enough for at least into the year 4000 with millisecond accuracy 5132good enough for at least into the year 4000 with millisecond accuracy
4929(the design goal for libev). This requirement is overfulfilled by 5133(the design goal for libev). This requirement is overfulfilled by
4930implementations using IEEE 754, which is basically all existing ones. With 5134implementations using IEEE 754, which is basically all existing ones.
5135
4931IEEE 754 doubles, you get microsecond accuracy until at least 2200. 5136With IEEE 754 doubles, you get microsecond accuracy until at least the
5137year 2255 (and millisecond accuracy till the year 287396 - by then, libev
5138is either obsolete or somebody patched it to use C<long double> or
5139something like that, just kidding).
4932 5140
4933=back 5141=back
4934 5142
4935If you know of other additional requirements drop me a note. 5143If you know of other additional requirements drop me a note.
4936 5144
4998=item Processing ev_async_send: O(number_of_async_watchers) 5206=item Processing ev_async_send: O(number_of_async_watchers)
4999 5207
5000=item Processing signals: O(max_signal_number) 5208=item Processing signals: O(max_signal_number)
5001 5209
5002Sending involves a system call I<iff> there were no other C<ev_async_send> 5210Sending involves a system call I<iff> there were no other C<ev_async_send>
5003calls in the current loop iteration. Checking for async and signal events 5211calls in the current loop iteration and the loop is currently
5212blocked. Checking for async and signal events involves iterating over all
5004involves iterating over all running async watchers or all signal numbers. 5213running async watchers or all signal numbers.
5005 5214
5006=back 5215=back
5007 5216
5008 5217
5009=head1 PORTING FROM LIBEV 3.X TO 4.X 5218=head1 PORTING FROM LIBEV 3.X TO 4.X
5126The physical time that is observed. It is apparently strictly monotonic :) 5335The physical time that is observed. It is apparently strictly monotonic :)
5127 5336
5128=item wall-clock time 5337=item wall-clock time
5129 5338
5130The time and date as shown on clocks. Unlike real time, it can actually 5339The time and date as shown on clocks. Unlike real time, it can actually
5131be wrong and jump forwards and backwards, e.g. when the you adjust your 5340be wrong and jump forwards and backwards, e.g. when you adjust your
5132clock. 5341clock.
5133 5342
5134=item watcher 5343=item watcher
5135 5344
5136A data structure that describes interest in certain events. Watchers need 5345A data structure that describes interest in certain events. Watchers need
5139=back 5348=back
5140 5349
5141=head1 AUTHOR 5350=head1 AUTHOR
5142 5351
5143Marc Lehmann <libev@schmorp.de>, with repeated corrections by Mikael 5352Marc Lehmann <libev@schmorp.de>, with repeated corrections by Mikael
5144Magnusson and Emanuele Giaquinta. 5353Magnusson and Emanuele Giaquinta, and minor corrections by many others.
5145 5354
