Comparing libev/ev.pod (file contents):
Revision 1.239 by root, Tue Apr 21 14:14:19 2009 UTC vs.
Revision 1.257 by root, Wed Jul 15 16:08:24 2009 UTC

happily wraps around with enough iterations.

This value can sometimes be useful as a generation counter of sorts (it
"ticks" the number of loop iterations), as it roughly corresponds with
C<ev_prepare> and C<ev_check> calls.

=item unsigned int ev_loop_depth (loop)

Returns the number of times C<ev_loop> was entered minus the number of
times C<ev_loop> was exited, in other words, the recursion depth.

Outside C<ev_loop>, this number is zero. In a callback, this number is
C<1>, unless C<ev_loop> was invoked recursively (or from another thread),
in which case it is higher.

Leaving C<ev_loop> abnormally (setjmp/longjmp, cancelling the thread,
etc.) doesn't count as an exit.
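
For example (a minimal sketch, not from the manual - C<some_cb> is a
made-up callback name), a callback could use this to detect that it is
being run from a nested C<ev_loop> call and skip work it must not repeat:

   static void
   some_cb (EV_P_ ev_timer *w, int revents)
   {
     if (ev_loop_depth (EV_A) > 1)
       return; /* invoked from a recursive ev_loop - skip the heavy work */

     /* normal processing here */
   }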

=item unsigned int ev_backend (loop)

Returns one of the C<EVBACKEND_*> flags indicating the event backend in
use.
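
The backends are bit flags, so (a sketch, not from the manual) a specific
backend can be tested for like this:

   if (ev_backend (loop) & EVBACKEND_EPOLL)
     puts ("using the Linux epoll backend");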

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations. The
sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average.

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null
value will not introduce any overhead in libev.

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems. Note that if
you do transactions with the outside world and you can't increase the
parallelism, then this setting will limit your transaction rate (if you
need to poll once per transaction and the I/O collect interval is 0.01,
then you can't do more than 100 transactions per second).

Setting the I<timeout collect interval> can improve the opportunity for
saving power, as the program will "bundle" timer callback invocations that
are "near" in time together, by delaying some, thus reducing the number of
times the process sleeps and wakes up again. Another useful technique to
reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
they fire on, say, one-second boundaries only.

Example: we only need 0.1s timeout granularity, and we wish not to poll
more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);
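
And to make periodic processing happen on full-second boundaries only, as
suggested above, something like this sketch could be used (C<tick_cb> is a
made-up callback name):

   ev_periodic tick;
   ev_periodic_init (&tick, tick_cb, 0., 1., 0); /* fire at every full second */
   ev_periodic_start (loop, &tick);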

=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_loop> does this automatically when required,
but when overriding the invoke callback this call comes in handy.

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of
invoking all pending watchers when there are any, C<ev_loop> will call
this callback instead. This is useful, for example, when you want to
invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as the new
callback.

=item ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_loop> can run for an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the loop via
C<ev_unloop> and C<ev_async_send>, another way is to set these I<release>
and I<acquire> callbacks on the loop.

When set, C<release> will be called just before the thread is suspended
waiting for new events, and C<acquire> is called just afterwards.

Ideally, C<release> will just call your mutex_unlock function, and
C<acquire> will just call the mutex_lock function again.

While event loop modifications are allowed between invocations of
C<release> and C<acquire> (that's their only purpose after all), no
modifications done will affect the event loop, i.e. adding watchers will
have no effect on the set of file descriptors being watched, or the time
waited. Use an C<ev_async> watcher to wake up C<ev_loop> when you want it
to take note of any changes you made.

In theory, threads executing C<ev_loop> will be async-cancel safe between
invocations of C<release> and C<acquire>.

See also the locking example in the C<THREADS> section later in this
document.

=item ev_set_userdata (loop, void *data)

=item ev_userdata (loop)

Set and retrieve a single C<void *> associated with a loop. When
C<ev_set_userdata> has never been called, then C<ev_userdata> returns
C<0>.

These two functions can be used to associate arbitrary data with a loop,
and are intended solely for the C<invoke_pending_cb>, C<release> and
C<acquire> callbacks described above, but of course can be (ab-)used for
any other purpose as well.
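
For instance (a sketch only - C<struct my_app> is a made-up application
type), per-loop application state can be attached and later retrieved
inside callbacks:

   struct my_app { int request_count; } app = { 0 };

   ev_set_userdata (loop, &app);

   /* later, e.g. inside a watcher callback */
   struct my_app *a = ev_userdata (loop);
   ++a->request_count;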

=item ev_loop_verify (loop)

This function only does something when C<EV_VERIFY> support has been
compiled in, which is the default for non-minimal builds. It tries to go
   #include <stddef.h>

   static void
   t1_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t1));
   }

   static void
   t2_cb (EV_P_ ev_timer *w, int revents)
   {
     struct my_biggy *big = (struct my_biggy *)
       (((char *)w) - offsetof (struct my_biggy, t2));
   }

=head2 WATCHER PRIORITY MODELS

     // with the default priority are receiving events.
     ev_idle_start (EV_A_ &idle);
   }

   static void
   idle_cb (EV_P_ ev_idle *w, int revents)
   {
     // actual processing
     read (STDIN_FILENO, ...);

     // have to start the I/O watcher again, as
year, it will still time out after (roughly) one hour. "Roughly" because
detecting time jumps is hard, and some inaccuracies are unavoidable (the
monotonic clock option helps a lot here).

The callback is guaranteed to be invoked only I<after> its timeout has
passed (not I<at>, so on systems with very low-resolution clocks this
might introduce a small delay). If multiple timers become ready during the
same loop iteration then the ones with earlier time-out values are invoked
before ones of the same priority with later time-out values (but this is
no longer true when a callback calls C<ev_loop> recursively).

=head3 Be smart about timeouts

Many real-world problems involve some kind of timeout, usually for error
recovery. A typical example is an HTTP request - if the other side hangs,
C<after> argument to C<ev_timer_set>, and only ever use the C<repeat>
member and C<ev_timer_again>.

At start:

   ev_init (timer, callback);
   timer->repeat = 60.;
   ev_timer_again (loop, timer);

Each time there is some activity:

To start the timer, simply initialise the watcher and set C<last_activity>
to the current time (meaning we just have some activity :), then call the
callback, which will "do the right thing" and start the timer:

   ev_init (timer, callback);
   last_activity = ev_now (loop);
   callback (loop, timer, EV_TIMEOUT);

And when there is some activity, simply store the current time in
C<last_activity>, no libev calls at all:
   ev_timer_set (&timer, after + ev_now () - ev_time (), 0.);

If the event loop is suspended for a long time, you can also force an
update of the time returned by C<ev_now ()> by calling C<ev_now_update
()>.

=head3 The special problems of suspended animation

When you leave the server world it is quite customary to hit machines that
can suspend/hibernate - what happens to the clocks during such a suspend?

Some quick tests made with Linux 2.6.28 indicate that a suspend freezes
all processes, while the clocks (C<times>, C<CLOCK_MONOTONIC>) continue
to run until the system is suspended, but do not advance while the system
is suspended. That means, on resume, it will be as if the program was
frozen for a few seconds, but the suspend time will not be counted towards
C<ev_timer> when a monotonic clock source is used. The real time clock
advanced as expected, but if it is used as the sole clock source, then a
long suspend would be detected as a time jump by libev, and timers would
be adjusted accordingly.

I would not be surprised to see different behaviour between different
operating systems, OS versions or even different hardware.

The other form of suspend (job control, or sending a SIGSTOP) will see a
time jump in the monotonic clocks and the realtime clock. If the program
is suspended for a very long time, and monotonic clock sources are in use,
then you can expect C<ev_timer>s to expire as the full suspension time
will be counted towards the timers. When no monotonic clock source is in
use, then libev will again assume a time jump and adjust accordingly.

It might be beneficial for this latter case to call C<ev_suspend>
and C<ev_resume> in code that handles C<SIGTSTP>, to at least get
deterministic behaviour in this case (you can do nothing against
C<SIGSTOP>).
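
A minimal sketch of that idea (not from the manual - error handling and
terminal state are left out, and C<sigtstp_cb> is a made-up name):

   static void
   sigtstp_cb (EV_P_ ev_signal *w, int revents)
   {
     ev_suspend (EV_A);         /* freeze libev's notion of time */
     kill (getpid (), SIGSTOP); /* really stop; execution resumes after SIGCONT */
     ev_resume (EV_A);          /* timers continue where they left off */
   }

   /* during initialisation */
   ev_signal tstp_watcher;
   ev_signal_init (&tstp_watcher, sigtstp_cb, SIGTSTP);
   ev_signal_start (loop, &tstp_watcher);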

=head3 Watcher-Specific Functions and Data Members

=over 4

some child status changes (most typically when a child of yours dies or
exits). It is permissible to install a child watcher I<after> the child
has been forked (which implies it might have already exited), as long
as the event loop isn't entered (or is continued from a watcher), i.e.,
forking and then immediately registering a watcher for the child is fine,
but forking and registering a watcher a few event loop iterations later or
in the next callback invocation is not.
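
For example (a minimal sketch - C<child_cb> and C<cw> are made-up names),
forking and registering the watcher right away could look like this:

   static ev_child cw;

   static void
   child_cb (EV_P_ ev_child *w, int revents)
   {
     ev_child_stop (EV_A_ w);
     printf ("child %d exited with status %x\n", w->rpid, w->rstatus);
   }

   pid_t pid = fork ();

   if (pid == 0)
     {
       /* child: do something, then exit */
       _exit (0);
     }
   else if (pid > 0)
     {
       /* parent: register the watcher before returning to the event loop */
       ev_child_init (&cw, child_cb, pid, 0);
       ev_child_start (EV_DEFAULT_ &cw);
     }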

Only the default event loop is capable of handling signals, and therefore
you can only register child watchers in the default event loop.

Due to some design glitches inside libev, child watchers will always be
handled at maximum priority (their priority is set to C<EV_MAXPRI> by
libev).

=head3 Process Interaction

Libev grabs C<SIGCHLD> as soon as the default event loop is
initialised. This is necessary to guarantee proper behaviour even if
     // no longer anything immediate to do.
   }

   ev_idle *idle_watcher = malloc (sizeof (ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop!

Prepare and check watchers are usually (but not always) used in pairs:
   struct pollfd fds [nfd];
   // actual code will need to loop here and realloc etc.
   adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ()));

   /* the callback is illegal, but won't be called as we stop during check */
   ev_timer_init (&tw, 0, timeout * 1e-3, 0.);
   ev_timer_start (loop, &tw);

   // create one ev_io per pollfd
   for (int i = 0; i < nfd; ++i)
     {
defined to be C<0>, then they are not.

=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
speed (but with the full API), define this symbol to C<1>. Currently this
is used to override some inlining decisions, which saves roughly 30% code
size on amd64. It also selects a much smaller 2-heap for timer management
over the default 4-heap.

You can save even more by disabling watcher types you do not need
and setting C<EV_MAXPRI> == C<EV_MINPRI>. Also, disabling C<assert>
(C<-DNDEBUG>) will usually reduce code size a lot.

Defining C<EV_MINIMAL> to C<2> will additionally reduce the core API to
provide a bare-bones event library. See C<ev.h> for details on what parts
of the API are still available, and do not complain if this subset changes
over time.
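
As an illustration only (the exact set of knobs depends entirely on the
application), a wrapper file that compiles a trimmed-down embedded libev
might look like this:

   /* ev_config.c - hypothetical wrapper for an embedded, minimal libev */
   #define EV_MINIMAL 1
   #define EV_MULTIPLICITY 0   /* single event loop only */
   #define EV_PERIODIC_ENABLE 0
   #define EV_STAT_ENABLE 0
   #define EV_MINPRI 0         /* together with EV_MAXPRI: no priorities */
   #define EV_MAXPRI 0

   #include "ev.c"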

=item EV_PID_HASHSIZE

C<ev_child> watchers use a small hash table to distribute workload by
pid. The default size is C<16> (or C<1> with C<EV_MINIMAL>), usually more
default loop and triggering an C<ev_async> watcher from the default loop
watcher callback into the event loop interested in the signal.

=back

=head4 THREAD LOCKING EXAMPLE

Here is a fictitious example of how to run an event loop in a different
thread from where callbacks are being invoked and watchers are
created/added/removed.

For a real-world example, see the C<EV::Loop::Async> perl module,
which uses exactly this technique (which is suited for many high-level
languages).

The example uses a pthread mutex to protect the loop data, a condition
variable to wait for callback invocations, an async watcher to notify the
event loop thread and an unspecified mechanism to wake up the main thread.

First, you need to associate some data with the event loop:

   typedef struct {
     pthread_mutex_t lock; /* global loop lock */
     ev_async async_w;
     pthread_t tid;
     pthread_cond_t invoke_cv;
   } userdata;

   void prepare_loop (EV_P)
   {
      // for simplicity, we use a static userdata struct.
      static userdata u;

      ev_async_init (&u.async_w, async_cb);
      ev_async_start (EV_A_ &u.async_w);

      pthread_mutex_init (&u.lock, 0);
      pthread_cond_init (&u.invoke_cv, 0);

      // now associate this with the loop
      ev_set_userdata (EV_A_ &u);
      ev_set_invoke_pending_cb (EV_A_ l_invoke);
      ev_set_loop_release_cb (EV_A_ l_release, l_acquire);

      // then create the thread running ev_loop
      pthread_create (&u.tid, 0, l_run, EV_A);
   }

The callback for the C<ev_async> watcher does nothing: the watcher is used
solely to wake up the event loop so it takes notice of any new watchers
that might have been added:

   static void
   async_cb (EV_P_ ev_async *w, int revents)
   {
      // just used for the side effects
   }

The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
protecting the loop data, respectively.

   static void
   l_release (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_unlock (&u->lock);
   }

   static void
   l_acquire (EV_P)
   {
     userdata *u = ev_userdata (EV_A);
     pthread_mutex_lock (&u->lock);
   }

The event loop thread first acquires the mutex, and then jumps straight
into C<ev_loop>:

   void *
   l_run (void *thr_arg)
   {
     struct ev_loop *loop = (struct ev_loop *)thr_arg;

     l_acquire (EV_A);
     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
     ev_loop (EV_A_ 0);
     l_release (EV_A);

     return 0;
   }

Instead of invoking all pending watchers, the C<l_invoke> callback will
signal the main thread via some unspecified mechanism (signals? pipe
writes? C<Async::Interrupt>?) and then wait until all pending watchers
have been called (in a while loop because a) spurious wakeups are possible
and b) skipping inter-thread-communication when there are no pending
watchers is very beneficial):

   static void
   l_invoke (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     while (ev_pending_count (EV_A))
       {
         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
         pthread_cond_wait (&u->invoke_cv, &u->lock);
       }
   }

Now, whenever the main thread gets told to invoke pending watchers, it
will grab the lock, call C<ev_invoke_pending> and then signal the loop
thread to continue:

   static void
   real_invoke_pending (EV_P)
   {
     userdata *u = ev_userdata (EV_A);

     pthread_mutex_lock (&u->lock);
     ev_invoke_pending (EV_A);
     pthread_cond_signal (&u->invoke_cv);
     pthread_mutex_unlock (&u->lock);
   }

Whenever you want to start/stop a watcher or do other modifications to an
event loop, you will now have to lock:

   ev_timer timeout_watcher;
   userdata *u = ev_userdata (EV_A);

   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);

   pthread_mutex_lock (&u->lock);
   ev_timer_start (EV_A_ &timeout_watcher);
   ev_async_send (EV_A_ &u->async_w);
   pthread_mutex_unlock (&u->lock);

Note that sending the C<ev_async> watcher is required because otherwise
an event loop currently blocking in the kernel will have no knowledge
about the newly added timer. By waking up the loop it will pick up any new
watchers in the next event loop iteration.

=head3 COROUTINES

Libev is very accommodating to coroutines ("cooperative threads"):
libev fully supports nesting calls to its functions from different
coroutines (e.g. you can call C<ev_loop> on the same loop from two
different coroutines, and switch freely between both coroutines running
the loop, as long as you don't confuse yourself). The only exception is
that you must not do this from C<ev_periodic> reschedule callbacks.

Care has been taken to ensure that libev does not keep local state inside
C<ev_loop>, and other calls do not usually allow for coroutine switches as
they do not call any callbacks.

way (note also that glib is the slowest event library known to man).

There is no supported compilation method available on windows except
embedding it into other applications.

Sensible signal handling is officially unsupported by Microsoft - libev
tries its best, but under most conditions, signals will simply not work.

Not a libev limitation but worth mentioning: windows apparently doesn't
accept large writes: instead of resulting in a partial write, windows will
either accept everything or return C<ENOBUFS> if the buffer is too large,
so make sure you only write small amounts into your sockets (less than a
megabyte seems safe, but this apparently depends on the amount of memory
the abysmal performance of winsockets, using a large number of sockets
is not recommended (and not reasonable). If your program needs to use
more than a hundred or so sockets, then likely it needs to use a totally
different implementation for windows, as libev offers the POSIX readiness
notification model, which cannot be implemented efficiently on windows
(due to Microsoft monopoly games).

A typical way to use libev under windows is to embed it (see the embedding
section for details) and use the following F<evwrap.h> header file instead
of F<ev.h>:


Early versions of winsocket's select only supported waiting for a maximum
of C<64> handles (probably owing to the fact that all windows kernels
can only wait for C<64> things at the same time internally; Microsoft
recommends spawning a chain of threads, each waiting for 63 handles and
the previous thread. Sounds great!).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere, for example, perl and many
other interpreters do their own select emulation on windows).
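
For the embedded-libev case, this can be as simple as the following sketch
(assuming libev itself makes the C<select> call):

   /* must come before any windows or winsock headers are included */
   #define FD_SETSIZE 2048
   #include "ev.c"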

Another limit is the number of file descriptors in the Microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64>
fetish or something like this inside Microsoft). You can increase this
by calling C<_setmaxstdio>, which can increase this limit to C<2048>
(another arbitrary limit), but is broken in many versions of the Microsoft
runtime libraries. This might get you to about C<512> or C<2048> sockets
(depending on windows version and/or the phase of the moon). To get more,
you need to wrap all I/O functions and provide your own fd management, but
the cost of calling select (O(n²)) will likely make this unworkable.
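
If you want to try it anyway, the call itself is trivial (a sketch - check
the return value in real code):

   #include <stdio.h>

   /* try to raise the C runtime file limit; returns -1 on failure */
   _setmaxstdio (2048);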
4016 4289
4017=back 4290=back
4018 4291
4019=head2 PORTABILITY REQUIREMENTS 4292=head2 PORTABILITY REQUIREMENTS
4020 4293
=item C<double> must hold a time value in seconds with enough accuracy

The type C<double> is used to represent timestamps. It is required to
have at least 51 bits of mantissa (and 9 bits of exponent), which is good
enough until at least the year 4000. This requirement is fulfilled by
implementations implementing IEEE 754, which is basically all existing
ones. With IEEE 754 doubles, you get microsecond accuracy until at least
2200.

=back

If you know of other additional requirements, drop me a note.