…
this "current" time will differ substantially from the real time, which
might affect timers and time-outs.

When this is the case, you can call this method, which will update the
event loop's idea of "current time".

A typical example would be a script in a web server (e.g. C<mod_perl>) -
when mod_perl executes the script, then the event loop will have the wrong
idea about the "current time" (being potentially far in the past, when the
script ran the last time). In that case you should arrange a call to C<<
AnyEvent->now_update >> each time the web server process wakes up again
(e.g. at the start of your script, or in a handler).
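
In a mod_perl handler this could look like the following sketch (the
handler name and body are hypothetical, only the C<< AnyEvent->now_update >>
call matters):

   sub handler {
      my ($r) = @_;

      AnyEvent->now_update; # resync the event loop's idea of "now"

      # ... handle the request, possibly using timers and time-outs ...
   }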

Note that updating the time I<might> cause some events to be handled.

=back

…
      after => 1,
      cb    => sub { $result_ready->send },
   );

   # this "blocks" (while handling events) till the callback
   # calls ->send
   $result_ready->recv;

Example: wait for a timer, but take advantage of the fact that condition
variables are also callable directly.

…
one. For example, a function that pings many hosts in parallel might want
to use a condition variable for the whole process.

Every call to C<< ->begin >> will increment a counter, and every call to
C<< ->end >> will decrement it. If the counter reaches C<0> in C<< ->end
>>, the (last) callback passed to C<begin> will be executed, passing the
condvar as first argument. That callback is I<supposed> to call C<< ->send
>>, but that is not required. If no group callback was set, C<send> will
be called without any arguments.

You can think of C<< $cv->send >> giving you an OR condition (one call
sends), while C<< $cv->begin >> and C<< $cv->end >> giving you an AND
condition (all C<begin> calls must be C<end>'ed before the condvar sends).

…
begun can potentially be zero:

   my $cv = AnyEvent->condvar;

   my %result;
   $cv->begin (sub { shift->send (\%result) });

   for my $host (@list_of_hosts) {
      $cv->begin;
      ping_host_then_call_callback $host, sub {
         $result{$host} = ...;
…

package AnyEvent;

# basically a tuned-down version of common::sense
sub common_sense {
   # from common::sense 1.0
   ${^WARNING_BITS} = "\xfc\x3f\xf3\x00\x0f\xf3\xcf\xc0\xf3\xfc\x33\x03";
   # use strict vars subs
   $^H |= 0x00000600;
}

BEGIN { AnyEvent::common_sense }

use Carp ();

our $VERSION = '5.21';
our $MODEL;

our $AUTOLOAD;
our @ISA;

our @REGISTRY;

our $VERBOSE;

BEGIN {
   eval "sub WIN32(){ " . (($^O =~ /mswin32/i)*1) ." }";
…
   # we assume CLOEXEC is already set by perl in all important cases

   ($fh2, $rw)
}

=head1 SIMPLIFIED AE API

Starting with version 5.0, AnyEvent officially supports a second, much
simpler, API that is designed to reduce the calling, typing and memory
overhead.

See the L<AE> manpage for details.
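
For example, the "wait for a timer" idiom shown earlier in this manpage
can be written against the AE API like this (a sketch; see L<AE> for the
exact calling conventions):

   my $done = AE::cv;
   my $w = AE::timer 1, 0, sub { $done->send };
   $done->recv;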

=cut

package AE;

our $VERSION = $AnyEvent::VERSION;

…

package AnyEvent::Base;

# default implementations for many methods

sub _time() {
   # probe for availability of Time::HiRes
   if (eval "use Time::HiRes (); Time::HiRes::time (); 1") {
      warn "AnyEvent: using Time::HiRes for sub-second timing accuracy.\n" if $VERBOSE >= 8;
      *_time = \&Time::HiRes::time;
      # if (eval "use POSIX (); (POSIX::times())...
…

our $HAVE_ASYNC_INTERRUPT;

sub _have_async_interrupt() {
   $HAVE_ASYNC_INTERRUPT = 1*(!$ENV{PERL_ANYEVENT_AVOID_ASYNC_INTERRUPT}
                              && eval "use Async::Interrupt 1.02 (); 1")
      unless defined $HAVE_ASYNC_INTERRUPT;

   $HAVE_ASYNC_INTERRUPT
}

…
our ($SIG_COUNT, $SIG_TW);

sub _signal_exec {
   $HAVE_ASYNC_INTERRUPT
      ? $SIGPIPE_R->drain
      : sysread $SIGPIPE_R, (my $dummy), 9;

   while (%SIG_EV) {
      for (keys %SIG_EV) {
         delete $SIG_EV{$_};
         $_->() for values %{ $SIG_CB{$_} || {} };
…
         warn "read: $input\n";        # output what has been read
         $cv->send if $input =~ /^q/i; # quit program if /^q/i
      },
   );

   my $time_watcher = AnyEvent->timer (after => 1, interval => 1, cb => sub {
      warn "timeout\n"; # print 'timeout' at most every second
   });

   $cv->recv; # wait until user enters /^q/i

=head1 REAL-WORLD EXAMPLE

…
through AnyEvent. The benchmark creates a lot of timers (with a zero
timeout) and I/O watchers (watching STDOUT, a pty, to become writable,
which it is), lets them fire exactly once and destroys them again.

Source code for this benchmark is found as F<eg/bench> in the AnyEvent
distribution. It uses the L<AE> interface, which makes a real difference
for the EV and Perl backends only.

=head3 Explanation of the columns

I<watcher> is the number of event watchers created/destroyed. Since
different event models feature vastly different performances, each event
…
watcher.

=head3 Results

   name          watchers bytes create invoke destroy comment
   EV/EV           100000   223   0.47   0.43    0.27 EV native interface
   EV/Any          100000   223   0.48   0.42    0.26 EV + AnyEvent watchers
   Coro::EV/Any    100000   223   0.47   0.42    0.26 coroutines + Coro::Signal
   Perl/Any        100000   431   2.70   0.74    0.92 pure perl implementation
   Event/Event      16000   516  31.16  31.84    0.82 Event native interface
   Event/Any        16000  1203  42.61  34.79    1.80 Event + AnyEvent watchers
   IOAsync/Any      16000  1911  41.92  27.45   16.81 via IO::Async::Loop::IO_Poll
   IOAsync/Any      16000  1726  40.69  26.37   15.25 via IO::Async::Loop::Epoll
   Glib/Any         16000  1118  89.00  12.57   51.17 quadratic behaviour
   Tk/Any            2000  1346  20.96  10.75    8.00 SEGV with >> 2000 watchers
   POE/Any           2000  6951 108.97 795.32   14.24 via POE::Loop::Event
   POE/Any           2000  6648  94.79 774.40  575.51 via POE::Loop::Select

=head3 Discussion

The benchmark does I<not> measure scalability of the event loop very
well. For example, a select-based event loop (such as the pure perl one)
…
benchmark machine, handling an event takes roughly 1600 CPU cycles with
EV, 3100 CPU cycles with AnyEvent's pure perl loop and almost 3000000 CPU
cycles with POE.

C<EV> is the sole leader regarding speed and memory use, which are both
maximal/minimal, respectively. When using the L<AE> API there is zero
overhead (when going through the AnyEvent API create is about 5-6 times
slower, with other times being equal), so it still uses far less memory
than any other event loop and is still faster than Event natively.

The pure perl implementation is hit in a few sweet spots (both the
constant timeout and the use of a single fd hit optimisations in the perl
interpreter and the backend itself). Nevertheless this shows that it
adds very little overhead in itself. Like any select-based backend its
…
In this benchmark, we use 10000 socket pairs (20000 sockets), of which 100
(1%) are active. This mirrors the activity of large servers with many
connections, most of which are idle at any one point in time.

Source code for this benchmark is found as F<eg/bench2> in the AnyEvent
distribution. It uses the L<AE> interface, which makes a real difference
for the EV and Perl backends only.

=head3 Explanation of the columns

I<sockets> is the number of sockets, and twice the number of "servers" (as
each server has a read and write socket end).
…
a new one that moves the timeout into the future.

=head3 Results

   name    sockets create  request
   EV        20000  62.66     7.99
   Perl      20000  68.32    32.64
   IOAsync   20000 174.06   101.15 epoll
   IOAsync   20000 174.67   610.84 poll
   Event     20000 202.69   242.91
   Glib      20000 557.01  1689.52
   POE       20000 341.54 12086.32 uses POE::Loop::Event

=head3 Discussion

This benchmark I<does> measure scalability and overall performance of the
particular event loop.
…
As you can see, the AnyEvent + EV combination even beats the
hand-optimised "raw sockets benchmark", while AnyEvent + its pure perl
backend easily beats IO::Lambda and POE.

And even the 100% non-blocking version written using the high-level (and
slow :) L<AnyEvent::Handle> abstraction beats both POE's and IO::Lambda's
higher-level ("unoptimised") abstractions by a large margin, even though
it does all of DNS, tcp-connect and socket I/O in a non-blocking way.

The two AnyEvent benchmark programs can be found as F<eg/ae0.pl> and
F<eg/ae2.pl> in the AnyEvent distribution, the remaining benchmarks are
part of the IO::Lambda distribution and were used without any changes.


=head1 SIGNALS

AnyEvent currently installs handlers for these signals:
…
lot less memory), but otherwise doesn't affect guard operation much. It is
purely used for performance.

=item L<JSON> and L<JSON::XS>

One of these modules is required when you want to read or write JSON data
via L<AnyEvent::Handle>. L<JSON> is written in pure perl, but can take
advantage of the ultra-high-speed L<JSON::XS> module when it is installed.

In fact, L<AnyEvent::Handle> will use L<JSON::XS> by default if it is
installed.
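
For example (a sketch, assuming C<$handle> is an established
L<AnyEvent::Handle> object), JSON values can be written and read as plain
Perl data structures via the C<json> read/write types:

   # encode and queue a Perl data structure as JSON text
   $handle->push_write (json => { request => "time" });

   # queue a read that decodes the next JSON value
   $handle->push_read (json => sub {
      my ($handle, $data) = @_;
      # $data is the decoded Perl data structure
   });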