…
=head2 I/O WATCHERS

You can create an I/O watcher by calling the C<< AnyEvent->io >> method
with the following mandatory key-value pairs as arguments:

C<fh> is the Perl I<file handle> (I<not> file descriptor) to watch
for events. C<poll> must be a string that is either C<r> or C<w>,
which creates a watcher waiting for "r"eadable or "w"ritable events,
respectively. C<cb> is the callback to invoke each time the file handle
becomes ready.

Although the callback might get passed parameters, their value and
presence is undefined and you cannot rely on them. Portable AnyEvent
callbacks cannot use arguments passed to I/O watcher callbacks.

The I/O watcher might use the underlying file descriptor or a copy of it.
You must not close a file handle as long as any watcher is active on the
underlying file descriptor.

Some event loops issue spurious readiness notifications, so you should
always use non-blocking calls when reading/writing from/to your file
handles.
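
For example, a minimal sketch of a read watcher on STDIN (illustrative
only, not part of the interface description):

   my $w = AnyEvent->io (
      fh   => \*STDIN,
      poll => "r",
      cb   => sub {
         # with the handle in non-blocking mode, sysread returns
         # false on EOF or when a spurious wakeup delivered nothing
         sysread *STDIN, my $buf, 4096
            or return;
         warn "read: $buf";
      },
   );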

…

You can create a time watcher by calling the C<< AnyEvent->timer >>
method with the following mandatory arguments:

C<after> specifies after how many seconds (fractional values are
supported) the callback should be invoked. C<cb> is the callback to invoke
in that case.

Although the callback might get passed parameters, their value and
presence is undefined and you cannot rely on them. Portable AnyEvent
callbacks cannot use arguments passed to time watcher callbacks.

The timer callback will be invoked at most once: if you want a repeating
timer you have to create a new watcher (this is a limitation by both Tk
and Glib).
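
For example, a one-shot timer, and a sketch of how a repeating timer can
be emulated by re-arming the watcher from within its own callback
(variable names are illustrative):

   # invoke the callback once, 7.7 seconds from now
   my $once = AnyEvent->timer (after => 7.7, cb => sub {
      warn "timeout\n";
   });

   # emulate a repeating timer by creating a new watcher
   # each time the previous one fires
   my $repeat; my $cb; $cb = sub {
      warn "tick\n";
      $repeat = AnyEvent->timer (after => 1, cb => $cb);
   };
   $repeat = AnyEvent->timer (after => 1, cb => $cb);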

…

You can watch for signals using a signal watcher, C<signal> is the signal
I<name> without any C<SIG> prefix, C<cb> is the Perl callback to
be invoked whenever a signal occurs.

Although the callback might get passed parameters, their value and
presence is undefined and you cannot rely on them. Portable AnyEvent
callbacks cannot use arguments passed to signal watcher callbacks.

Multiple signal occurrences can be clumped together into one callback
invocation, and callback invocation will be synchronous. Synchronous means
that it might take a while until the signal gets handled by the process,
but it is guaranteed not to interrupt any other callbacks.
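
For example, a sketch that traps SIGINT (it assumes a condition variable
C<$quit> that the main program waits on):

   my $w = AnyEvent->signal (signal => "INT", cb => sub {
      warn "SIGINT received, shutting down\n";
      $quit->broadcast; # wake up whoever waits on $quit
   });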

…

The child process is specified by the C<pid> argument (if set to C<0>, it
watches for any child process exit). The watcher will trigger as often
as status changes for the child are received. This works by installing a
signal handler for C<SIGCHLD>. The callback will be called with the pid
and exit status (as returned by waitpid), so unlike other watcher types,
you I<can> rely on child watcher callback arguments.

There is a slight catch to child watchers, however: you usually start them
I<after> the child process was created, and this means the process could
have exited already (and no SIGCHLD will be sent anymore).

Not all event models handle this correctly (POE doesn't), but even for
event models that I<do> handle this correctly, they usually need to be
loaded before the process exits (i.e. before you fork in the first place).

This means you cannot create a child watcher as the very first thing in an
AnyEvent program, you I<have> to create at least one watcher before you
C<fork> the child (alternatively, you can call C<AnyEvent::detect>).

Example: fork a process and wait for it

   my $done = AnyEvent->condvar;

   AnyEvent::detect; # force event module to be initialised

   my $pid = fork or exit 5;

   my $w = AnyEvent->child (
      pid => $pid,
      cb  => sub {
         my ($pid, $status) = @_;
         warn "pid $pid exited with status $status";
         $done->broadcast;
      },
   );

   # do something else, then wait for process exit
   $done->wait;

=head2 CONDITION VARIABLES

Condition variables can be created by calling the C<< AnyEvent->condvar >>
method without any arguments.
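
As a minimal sketch of the usual pattern (the timer only stands in for
some asynchronous operation that eventually signals the variable):

   my $cv = AnyEvent->condvar;

   # something asynchronous that eventually calls ->broadcast
   my $w = AnyEvent->timer (after => 1, cb => sub { $cv->broadcast });

   $cv->wait; # runs the event loop until ->broadcast was called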
…
   });

   $quit->wait;

=head1 BENCHMARKS

To give you an idea of the performance and overheads that AnyEvent adds
over the event loops themselves, and to give you an impression of the
speed of various event loops, I prepared some benchmarks.

=head2 BENCHMARKING ANYEVENT OVERHEAD

Here is a benchmark of various supported event models used natively and
through AnyEvent. The benchmark creates a lot of timers (with a zero
timeout) and I/O watchers (watching STDOUT, a pty, to become writable,
which it is), lets them fire exactly once and destroys them again.

Source code for this benchmark is found as F<eg/bench> in the AnyEvent
distribution.
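
Roughly, the benchmark does something like the following for each watcher
(a sketch only; the real code is in F<eg/bench>):

   my $t  = AnyEvent->timer (after => 0, cb => sub { });   # fires at once
   my $io = AnyEvent->io    (fh => \*STDOUT, poll => "w",  # always writable
                             cb => sub { });
   # ... run the event loop until both callbacks have fired once,
   # then destroy the watchers again:
   undef $t;
   undef $io;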

=head3 Explanation of the columns

I<watcher> is the number of event watchers created/destroyed. Since
different event models feature vastly different performances, each event
loop was given a number of watchers so that overall runtime is acceptable
and similar between the tested event loops (and to keep them from
crashing): Glib
…
signal the end of this phase.

I<destroy> is the time, in microseconds, that it takes to destroy a single
watcher.

=head3 Results

          name watchers bytes create invoke destroy comment
         EV/EV   400000   244   0.56   0.46    0.31 EV native interface
        EV/Any   100000   244   2.50   0.46    0.29 EV + AnyEvent watchers
    CoroEV/Any   100000   244   2.49   0.44    0.29 coroutines + Coro::Signal
      Perl/Any   100000   513   4.92   0.87    1.12 pure perl implementation
   Event/Event    16000   516  31.88  31.30    0.85 Event native interface
     Event/Any    16000   936  39.17  33.63    1.43 Event + AnyEvent watchers
      Glib/Any    16000  1357  98.22  12.41   54.00 quadratic behaviour
        Tk/Any     2000  1860  26.97  67.98   14.00 SEGV with >> 2000 watchers
     POE/Event     2000  6644 108.64 736.02   14.73 via POE::Loop::Event
    POE/Select     2000  6343  94.13 809.12  565.96 via POE::Loop::Select

=head3 Discussion

The benchmark does I<not> measure scalability of the event loop very
well. For example, a select-based event loop (such as the pure perl one)
can never compete with an event loop that uses epoll when the number of
file descriptors grows high. In this benchmark, all events become ready at
the same time, so select/poll-based implementations get an unnatural speed
boost.

Also, note that the number of watchers usually has a nonlinear effect on
overall speed, that is, creating twice as many watchers doesn't take twice
the time - usually it takes longer. This puts event loops tested with a
higher number of watchers at a disadvantage.

To put the range of results into perspective, consider that on the
benchmark machine, handling an event takes roughly 1600 CPU cycles with
EV, 3100 CPU cycles with AnyEvent's pure perl loop and almost 3000000 CPU
cycles with POE.

C<EV> is the sole leader regarding speed and memory use, which are both
maximal/minimal, respectively. Even when going through AnyEvent, it uses
far less memory than any other event loop and is still faster than Event
natively.

The pure perl implementation is hit in a few sweet spots (both the
constant timeout and the use of a single fd hit optimisations in the perl
interpreter and the backend itself). Nevertheless this shows that it
adds very little overhead in itself. Like any select-based backend its
performance becomes really bad with lots of file descriptors (and few of
them active), of course, but this was not the subject of this benchmark.

The C<Event> module has a relatively high setup and callback invocation
cost, but overall comes in third place.

C<Glib>'s memory usage is quite a bit higher, but it features a
faster callback invocation and overall ends up in the same class as
C<Event>. However, Glib scales extremely badly; doubling the number of
watchers increases the processing time by more than a factor of four,
making it completely unusable when using larger numbers of watchers
(note that only a single file descriptor was used in the benchmark, so
…
The C<Tk> adaptor works relatively well. The fact that it crashes with
more than 2000 watchers is a big setback, however, as correctness takes
precedence over speed. Nevertheless, its performance is surprising, as the
file descriptor is dup()ed for each watcher. This shows that the dup()
employed by some adaptors is not a big performance issue (it does incur a
hidden memory cost inside the kernel which is not reflected in the figures
above).

C<POE>, regardless of underlying event loop (whether using its pure
perl select-based backend or the Event module, the POE-EV backend
couldn't be tested because it wasn't working) shows abysmal performance
and memory usage: Watchers use almost 30 times as much memory as
EV watchers, and 10 times as much memory as Event (the high memory
requirements are caused by requiring a session for each watcher). Watcher
invocation speed is almost 900 times slower than with AnyEvent's pure perl
implementation. The design of the POE adaptor class in AnyEvent cannot
really account for this, as session creation overhead is small compared
to execution of the state machine, which is coded pretty optimally within
L<AnyEvent::Impl::POE>. POE simply seems to be abysmally slow.

=head3 Summary

=over 4

=item * Using EV through AnyEvent is faster than any other event loop
(even when used without AnyEvent), but most event loops have acceptable
performance with or without AnyEvent.

=item * The overhead AnyEvent adds is usually much smaller than the
overhead of the actual event loop; only with extremely fast event loops
such as EV does AnyEvent add significant overhead.

=item * You should avoid POE like the plague if you want performance or
reasonable memory usage.

=back

=head2 BENCHMARKING THE LARGE SERVER CASE

This benchmark actually benchmarks the event loop itself. It works by
creating a number of "servers": each server consists of a socketpair, a
timeout watcher that gets reset on activity (but never fires), and an I/O
watcher waiting for input on one side of the socket. Each time the socket
watcher reads a byte it will write that byte to a random other "server".

The effect is that there will be a lot of I/O watchers, only part of which
are active at any one point (so there is a constant number of active
fds for each loop iteration, but which fds these are is random). The
timeout is reset each time something is read because that reflects how
most timeouts work (and puts extra pressure on the event loops).
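
A sketch of a single such "server" (names and the timeout value are
illustrative, not taken from the actual benchmark source):

   use Socket;

   socketpair my $read_end, my $write_end,
      AF_UNIX, SOCK_STREAM, PF_UNSPEC
         or die "socketpair: $!";

   my %server;
   $server{timer} = AnyEvent->timer (after => 60, cb => sub { });
   $server{io}    = AnyEvent->io (fh => $read_end, poll => "r", cb => sub {
      sysread $read_end, my $token, 1
         or return;
      # reset the timeout and forward the token to a random other
      # server (the real benchmark keeps all write ends in an array
      # and picks one at random)
      $server{timer} = AnyEvent->timer (after => 60, cb => sub { });
   });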

In this benchmark, we use 10000 socketpairs (20000 sockets), of which 100
(1%) are active. This mirrors the activity of large servers with many
connections, most of which are idle at any one point in time.

Source code for this benchmark is found as F<eg/bench2> in the AnyEvent
distribution.

=head3 Explanation of the columns

I<sockets> is the number of sockets, and twice the number of "servers" (as
each server has a read and write socket end).

I<create> is the time it takes to create a socketpair (which is
nontrivial) and two watchers: an I/O watcher and a timeout watcher.

I<request>, the most important value, is the time it takes to handle a
single "request", that is, reading the token from the pipe and forwarding
it to another server. This includes deleting the old timeout and creating
a new one that moves the timeout into the future.

=head3 Results

   name sockets create  request
   EV     20000  69.01    11.16
   Perl   20000  75.28   112.76
   Event  20000 212.62   257.32
   Glib   20000 651.16  1896.30
   POE    20000 349.67 12317.24 uses POE::Loop::Event

=head3 Discussion

This benchmark I<does> measure scalability and overall performance of the
particular event loop.

EV is again fastest. Since it is using epoll on my system, the setup time
is relatively high, though.

Perl surprisingly comes second. It is much faster than the C-based event
loops Event and Glib.

Event suffers from high setup time as well (look at its code and you will
understand why). Callback invocation also has a high overhead compared to
the C<< $_->() for .. >>-style loop that the Perl event loop uses. Event
uses select or poll in basically all documented configurations.

Glib is hit hard by its quadratic behaviour w.r.t. many watchers. It
clearly fails to perform with many filehandles or in busy servers.

POE is still completely out of the picture, taking over 1000 times as long
as EV, and over 100 times as long as the Perl implementation, even though
it uses a C-based event loop in this case.

=head3 Summary

=over 4

=item * The pure perl implementation performs extremely well, considering
that it uses select.

=item * Avoid Glib or POE in large projects where performance matters.

=back

=head2 BENCHMARKING SMALL SERVERS

While event loops should scale (and select-based ones do not...) even to
large servers, most programs we (or I :) actually write have only a few
I/O watchers.

In this benchmark, I use the same benchmark program as in the large server
case, but it uses only eight "servers", of which three are active at any
one time. This should reflect performance for a small server relatively
well.

The columns are identical to the previous table.

=head3 Results

   name sockets create request
   EV        16  20.00    6.54
   Event     16  81.27   35.86
   Glib      16  32.63   15.48
   Perl      16  24.62  162.37
   POE       16 261.87  276.28 uses POE::Loop::Event

=head3 Discussion

The benchmark tries to test the performance of a typical small
server. While knowing how various event loops perform is interesting, keep
in mind that their overhead in this case is usually not as important, due
to the small absolute number of watchers.

EV is again fastest.

The C-based event loops Event and Glib come in second this time, as the
overhead of running an iteration is much smaller in C than in Perl (little
code to execute in the inner loop, and perl's function calling overhead is
high, and updating all the data structures is costly).

The pure perl event loop is much slower, but still competitive.

POE also performs much better in this case, but is still far behind the
others.

=head3 Summary

=over 4

=item * C-based event loops perform very well with small numbers of
watchers, as the management overhead dominates.

=back

=head1 FORK

Most event libraries are not fork-safe. The ones that are usually are