=head1 BENCHMARK

To give you an idea of the performance and overheads that AnyEvent adds
over the event loops themselves (and to give you an impression of the
speed of various event loops), here is a benchmark of various supported
event models natively and with AnyEvent. The benchmark creates a lot of
timers (with a zero timeout) and io watchers (watching STDOUT, a pty, to
become writable, which it is), lets them fire exactly once and destroys
them again.

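As a rough sketch (the watcher count and loop structure here are
illustrative, not the actual benchmark code), one measurement phase looks
something like this with the AnyEvent API:

```perl
use AnyEvent;

# Illustrative sketch of one benchmark phase: create many zero-timeout
# timers and write watchers on STDOUT, let each fire once, destroy them.

my $n  = 1000;                       # hypothetical watcher count
my $cb = sub { };                    # one closure shared by all watchers

my @watchers;
push @watchers, AnyEvent->timer (after => 0, cb => $cb)
   for 1 .. $n;                      # timers with a zero timeout
push @watchers, AnyEvent->io (fh => \*STDOUT, poll => 'w', cb => $cb)
   for 1 .. $n;                      # STDOUT is writable, so these fire

# ... run the event loop until every watcher has fired once ...

@watchers = ();                      # dropping the references destroys them
```
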
=head2 Explanation of the columns

I<watcher> is the number of event watchers created/destroyed. Since
different event models feature vastly different performance, each event
loop was given a number of watchers so that overall runtime is acceptable
and similar between the tested event loops (and to keep them from
crashing): Glib would probably take thousands of years if asked to process
the same number of watchers as EV in this benchmark.

I<bytes> is the number of bytes (as measured by the resident set size,
RSS) consumed by each watcher. This method of measuring captures both C
and Perl-based overheads.

I<create> is the time, in microseconds (millionths of a second), that it
takes to create a single watcher. The callback is a closure shared between
all watchers, to avoid adding memory overhead. That means closure creation
and memory usage are not included in the figures.

I<invoke> is the time, in microseconds, used to invoke a simple
callback. The callback simply counts down a Perl variable and, after it
has been invoked "watcher" times, it C<< ->broadcast >>s a condvar once to
signal the end of this phase.

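The countdown described above can be sketched like this (the variable
names are illustrative, not taken from the benchmark code):

```perl
use AnyEvent;

my $watcher_count = 1000;            # hypothetical total for this phase
my $left = $watcher_count;
my $done = AnyEvent->condvar;

# the shared callback: count down, broadcast once when all have fired
my $cb = sub { $done->broadcast unless --$left };

# ... create the watchers with $cb as their callback ...

$done->wait;                         # returns once the last watcher fired
```
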
I<destroy> is the time, in microseconds, that it takes to destroy a single
watcher.

=head2 Results

          name watcher bytes create invoke destroy comment
         EV/EV  400000   244   0.56   0.46    0.31 EV native interface
        EV/Any  100000   610   3.52   0.91    0.75 EV + AnyEvent watchers
    CoroEV/Any  100000   610   3.49   0.92    0.75 coroutines + Coro::Signal
      Perl/Any   16000   654   4.64   1.22    0.77 pure perl implementation
   Event/Event   16000   523  28.05  21.38    0.86 Event native interface
     Event/Any   16000   943  34.43  20.48    1.39 Event + AnyEvent watchers
      Glib/Any   16000  1357  96.99  12.55   55.51 quadratic behaviour
        Tk/Any    2000  1855  27.01  66.61   14.03 SEGV with >> 2000 watchers
     POE/Event    2000  6644 108.15 768.19   14.33 via POE::Loop::Event
    POE/Select    2000  6343  94.69 807.65  562.69 via POE::Loop::Select

=head2 Discussion

The benchmark does I<not> measure scalability of the event loop very
well. For example, a select-based event loop (such as the pure perl one)
can never compete with an event loop that uses epoll when the number of
file descriptors grows high. In this benchmark, only a single filehandle
is used (although some of the AnyEvent adaptors dup() its file descriptor
to work around bugs).

C<EV> is the sole leader regarding speed and memory use, which are both
maximal/minimal, respectively. Even when going through AnyEvent, there is
only one event loop that uses less memory (the C<Event> module natively),
and no faster event model, not even C<Event> natively.

The pure perl implementation is hit in a few sweet spots (both the
zero timeout and the use of a single fd hit optimisations in the perl
interpreter and the backend itself). Nevertheless, this shows that it
adds very little overhead in itself. Like any select-based backend its
performance becomes really bad with lots of file descriptors, of course,
but this was not the subject of this benchmark.

The C<Event> module has a relatively high setup and callback invocation
cost, but overall scores third place.

C<Glib>'s memory usage is quite a bit higher, but it features a faster
callback invocation and overall lands in the same class as C<Event>.

The C<Tk> adaptor works relatively well; the fact that it crashes with
more than 2000 watchers is a big setback, however, as correctness takes
precedence over speed. Nevertheless, its performance is surprising, as the
file descriptor is dup()ed for each watcher. This shows that the dup()
employed by some adaptors is not a big performance issue (it does incur a
hidden memory cost inside the kernel, though).

C<POE>, regardless of underlying event loop (whether using its pure perl
select-based backend or the Event module), shows abysmal performance and
memory usage: watchers use almost 30 times as much memory as EV watchers,
and 10 times as much memory as either Event or EV via AnyEvent. Watcher
invocation is almost 700 times slower than with AnyEvent's pure perl
implementation. The design of the POE adaptor class in AnyEvent cannot
really account for this, as session creation overhead is small compared
to execution of the state machine, which is coded pretty optimally within
L<AnyEvent::Impl::POE>. POE simply seems to be abysmally slow.

=head2 Summary

Using EV through AnyEvent is faster than any other event loop, but most
event loops have acceptable performance with or without AnyEvent.

The overhead AnyEvent adds is usually much smaller than the overhead of
the actual event loop; only with extremely fast event loops such as EV
does AnyEvent add significant overhead.

And you should simply avoid POE like the plague if you want performance or
reasonable memory usage.


=head1 FORK

Most event libraries are not fork-safe. The ones that are usually are