Comparing AnyEvent/lib/AnyEvent.pm (file contents):
Revision 1.75 by root, Fri Apr 25 07:49:39 2008 UTC vs.
Revision 1.77 by root, Fri Apr 25 09:00:37 2008 UTC

event models natively and with anyevent. The benchmark creates a lot of
timers (with a zero timeout) and io watchers (watching STDOUT, a pty, to
become writable, which it is), lets them fire exactly once and destroys
them again.

Rewriting the benchmark to use many different sockets instead of using
the same filehandle for all io watchers results in a much longer runtime
(socket creation is expensive), but qualitatively the same figures, so it
was not used.
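
The following is a hypothetical miniature of one benchmark iteration, not
the code that produced the numbers below; it merely sketches the kind of
watchers the benchmark creates, using AnyEvent's documented watcher API (the
condvar only exists to keep the sketch running until both watchers fired):

   use AnyEvent;

   my $done    = AnyEvent->condvar;
   my $pending = 2;

   # timer with a zero timeout - fires on the next event loop iteration
   my $timer; $timer = AnyEvent->timer (after => 0, cb => sub {
      undef $timer;                   # destroy the watcher after one invocation
      $done->send unless --$pending;
   });

   # io watcher waiting for STDOUT to become writable (which it already is)
   my $io; $io = AnyEvent->io (fh => \*STDOUT, poll => "w", cb => sub {
      undef $io;                      # io watchers fire repeatedly, so destroy it
      $done->send unless --$pending;
   });

   $done->recv;                       # run the event loop until both have fired
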
=head2 Explanation of the columns

I<watcher> is the number of event watchers created/destroyed. Since
different event models feature vastly different performances, each event
loop was given a number of watchers so that overall runtime is acceptable

   name         watchers bytes create invoke destroy comment
   EV/EV          400000   244   0.56   0.46    0.31 EV native interface
   EV/Any         100000   610   3.52   0.91    0.75 EV + AnyEvent watchers
   CoroEV/Any     100000   610   3.49   0.92    0.75 coroutines + Coro::Signal
   Perl/Any       100000   513   4.91   0.92    1.15 pure perl implementation
   Event/Event     16000   523  28.05  21.38    0.86 Event native interface
   Event/Any       16000   943  34.43  20.48    1.39 Event + AnyEvent watchers
   Glib/Any        16000  1357  96.99  12.55   55.51 quadratic behaviour
   Tk/Any           2000  1855  27.01  66.61   14.03 SEGV with >> 2000 watchers
   POE/Event        2000  6644 108.15 768.19   14.33 via POE::Loop::Event
file descriptors grows high. In this benchmark, only a single filehandle
is used (although some of the AnyEvent adaptors dup() its file descriptor
to work around bugs).
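
The dup() mentioned above can be done in plain Perl with the C<< ">&" >>
open mode; the snippet below only illustrates the mechanism and is not the
actual adaptor code:

   # duplicate STDOUT's file descriptor - the two handles now share the
   # underlying file, but have distinct descriptor numbers
   open my $dup, ">&", \*STDOUT
      or die "cannot dup STDOUT: $!";

   printf "original fd %d, duplicate fd %d\n", fileno (STDOUT), fileno ($dup);
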
C<EV> is the sole leader regarding speed and memory use, which are
both maximal/minimal, respectively. Even when going through AnyEvent,
there are only two event loops that use slightly less memory (the
C<Event> module natively and the pure perl backend), and no faster event
models, not even C<Event> natively.

The pure perl implementation is hit in a few sweet spots (both the
zero timeout and the use of a single fd hit optimisations in the perl
interpreter and the backend itself, and all watchers become ready at the
same time). Nevertheless this shows that it adds very little overhead in
itself. Like any select-based backend its performance becomes really bad
with lots of file descriptors (and few of them active), of course, but
this was not the subject of this benchmark.
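
To illustrate why any select-based backend degrades with many (mostly
idle) file descriptors, here is a stand-alone sketch (not AnyEvent code):
every call to C<select> has to build and scan one bit per watched handle,
and the kernel scans them all as well, so the cost grows with the number
of watchers rather than with the number of ready ones:

   use strict;
   use Socket;

   # create some sockets to watch; both ends are kept so none is readable
   my (@handles, @peers);
   for (1 .. 200) {
      socketpair my $r, my $w, AF_UNIX, SOCK_STREAM, PF_UNSPEC
         or die "socketpair: $!";
      push @handles, $r;
      push @peers,   $w;
   }

   my $rin = '';
   vec ($rin, fileno $_, 1) = 1 for @handles;      # O(#watchers) work per loop

   my $rout   = $rin;
   my $nfound = select ($rout, undef, undef, 0);   # kernel scans all 200 fds too
   print "$nfound of ", scalar @handles, " handles ready\n";
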
The C<Event> module has a relatively high setup and callback invocation
cost, but overall scores in third place.

C<Glib>'s memory usage is quite a bit higher, but it features a
