--- AnyEvent/lib/AnyEvent.pm	2008/04/25 07:49:39	1.75
+++ AnyEvent/lib/AnyEvent.pm	2008/04/25 09:00:37	1.77
@@ -873,6 +873,11 @@
 become writable, which it is), lets them fire exactly once and destroys
 them again.
 
+Rewriting the benchmark to use many different sockets instead of using
+the same filehandle for all io watchers results in a much longer runtime
+(socket creation is expensive), but qualitatively the same figures, so it
+was not used.
+
 =head2 Explanation of the columns
 
 I<watcher> is the number of event watchers created/destroyed. Since
@@ -905,7 +910,7 @@
    EV/EV      400000  244  0.56  0.46  0.31 EV native interface
    EV/Any     100000  610  3.52  0.91  0.75 EV + AnyEvent watchers
    CoroEV/Any 100000  610  3.49  0.92  0.75 coroutines + Coro::Signal
-   Perl/Any    16000  654  4.64  1.22  0.77 pure perl implementation
+   Perl/Any   100000  513  4.91  0.92  1.15 pure perl implementation
    Event/Event 16000  523 28.05 21.38  0.86 Event native interface
    Event/Any   16000  943 34.43 20.48  1.39 Event + AnyEvent watchers
    Glib/Any    16000 1357 96.99 12.55 55.51 quadratic behaviour
@@ -923,16 +928,18 @@
 to work around bugs).
 
 C<EV> is the sole leader regarding speed and memory use, which are both
-maximal/minimal, respectively. Even when going through AnyEvent, there is
-only one event loop that uses less memory (the C<Event> module natively), and
-no faster event model, not event C<EV> natively.
+maximal/minimal, respectively. Even when going through AnyEvent, there are
+only two event loops that use slightly less memory (the C<Event> module
+natively and the pure perl backend), and no faster event models, not even
+C<EV> natively.
 
 The pure perl implementation is hit in a few sweet spots (both the zero
 timeout and the use of a single fd hit optimisations in the perl
-interpreter and the backend itself). Nevertheless tis shows that it
-adds very little overhead in itself. Like any select-based backend its
-performance becomes really bad with lots of file descriptors, of course,
-but this was not subjetc of this benchmark.
+interpreter and the backend itself, and all watchers become ready at the
+same time). Nevertheless this shows that it adds very little overhead in
+itself. Like any select-based backend its performance becomes really bad
+with lots of file descriptors (and few of them active), of course, but
+this was not subject of this benchmark.
 
 The C<Event> module has a relatively high setup and callback invocation
 cost, but overall scores on the third place.
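The watcher pattern this hunk describes (many io watchers sharing one always-writable filehandle, each firing exactly once before being destroyed) can be sketched roughly as below. This is a hypothetical reduced illustration, not the actual benchmark script; the watcher count and the use of F</dev/null> are assumptions, while the C<< AnyEvent->io >>, C<< AnyEvent->condvar >> and C<send>/C<recv> calls are the standard AnyEvent API.

```perl
#!/usr/bin/perl
# Hypothetical reduced sketch of the benchmarked pattern, NOT the real
# benchmark: create many io watchers on the same filehandle, let each
# fire exactly once, then destroy it.
use strict;
use warnings;
use AnyEvent;

my $n     = 100;                 # assumed count, far below the benchmark's
my $fired = 0;
my $done  = AnyEvent->condvar;

# /dev/null is always writable, so every watcher becomes ready at once -
# the "sweet spot" for the pure perl select-based backend.
open my $fh, ">", "/dev/null" or die "open /dev/null: $!";

my @watchers;
for my $i (0 .. $n - 1) {
    $watchers[$i] = AnyEvent->io (
        fh   => $fh,
        poll => "w",
        cb   => sub {
            undef $watchers[$i];          # destroy watcher: fires only once
            $done->send if ++$fired == $n;
        },
    );
}

$done->recv;                              # run the event loop until done
print "fired: $fired\n";
```

Using one shared filehandle keeps setup cost out of the measurement; as the added paragraph notes, per-watcher sockets give qualitatively the same figures but spend most of the runtime on socket creation.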