--- AnyEvent/lib/AnyEvent.pm 2008/04/25 07:47:22 1.74
+++ AnyEvent/lib/AnyEvent.pm 2008/04/25 09:06:27 1.78
@@ -138,7 +138,7 @@
 my variables are only visible after the statement in which they are
 declared.
 
-=head2 IO WATCHERS
+=head2 I/O WATCHERS
 
 You can create an I/O watcher by calling the C<< AnyEvent->io >> method
 with the following mandatory key-value pairs as arguments:
@@ -708,7 +708,7 @@
 
 =head1 EXAMPLE PROGRAM
 
-The following program uses an IO watcher to read data from STDIN, a timer
+The following program uses an I/O watcher to read data from STDIN, a timer
 to display a message once per second, and a condition variable to quit the
 program when the user enters quit:
@@ -869,10 +869,15 @@
 over the event loops themselves (and to give you an impression of the
 speed of various event loops), here is a benchmark of various supported
 event models natively and with anyevent. The benchmark creates a lot of
-timers (with a zero timeout) and io watchers (watching STDOUT, a pty, to
+timers (with a zero timeout) and I/O watchers (watching STDOUT, a pty, to
 become writable, which it is), lets them fire exactly once and destroys
 them again.
 
+Rewriting the benchmark to use many different sockets instead of using
+the same filehandle for all I/O watchers results in a much longer runtime
+(socket creation is expensive), but qualitatively the same figures, so it
+was not used.
+
 =head2 Explanation of the columns
 
 I<watcher> is the number of event watchers created/destroyed. Since
@@ -901,17 +906,17 @@
 
 =head2 Results
 
-   name        watcher bytes create invoke destroy comment
-   EV/EV        400000   244   0.56   0.46    0.31 EV native interface
-   EV/Any       100000   610   3.52   0.91    0.75 EV + AnyEvent watchers
-   CoroEV/Any   100000   610   3.49   0.92    0.75 coroutines + Coro::Signal
-   Perl/Any      16000   654   4.64   1.22    0.77 pure perl implementation
-   Event/Event   16000   523  28.05  21.38    0.86 Event native interface
-   Event/Any     16000   943  34.43  20.48    1.39 Event + AnyEvent watchers
-   Glib/Any      16000  1357  96.99  12.55   55.51 quadratic behaviour
-   Tk/Any         2000  1855  27.01  66.61   14.03 SEGV with >> 2000 watchers
-   POE/Event      2000  6644 108.15 768.19   14.33 via POE::Loop::Event
-   POE/Select     2000  6343  94.69 807.65  562.69 via POE::Loop::Select
+   name       watchers bytes create invoke destroy comment
+   EV/EV        400000   244   0.56   0.46    0.31 EV native interface
+   EV/Any       100000   610   3.52   0.91    0.75 EV + AnyEvent watchers
+   CoroEV/Any   100000   610   3.49   0.92    0.75 coroutines + Coro::Signal
+   Perl/Any     100000   513   4.91   0.92    1.15 pure perl implementation
+   Event/Event   16000   523  28.05  21.38    0.86 Event native interface
+   Event/Any     16000   943  34.43  20.48    1.39 Event + AnyEvent watchers
+   Glib/Any      16000  1357  96.99  12.55   55.51 quadratic behaviour
+   Tk/Any         2000  1855  27.01  66.61   14.03 SEGV with >> 2000 watchers
+   POE/Event      2000  6644 108.15 768.19   14.33 via POE::Loop::Event
+   POE/Select     2000  6343  94.69 807.65  562.69 via POE::Loop::Select
 
 =head2 Discussion
 
@@ -923,16 +928,18 @@
 to worka round bugs).
 
 C<EV> is the sole leader regarding speed and memory use, which are both
-maximal/minimal, respectively. Even when going through AnyEvent, there is
-only one event loop that uses less memory (the C<Event> module natively), and
-no faster event model, not event C<Event> natively.
+maximal/minimal, respectively. Even when going through AnyEvent, there are
+only two event loops that use slightly less memory (the C<Event> module
+natively and the pure perl backend), and no faster event models, not even
+C<Event> natively.
 
 The pure perl implementation is hit in a few sweet spots (both the zero
 timeout and the use of a single fd hit optimisations in the perl
-interpreter and the backend itself). Nevertheless tis shows that it
-adds very little overhead in itself. Like any select-based backend its
-performance becomes really bad with lots of file descriptors, of course,
-but this was not subjetc of this benchmark.
+interpreter and the backend itself, and all watchers become ready at the
+same time). Nevertheless this shows that it adds very little overhead in
+itself. Like any select-based backend its performance becomes really bad
+with lots of file descriptors (and few of them active), of course, but
+this was not subject of this benchmark.
 
 The C<Event> module has a relatively high setup and callback invocation
 cost, but overall scores on the third place.
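
The hunks above only adjust the description of the benchmark; the benchmark program itself is not part of this diff. As a rough sketch of the pattern being measured (illustrative code, not the distribution's benchmark; the watcher count and the condvar-based termination are invented for this example), creating a batch of zero-timeout timers and STDOUT write watchers, letting each fire exactly once and destroying it again could look like this:

   use AnyEvent;

   my $n       = 10_000;             # watchers of each kind (illustrative)
   my $pending = 2 * $n;             # callback invocations still outstanding
   my $done    = AnyEvent->condvar;

   my %watcher;                      # keep references, or the watchers are cancelled
   for my $i (1 .. $n) {
      # timer with a zero timeout: already due, fires on the next loop iteration
      $watcher{"t$i"} = AnyEvent->timer (after => 0, cb => sub {
         delete $watcher{"t$i"};     # destroy after the single invocation
         $done->broadcast unless --$pending;
      });

      # I/O watcher waiting for STDOUT to become writable (which it already is)
      $watcher{"w$i"} = AnyEvent->io (fh => \*STDOUT, poll => "w", cb => sub {
         delete $watcher{"w$i"};     # fire exactly once, then destroy
         $done->broadcast unless --$pending;
      });
   }

   $done->wait;                      # run the event loop until every watcher has fired

Destroying each watcher from inside its own callback is what limits it to a single invocation; a write watcher on an already writable filehandle would otherwise keep firing on every iteration of the event loop.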
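Similarly, the EXAMPLE PROGRAM hunk only rewords the introductory sentence; the program it introduces lies outside the diff context. A minimal sketch of such a program, assuming the C<< AnyEvent->io >>, C<< AnyEvent->timer >> and C<< AnyEvent->condvar >> interfaces described in the documentation (an illustration, not the program from the manpage), might look like this:

   use AnyEvent;

   my $quit = AnyEvent->condvar;

   # I/O watcher: invoke the callback whenever STDIN becomes readable
   my $stdin = AnyEvent->io (fh => \*STDIN, poll => "r", cb => sub {
      chomp (my $line = <STDIN>);
      print "you entered: $line\n";
      $quit->broadcast if $line eq "quit";   # leave the event loop
   });

   # timer watcher: AnyEvent timers fire only once, so re-arm in the
   # callback to get a message once per second
   my $timer;
   sub arm_timer {
      $timer = AnyEvent->timer (after => 1, cb => sub {
         print "timeout\n";
         arm_timer ();
      });
   }
   arm_timer ();

   $quit->wait;   # run the event loop until the user enters "quit"

The watchers are kept in variables because an AnyEvent watcher is cancelled as soon as the last reference to it goes away; later AnyEvent releases added C<send>/C<recv> as the preferred names for the condvar methods used here.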