/cvs/AnyEvent-Fork-Pool/Pool.pm

Comparing AnyEvent-Fork-Pool/Pool.pm (file contents):
Revision 1.1 by root, Thu Apr 18 14:24:01 2013 UTC vs.
Revision 1.7 by root, Sun Apr 21 12:03:02 2013 UTC

6 6
7 use AnyEvent; 7 use AnyEvent;
8 use AnyEvent::Fork::Pool; 8 use AnyEvent::Fork::Pool;
9 # use AnyEvent::Fork is not needed 9 # use AnyEvent::Fork is not needed
10 10
11 # all parameters with default values 11 # all possible parameters shown, with default values
12 my $pool = new AnyEvent::Fork::Pool 12 my $pool = AnyEvent::Fork
13 "MyWorker::run", 13 ->new
14 ->require ("MyWorker")
15 ->AnyEvent::Fork::Pool::run (
16 "MyWorker::run", # the worker function
14 17
15 # pool management 18 # pool management
16 min => 0, # minimum # of processes
17 max => 8, # maximum # of processes 19 max => 4, # absolute maximum # of processes
20 idle => 0, # minimum # of idle processes
21 load => 2, # queue at most this number of jobs per process
18 busy_time => 0, # wait this before starting a new process 22 start => 0.1, # wait this many seconds before starting a new process
19 max_idle => 1, # wait this before killing an idle process 23 stop => 10, # wait this many seconds before stopping an idle process
20 idle_time => 1, # at most this many idle processes 24 on_destroy => (my $finish = AE::cv), # called when object is destroyed
21 25
22 # template process
23 template => AnyEvent::Fork->new, # the template process to use
24 require => [MyWorker::], # module(s) to load
25 eval => "# perl code to execute in template",
26 on_destroy => (my $finish = AE::cv),
27
28 # parameters passed to AnyEvent::Fork::RPC 26 # parameters passed to AnyEvent::Fork::RPC
29 async => 0, 27 async => 0,
30 on_error => sub { die "FATAL: $_[0]\n" }, 28 on_error => sub { die "FATAL: $_[0]\n" },
31 on_event => sub { my @ev = @_ }, 29 on_event => sub { my @ev = @_ },
32 init => "MyWorker::init", 30 init => "MyWorker::init",
33 serialiser => $AnyEvent::Fork::RPC::STRING_SERIALISER, 31 serialiser => $AnyEvent::Fork::RPC::STRING_SERIALISER,
34 ; 32 );
35 33
36 for (1..10) { 34 for (1..10) {
37 $pool->call (doit => $_, sub { 35 $pool->(doit => $_, sub {
38 print "MyWorker::run returned @_\n"; 36 print "MyWorker::run returned @_\n";
39 }); 37 });
40 } 38 }
41 39
42 undef $pool; 40 undef $pool;
50pool of processes that handles jobs. 48pool of processes that handles jobs.
51 49
52Understanding of L<AnyEvent::Fork> is helpful but not critical to be able 50Understanding of L<AnyEvent::Fork> is helpful but not critical to be able
53to use this module, but a thorough understanding of L<AnyEvent::Fork::RPC> 51to use this module, but a thorough understanding of L<AnyEvent::Fork::RPC>
54is, as it defines the actual API that needs to be implemented in the 52is, as it defines the actual API that needs to be implemented in the
55children. 53worker processes.
56 54
57=head1 EXAMPLES 55=head1 EXAMPLES
58 56
59=head1 API 57=head1 PARENT USAGE
58
59To create a pool, you first have to create a L<AnyEvent::Fork> object -
60this object becomes your template process. Whenever a new worker process
61is needed, it is forked from this template process. Then you need to
62"hand off" this template process to the C<AnyEvent::Fork::Pool> module by
63calling its run method on it:
64
65 my $template = AnyEvent::Fork
66 ->new
67 ->require ("SomeModule", "MyWorkerModule");
68
69 my $pool = $template->AnyEvent::Fork::Pool::run ("MyWorkerModule::myfunction");
70
71The pool "object" is not a regular Perl object, but a code reference that
72you can call and that works roughly like calling the worker function
73directly, except that it returns nothing but instead you need to specify a
74callback to be invoked once results are in:
75
76 $pool->(1, 2, 3, sub { warn "myfunction(1,2,3) returned @_" });
60 77
61=over 4 78=over 4
62 79
63=cut 80=cut
64 81
65package AnyEvent::Fork::Pool; 82package AnyEvent::Fork::Pool;
66 83
67use common::sense; 84use common::sense;
68 85
86use Scalar::Util ();
87
69use Guard (); 88use Guard ();
89use Array::Heap ();
70 90
71use AnyEvent; 91use AnyEvent;
72use AnyEvent::Fork; # we don't actually depend on it, this is for convenience 92use AnyEvent::Fork; # we don't actually depend on it, this is for convenience
73use AnyEvent::Fork::RPC; 93use AnyEvent::Fork::RPC;
74 94
95# these are used for the first and last argument of events
96# in the hope of not colliding. yes, I don't like it either,
97# but didn't come up with an obviously better alternative.
98my $magic0 = ':t6Z@HK1N%Dx@_7?=~-7NQgWDdAs6a,jFN=wLO0*jD*1%P';
99my $magic1 = '<~53rexz.U`!]X[A235^"fyEoiTF\T~oH1l/N6+Djep9b~bI9`\1x%B~vWO1q*';
100
75our $VERSION = 0.1; 101our $VERSION = 0.1;
76 102
77=item my $rpc = new AnyEvent::Fork::RPC::pool $function, [key => value...] 103=item my $pool = AnyEvent::Fork::Pool::run $fork, $function, [key => value...]
104
105The traditional way to call the pool creation function. But it is way
106cooler to call it in the following way:
107
108=item my $pool = $fork->AnyEvent::Fork::Pool::run ($function, [key => value...])
109
110Creates a new pool object with the specified C<$function> as function
111(name) to call for each request. The pool uses the C<$fork> object as the
112template when creating worker processes.
113
114You can supply your own template process, or tell C<AnyEvent::Fork::Pool>
115to create one.
116
117A relatively large number of key/value pairs can be specified to influence
118the behaviour. They are grouped into the categories "pool management",
119"template process" and "rpc parameters".
78 120
79=over 4 121=over 4
80 122
81=item on_error => $cb->($msg) 123=item Pool Management
82 124
83Called on (fatal) errors, with a descriptive (hopefully) message. If 125The pool consists of a certain number of worker processes. These options
84this callback is not provided, but C<on_event> is, then the C<on_event> 126decide how many of these processes exist and when they are started and
85callback is called with the first argument being the string C<error>, 127stopped.
86followed by the error message.
87 128
88If neither handler is provided it prints the error to STDERR and will 129The worker pool is dynamically resized, according to (perceived :)
89start failing badly. 130load. The minimum size is given by the C<idle> parameter and the maximum
131size is given by the C<max> parameter. A new worker is started every
132C<start> seconds at most, and an idle worker is stopped at most every
133C<stop> seconds.
90 134
91=item on_event => $cb->(...) 135You can specify the amount of jobs sent to a worker concurrently using the
136C<load> parameter.
92 137
93Called for every call to the C<AnyEvent::Fork::RPC::event> function in the 138=over 4
94child, with the arguments of that function passed to the callback.
95 139
96Also called on errors when no C<on_error> handler is provided. 140=item idle => $count (default: 0)
97 141
98=item on_destroy => $cb->() 142The minimum amount of idle processes in the pool - when there are fewer
143than this many idle workers, C<AnyEvent::Fork::Pool> will try to start new
144ones, subject to the limits set by C<max> and C<start>.
99 145
100Called when the C<$rpc> object has been destroyed and all requests have 146This is also the initial amount of workers in the pool. The default of
101been successfully handled. This is useful when you queue some requests and 147zero means that the pool starts empty and can shrink back to zero workers
102want the child to go away after it has handled them. The problem is that 148over time.
103the parent must not exit either until all requests have been handled, and
104this can be accomplished by waiting for this callback.
105 149
106=item init => $function (default none) 150=item max => $count (default: 4)
107 151
108When specified (by name), this function is called in the child as the very 152The maximum number of processes in the pool, in addition to the template
109first thing when taking over the process, with all the arguments normally 153process. C<AnyEvent::Fork::Pool> will never have more than this number of
110passed to the C<AnyEvent::Fork::run> function, except the communications 154worker processes, although there can be more temporarily when a worker is
111socket. 155shut down and hasn't exited yet.
112 156
113It can be used to do one-time things in the child such as storing passed 157=item load => $count (default: 2)
114parameters or opening database connections.
115 158
116It is called very early - before the serialisers are created or the 159The maximum number of concurrent jobs sent to a single worker process.
117C<$function> name is resolved into a function reference, so it could be 160
118used to load any modules that provide the serialiser or function. It can 161Jobs that cannot be sent to a worker immediately (because all workers are
119not, however, create events. 162busy) will be queued until a worker is available.
163
164Setting this low improves latency. For example, at C<1>, every job that
165is sent to a worker is sent to a completely idle worker that doesn't run
166any other jobs. The downside is that throughput is reduced - a worker that
167finishes a job needs to wait for a new job from the parent.
168
169The default of C<2> is usually a good compromise.
170
171=item start => $seconds (default: 0.1)
172
173When there are fewer than C<idle> workers (or all workers are completely
173busy), a timer is started. If the timer elapses and there are still
175jobs that cannot be queued to a worker, a new worker is started.
176
177This sets the minimum time that all workers must be busy before a new
178worker is started. Or, put differently, the minimum delay between starting
179new workers.
180
181The delay is small by default, which means new workers will be started
182relatively quickly. A delay of C<0> is possible, and ensures that the pool
183will grow as quickly as possible under load.
184
185Non-zero values are useful to avoid "exploding" a pool just because a lot
186of jobs are queued in an instant.
187
188Higher values are often useful to improve efficiency at the cost of
189latency - when fewer processes can do the job over time, starting more and
190more is not necessarily going to help.
191
192=item stop => $seconds (default: 10)
193
194When a worker has no jobs to execute, it becomes idle. An idle worker that
195hasn't executed a job within this amount of time will be stopped, unless
196the other parameters say otherwise.
197
198Setting this to a very high value means that workers stay around longer,
199even when they have nothing to do, which can be good as they don't have to
200be started on the next load spike again.
201
202Setting this to a lower value can be useful to avoid memory or simply
203process table wastage.
204
205Usually, setting this to a time longer than the time between load spikes
206is best - if you expect a lot of requests every minute and little work
207in between, setting this to longer than a minute avoids having to stop
208and start workers. On the other hand, you have to ask yourself if letting
209workers run idle is a good use of your resources. Try to find a good
210balance between resource usage of your workers and the time to start new
211workers - the process created by L<AnyEvent::Fork> itself is fast at
212creating workers while not using much memory for them, so most of the
213overhead is likely from your own code.
214
215=item on_destroy => $callback->() (default: none)
216
217When a pool object goes out of scope, the outstanding requests are still
218handled till completion. Only after handling all jobs will the workers
219be destroyed (and also the template process if it isn't referenced
220otherwise).
221
222To find out when a pool I<really> has finished its work, you can set this
223callback, which will be called when the pool has been destroyed.
224
225=back
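As a sketch of how C<on_destroy> is typically combined with a condvar to wait for the pool to drain (the C<MyWorker> module and its C<run> function are assumptions taken from the synopsis, not part of this API):

```perl
use AnyEvent;
use AnyEvent::Fork;
use AnyEvent::Fork::Pool;

# condvars are callable, so the condvar itself can serve as the callback
my $finish = AE::cv;

my $pool = AnyEvent::Fork
   ->new
   ->require ("MyWorker")            # hypothetical worker module
   ->AnyEvent::Fork::Pool::run (
      "MyWorker::run",
      on_destroy => $finish,
   );

$pool->($_, sub { print "result: @_\n" })
   for 1..10;

undef $pool;   # accept no new jobs; queued jobs are still handled
$finish->recv; # returns only after all outstanding jobs completed
```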
226
227=item AnyEvent::Fork::RPC Parameters
228
229These parameters are all passed more or less directly to
230L<AnyEvent::Fork::RPC>. They are only briefly mentioned here, for
231their full documentation please refer to the L<AnyEvent::Fork::RPC>
232documentation. Also, the default values mentioned here are only documented
233as a best effort - the L<AnyEvent::Fork::RPC> documentation is binding.
234
235=over 4
120 236
121=item async => $boolean (default: 0) 237=item async => $boolean (default: 0)
122 238
123The default server used in the child does all I/O blockingly, and only 239Whether to use the synchronous or asynchronous RPC backend.
124allows a single RPC call to execute concurrently.
125 240
126Setting C<async> to a true value switches to another implementation that 241=item on_error => $callback->($message) (default: die with message)
127uses L<AnyEvent> in the child and allows multiple concurrent RPC calls.
128 242
129The actual API in the child is documented in the section that describes 243The callback to call on any (fatal) errors.
130the calling semantics of the returned C<$rpc> function.
131 244
132If you want to pre-load the actual back-end modules to enable memory 245=item on_event => $callback->(...) (default: C<sub { }>, unlike L<AnyEvent::Fork::RPC>)
133sharing, then you should load C<AnyEvent::Fork::RPC::Sync> for
134synchronous, and C<AnyEvent::Fork::RPC::Async> for asynchronous mode.
135 246
136If you use a template process and want to fork both sync and async 247The callback to invoke on events.
137children, then it is permissible to load both modules.
138 248
139=item serialiser => $string (default: '(sub { pack "(w/a*)*", @_ }, sub { unpack "(w/a*)*", shift })') 249=item init => $initfunction (default: none)
140 250
141All arguments, result data and event data have to be serialised to be 251The function to call in the child, once before handling requests.
142transferred between the processes. For this, they have to be frozen and
143thawed in both parent and child processes.
144 252
145By default, only octet strings can be passed between the processes, which 253=item serialiser => $serialiser (default: $AnyEvent::Fork::RPC::STRING_SERIALISER)
146is reasonably fast and efficient.
147 254
148For more complicated use cases, you can provide your own freeze and thaw 255The serialiser to use.
149functions, by specifying a string with perl source code. It's supposed to
150return two code references when evaluated: the first receives a list of
151perl values and must return an octet string. The second receives the octet
152string and must return the original list of values.
153
154If you need an external module for serialisation, then you can either
155pre-load it into your L<AnyEvent::Fork> process, or you can add a C<use>
156or C<require> statement into the serialiser string. Or both.
157 256
158=back 257=back
159 258
160See the examples section earlier in this document for some actual 259=back
161examples.
162 260
163=cut 261=cut
164 262
165sub new { 263sub run {
166 my ($self, $function, %arg) = @_; 264 my ($template, $function, %arg) = @_;
167 265
168 my $serialiser = delete $arg{serialiser} || $STRING_SERIALISER; 266 my $max = $arg{max} || 4;
267 my $idle = $arg{idle} || 0,
268 my $load = $arg{load} || 2,
269 my $start = $arg{start} || 0.1,
270 my $stop = $arg{stop} || 10,
169 my $on_event = delete $arg{on_event}; 271 my $on_event = $arg{on_event} || sub { },
170 my $on_error = delete $arg{on_error};
171 my $on_destroy = delete $arg{on_destroy}; 272 my $on_destroy = $arg{on_destroy};
273
274 my @rpc = (
275 async => $arg{async},
276 init => $arg{init},
277 serialiser => delete $arg{serialiser},
278 on_error => $arg{on_error},
172 279 );
173 # default for on_error is to on_event, if specified
174 $on_error ||= $on_event
175 ? sub { $on_event->(error => shift) }
176 : sub { die "AnyEvent::Fork::RPC: uncaught error: $_[0].\n" };
177 280
178 # default for on_event is to raise an error 281 my (@pool, @queue, $nidle, $start_w, $stop_w, $shutdown);
179 $on_event ||= sub { $on_error->("event received, but no on_event handler") }; 282 my ($start_worker, $stop_worker, $want_start, $want_stop, $scheduler);
180 283
181 my ($f, $t) = eval $serialiser; die $@ if $@; 284 my $destroy_guard = Guard::guard {
285 $on_destroy->()
286 if $on_destroy;
287 };
182 288
183 my (@rcb, %rcb, $fh, $shutdown, $wbuf, $ww); 289 $template
184 my ($rlen, $rbuf, $rw) = 512 - 16; 290 ->require ("AnyEvent::Fork::RPC::" . ($arg{async} ? "Async" : "Sync"))
185 291 ->eval ('
186 my $wcb = sub { 292 my ($magic0, $magic1) = @_;
187 my $len = syswrite $fh, $wbuf; 293 sub AnyEvent::Fork::Pool::retire() {
188 294 AnyEvent::Fork::RPC::event $magic0, "quit", $magic1;
189 unless (defined $len) {
190 if ($! != Errno::EAGAIN && $! != Errno::EWOULDBLOCK) {
191 undef $rw; undef $ww; # it ends here
192 $on_error->("$!");
193 } 295 }
194 } 296 ', $magic0, $magic1)
195
196 substr $wbuf, 0, $len, "";
197
198 unless (length $wbuf) {
199 undef $ww;
200 $shutdown and shutdown $fh, 1;
201 }
202 }; 297 ;
203 298
204 my $module = "AnyEvent::Fork::RPC::" . ($arg{async} ? "Async" : "Sync"); 299 $start_worker = sub {
300 my $proc = [0, 0, undef]; # load, index, rpc
205 301
206 $self->require ($module) 302 $proc->[2] = $template
207 ->send_arg ($function, $arg{init}, $serialiser) 303 ->fork
208 ->run ("$module\::run", sub { 304 ->AnyEvent::Fork::RPC::run ($function,
209 $fh = shift; 305 @rpc,
306 on_event => sub {
307 if (@_ == 3 && $_[0] eq $magic0 && $_[2] eq $magic1) {
308 $destroy_guard if 0; # keep it alive
210 309
211 my ($id, $len); 310 $_[1] eq "quit" and $stop_worker->($proc);
212 $rw = AE::io $fh, 0, sub {
213 $rlen = $rlen * 2 + 16 if $rlen - 128 < length $rbuf;
214 $len = sysread $fh, $rbuf, $rlen - length $rbuf, length $rbuf;
215
216 if ($len) {
217 while (8 <= length $rbuf) {
218 ($id, $len) = unpack "LL", $rbuf;
219 8 + $len <= length $rbuf
220 or last; 311 return;
221
222 my @r = $t->(substr $rbuf, 8, $len);
223 substr $rbuf, 0, 8 + $len, "";
224
225 if ($id) {
226 if (@rcb) {
227 (shift @rcb)->(@r);
228 } elsif (my $cb = delete $rcb{$id}) {
229 $cb->(@r);
230 } else {
231 undef $rw; undef $ww;
232 $on_error->("unexpected data from child");
233 } 312 }
234 } else { 313
235 $on_event->(@r); 314 &$on_event;
236 } 315 },
237 } 316 )
238 } elsif (defined $len) { 317 ;
239 undef $rw; undef $ww; # it ends here
240 318
241 if (@rcb || %rcb) { 319 ++$nidle;
242 $on_error->("unexpected eof"); 320 Array::Heap::push_heap_idx @pool, $proc;
321
322 Scalar::Util::weaken $proc;
323 };
324
325 $stop_worker = sub {
326 my $proc = shift;
327
328 $proc->[0]
329 or --$nidle;
330
331 Array::Heap::splice_heap_idx @pool, $proc->[1]
332 if defined $proc->[1];
333
334 @$proc = 0; # tell others to leave it be
335 };
336
337 $want_start = sub {
338 undef $stop_w;
339
340 $start_w ||= AE::timer $start, $start, sub {
341 if (($nidle < $idle || @queue) && @pool < $max) {
342 $start_worker->();
343 $scheduler->();
243 } else { 344 } else {
244 $on_destroy->(); 345 undef $start_w;
245 }
246 } elsif ($! != Errno::EAGAIN && $! != Errno::EWOULDBLOCK) {
247 undef $rw; undef $ww; # it ends here
248 $on_error->("read: $!");
249 } 346 }
250 }; 347 };
251
252 $ww ||= AE::io $fh, 1, $wcb;
253 }); 348 };
254 349
350 $want_stop = sub {
351 $stop_w ||= AE::timer $stop, $stop, sub {
352 $stop_worker->($pool[0])
353 if $nidle;
354
355 undef $stop_w
356 if $nidle <= $idle;
357 };
358 };
359
360 $scheduler = sub {
361 if (@queue) {
362 while (@queue) {
363 @pool or $start_worker->();
364
365 my $proc = $pool[0];
366
367 if ($proc->[0] < $load) {
368 # found free worker, increase load
369 unless ($proc->[0]++) {
370 # worker became busy
371 --$nidle
372 or undef $stop_w;
373
374 $want_start->()
375 if $nidle < $idle && @pool < $max;
376 }
377
378 Array::Heap::adjust_heap_idx @pool, 0;
379
380 my $job = shift @queue;
381 my $ocb = pop @$job;
382
383 $proc->[2]->(@$job, sub {
384 # reduce load
385 --$proc->[0] # worker still busy?
386 or ++$nidle > $idle # not too many idle processes?
387 or $want_stop->();
388
389 Array::Heap::adjust_heap_idx @pool, $proc->[1]
390 if defined $proc->[1];
391
392 &$ocb;
393
394 $scheduler->();
395 });
396 } else {
397 $want_start->()
398 unless @pool >= $max;
399
400 last;
401 }
402 }
403 } elsif ($shutdown) {
404 @pool = ();
405 undef $start_w;
406 undef $start_worker; # frees $destroy_guard reference
407
408 $stop_worker->($pool[0])
409 while $nidle;
410 }
411 };
412
255 my $guard = Guard::guard { 413 my $shutdown_guard = Guard::guard {
256 $shutdown = 1; 414 $shutdown = 1;
257 $ww ||= $fh && AE::io $fh, 1, $wcb; 415 $scheduler->();
416 };
417
418 $start_worker->()
419 while @pool < $idle;
420
421 sub {
422 $shutdown_guard if 0; # keep it alive
423
424 $start_worker->()
425 unless @pool;
426
427 push @queue, [@_];
428 $scheduler->();
258 }; 429 }
259
260 my $id;
261
262 $arg{async}
263 ? sub {
264 $id = ($id == 0xffffffff ? 0 : $id) + 1;
265 $id = ($id == 0xffffffff ? 0 : $id) + 1 while exists $rcb{$id}; # rarely loops
266
267 $rcb{$id} = pop;
268
269 $guard; # keep it alive
270
271 $wbuf .= pack "LL/a*", $id, &$f;
272 $ww ||= $fh && AE::io $fh, 1, $wcb;
273 }
274 : sub {
275 push @rcb, pop;
276
277 $guard; # keep it alive
278
279 $wbuf .= pack "L/a*", &$f;
280 $ww ||= $fh && AE::io $fh, 1, $wcb;
281 }
282} 430}
283 431
284=item $pool->call (..., $cb->(...)) 432=item $pool->(..., $cb->(...))
433
434Call the RPC function of a worker with the given arguments, and when the
435worker is done, call the C<$cb> with the results, just like calling the
436RPC object directly - see the L<AnyEvent::Fork::RPC> documentation for
437details on the RPC API.
438
439If there is no free worker, the call will be queued until a worker becomes
440available.
441
442Note that there can be considerable time between calling this method and
443the call actually being executed. During this time, the parameters passed
444to this function are effectively read-only - modifying them after the call
445and before the callback is invoked causes undefined behaviour.
446
447=cut
285 448
286=back 449=back
450
451=head1 CHILD USAGE
452
453In addition to the L<AnyEvent::Fork::RPC> API, this module implements one
454more child-side function:
455
456=over 4
457
458=item AnyEvent::Fork::Pool::retire ()
459
460This function sends an event to the parent process to request retirement:
461the worker is removed from the pool and no new jobs will be sent to it,
462but it has to handle the jobs that are already queued.
463
464The parentheses are part of the syntax: the function usually isn't defined
465when you compile your code (because that happens I<before> handing the
466template process over to C<AnyEvent::Fork::Pool::run>), so you need the
467empty parentheses to tell Perl that the function is indeed a function.
468
469Retiring a worker can be useful to gracefully shut it down when the worker
470deems this useful. For example, after executing a job, one could check
471the process size or the number of jobs handled so far, and if either is
472too high, the worker could ask to get retired, to avoid accumulating
473memory leaks.
474
475=back
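A worker-side sketch of self-retirement might look as follows - the C<MyWorker> package, the C<do_something> job function and the job-count threshold are all made up for illustration:

```perl
package MyWorker;

my $jobs_done = 0;

sub run {
   my (@args) = @_;

   my @result = do_something (@args); # hypothetical job function

   # after 1000 jobs, ask the parent to retire this worker,
   # e.g. to contain slow memory leaks
   AnyEvent::Fork::Pool::retire ()
      if ++$jobs_done >= 1000;

   @result
}
```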
476
477=head1 POOL PARAMETERS RECIPES
478
479This section describes some recipes for pool parameters. These are mostly
480meant for the synchronous RPC backend, as the asynchronous RPC backend
481changes the rules considerably, making workers themselves responsible for
482their scheduling.
483
484=over 4
485
486=item low latency - set load = 1
487
488If you need a deterministic low latency, you should set the C<load>
489parameter to C<1>. This ensures that never more than one job is sent to
490each worker. This avoids having to wait for a previous job to finish.
491
492This makes most sense with the synchronous (default) backend, as the
493asynchronous backend can handle multiple requests concurrently.
494
495=item lowest latency - set load = 1 and idle = max
496
497To achieve the lowest latency, you additionally should disable any dynamic
498resizing of the pool by setting C<idle> to the same value as C<max>.
499
500=item high throughput, cpu bound jobs - set load >= 2, max = #cpus
501
502To get high throughput with cpu-bound jobs, you should set the maximum
503pool size to the number of cpus in your system, and C<load> to at least
504C<2>, to make sure there can be another job waiting for the worker when it
505has finished one.
506
507The value of C<2> for C<load> is the minimum value that I<can> achieve
508100% throughput, but if your parent process itself is sometimes busy, you
509might need higher values. Also there is a limit on the amount of data that
510can be "in flight" to the worker, so if you send big blobs of data to your
511worker, C<load> might have much less of an effect.
512
513=item high throughput, I/O bound jobs - set load >= 2, max = 1, or very high
514
515When your jobs are I/O bound, using more workers usually leads to
516higher throughput, depending very much on your actual workload - sometimes
517having only one worker is best, for example, when you read or write big
518files at maximum speed, as a second worker will increase seek times.
519
520=back
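Putting the "lowest latency" recipe into code - the pool size of C<8> and the worker function name are placeholders, not recommendations:

```perl
# deterministic low latency: one job per worker, fixed-size pool
my $pool = AnyEvent::Fork
   ->new
   ->require ("MyWorker")               # hypothetical worker module
   ->AnyEvent::Fork::Pool::run (
      "MyWorker::run",
      load => 1, # never send more than one job to a worker
      idle => 8, # keep the pool at a fixed size ...
      max  => 8, # ... by setting idle == max
   );
```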
521
522=head1 EXCEPTIONS
523
524The same "policy" as with L<AnyEvent::Fork::RPC> applies - exceptions will
525not be caught, and exceptions in both workers and in callbacks cause
526undesirable or undefined behaviour.
287 527
288=head1 SEE ALSO 528=head1 SEE ALSO
289 529
290L<AnyEvent::Fork>, to create the processes in the first place. 530L<AnyEvent::Fork>, to create the processes in the first place.
291 531
