/cvs/AnyEvent-Fork-Pool/Pool.pm

Comparing AnyEvent-Fork-Pool/Pool.pm (file contents):
Revision 1.3 by root, Sat Apr 20 16:38:10 2013 UTC vs.
Revision 1.8 by root, Sun Apr 21 12:28:34 2013 UTC

1=head1 NAME 1=head1 NAME
2 2
3AnyEvent::Fork::Pool - simple process pool manager on top of AnyEvent::Fork 3AnyEvent::Fork::Pool - simple process pool manager on top of AnyEvent::Fork
4
5THE API IS NOT FINISHED, CONSIDER THIS AN ALPHA RELEASE
4 6
5=head1 SYNOPSIS 7=head1 SYNOPSIS
6 8
7 use AnyEvent; 9 use AnyEvent;
8 use AnyEvent::Fork::Pool; 10 use AnyEvent::Fork::Pool;
9 # use AnyEvent::Fork is not needed 11 # use AnyEvent::Fork is not needed
10 12
11 # all parameters with default values 13 # all possible parameters shown, with default values
12 my $pool = AnyEvent::Fork 14 my $pool = AnyEvent::Fork
13 ->new 15 ->new
14 ->require ("MyWorker") 16 ->require ("MyWorker")
15 ->AnyEvent::Fork::Pool::run ( 17 ->AnyEvent::Fork::Pool::run (
16 "MyWorker::run", # the worker function 18 "MyWorker::run", # the worker function
17 19
18 # pool management 20 # pool management
19 max => 4, # absolute maximum # of processes 21 max => 4, # absolute maximum # of processes
20 idle => 2, # minimum # of idle processes 22 idle => 0, # minimum # of idle processes
21 load => 2, # queue at most this number of jobs per process 23 load => 2, # queue at most this number of jobs per process
22 start => 0.1, # wait this many seconds before starting a new process 24 start => 0.1, # wait this many seconds before starting a new process
23 stop => 1, # wait this many seconds before stopping an idle process 25 stop => 10, # wait this many seconds before stopping an idle process
24 on_destroy => (my $finish = AE::cv), # called when object is destroyed 26 on_destroy => (my $finish = AE::cv), # called when object is destroyed
25 27
26 # parameters passed to AnyEvent::Fork::RPC 28 # parameters passed to AnyEvent::Fork::RPC
27 async => 0, 29 async => 0,
28 on_error => sub { die "FATAL: $_[0]\n" }, 30 on_error => sub { die "FATAL: $_[0]\n" },
48pool of processes that handles jobs. 50pool of processes that handles jobs.
49 51
50Understanding of L<AnyEvent::Fork> is helpful but not critical to be able 52Understanding of L<AnyEvent::Fork> is helpful but not critical to be able
51to use this module, but a thorough understanding of L<AnyEvent::Fork::RPC> 53to use this module, but a thorough understanding of L<AnyEvent::Fork::RPC>
52is, as it defines the actual API that needs to be implemented in the 54is, as it defines the actual API that needs to be implemented in the
53children. 55worker processes.
54 56
55=head1 EXAMPLES 57=head1 EXAMPLES
56 58
57=head1 PARENT USAGE 59=head1 PARENT USAGE
60
61To create a pool, you first have to create a L<AnyEvent::Fork> object -
62this object becomes your template process. Whenever a new worker process
63is needed, it is forked from this template process. Then you need to
64"hand off" this template process to the C<AnyEvent::Fork::Pool> module by
65calling its run method on it:
66
67 my $template = AnyEvent::Fork
68 ->new
69 ->require ("SomeModule", "MyWorkerModule");
70
71 my $pool = $template->AnyEvent::Fork::Pool::run ("MyWorkerModule::myfunction");
72
73The pool "object" is not a regular Perl object, but a code reference that
74you can call and that works roughly like calling the worker function
75directly, except that it returns nothing but instead you need to specify a
76callback to be invoked once results are in:
77
78 $pool->(1, 2, 3, sub { warn "myfunction(1,2,3) returned @_" });
58 79
59=over 4 80=over 4
60 81
61=cut 82=cut
62 83
79my $magic0 = ':t6Z@HK1N%Dx@_7?=~-7NQgWDdAs6a,jFN=wLO0*jD*1%P'; 100my $magic0 = ':t6Z@HK1N%Dx@_7?=~-7NQgWDdAs6a,jFN=wLO0*jD*1%P';
80my $magic1 = '<~53rexz.U`!]X[A235^"fyEoiTF\T~oH1l/N6+Djep9b~bI9`\1x%B~vWO1q*'; 101my $magic1 = '<~53rexz.U`!]X[A235^"fyEoiTF\T~oH1l/N6+Djep9b~bI9`\1x%B~vWO1q*';
81 102
82our $VERSION = 0.1; 103our $VERSION = 0.1;
83 104
84=item my $rpc = AnyEvent::Fork::Pool::run $fork, $function, [key => value...] 105=item my $pool = AnyEvent::Fork::Pool::run $fork, $function, [key => value...]
85 106
86The traditional way to call it. But it is way cooler to call it in the 107The traditional way to call the pool creation function. But it is way
87following way: 108cooler to call it in the following way:
88 109
89=item my $rpc = $fork->AnyEvent::Fork::Pool::run ($function, [key => value...]) 110=item my $pool = $fork->AnyEvent::Fork::Pool::run ($function, [key => value...])
90 111
91Creates a new pool object with the specified C<$function> as function 112Creates a new pool object with the specified C<$function> as function
92(name) to call for each request. The pool uses the C<$fork> object as the 113(name) to call for each request. The pool uses the C<$fork> object as the
93template when creating worker processes. 114template when creating worker processes.
94 115
103 124
104=item Pool Management 125=item Pool Management
105 126
106The pool consists of a certain number of worker processes. These options 127The pool consists of a certain number of worker processes. These options
107decide how many of these processes exist and when they are started and 128decide how many of these processes exist and when they are started and
108stopp.ed 129stopped.
130
131The worker pool is dynamically resized, according to (perceived :)
132load. The minimum size is given by the C<idle> parameter and the maximum
133size is given by the C<max> parameter. A new worker is started every
134C<start> seconds at most, and an idle worker is stopped at most every
135C<stop> seconds.
136
137You can specify the amount of jobs sent to a worker concurrently using the
138C<load> parameter.
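Put together, a dynamically sized pool might be configured like this sketch (the values are illustrative and C<MyWorker::run> is a hypothetical job function):

```perl
# starts empty, grows to at most 4 workers under load,
# shrinks back to zero workers when idle
my $pool = $template->AnyEvent::Fork::Pool::run (
   "MyWorker::run",
   idle  => 0,   # may shrink back to zero workers
   max   => 4,   # never more than four workers
   start => 0.1, # grow by at most one worker per 0.1s
   stop  => 10,  # stop a worker only after 10s of idling
);
```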
109 139
110=over 4 140=over 4
111 141
112=item idle => $count (default: 0) 142=item idle => $count (default: 0)
113 143
114The minimum amount of idle processes in the pool - when there are fewer 144The minimum amount of idle processes in the pool - when there are fewer
115than this many idle workers, C<AnyEvent::Fork::Pool> will try to start new 145than this many idle workers, C<AnyEvent::Fork::Pool> will try to start new
116ones, subject to C<max> and C<start>. 146ones, subject to the limits set by C<max> and C<start>.
117 147
118This is also the initial/minimum amount of workers in the pool. The 148This is also the initial amount of workers in the pool. The default of
119default of zero means that the pool starts empty and can shrink back to 149zero means that the pool starts empty and can shrink back to zero workers
120zero workers over time. 150over time.
121 151
122=item max => $count (default: 4) 152=item max => $count (default: 4)
123 153
124The maximum number of processes in the pool, in addition to the template 154The maximum number of processes in the pool, in addition to the template
125process. C<AnyEvent::Fork::Pool> will never create more than this number 155process. C<AnyEvent::Fork::Pool> will never have more than this number of
126of worker processes, although there can be more temporarily when a worker 156worker processes, although there can be more temporarily when a worker is
127is shut down and hasn't exited yet. 157shut down and hasn't exited yet.
128 158
129=item load => $count (default: 2) 159=item load => $count (default: 2)
130 160
131The maximum number of concurrent jobs sent to a single worker 161The maximum number of concurrent jobs sent to a single worker process.
132process. Worker processes that handle this number of jobs already are
133called "busy".
134 162
135Jobs that cannot be sent to a worker immediately (because all workers are 163Jobs that cannot be sent to a worker immediately (because all workers are
136busy) will be queued until a worker is available. 164busy) will be queued until a worker is available.
137 165
166Setting this low improves latency. For example, at C<1>, every job that
167is sent to a worker is sent to a completely idle worker that doesn't run
168any other jobs. The downside is that throughput is reduced - a worker that
169finishes a job needs to wait for a new job from the parent.
170
171The default of C<2> is usually a good compromise.
172
138=item start => $seconds (default: 0.1) 173=item start => $seconds (default: 0.1)
139 174
140When a job is queued and all workers are busy, a timer is started. If the 175When there are fewer than C<idle> workers (or all workers are completely
141timer elapses and there are still jobs that cannot be queued to a worker, 176busy), then a timer is started. If the timer elapses and there are still
142a new worker is started. 177jobs that cannot be queued to a worker, a new worker is started.
143 178
144This configurs the time that all workers must be busy before a new worker 179This sets the minimum time that all workers must be busy before a new
145is started. Or, put differently, the minimum delay betwene starting new 180worker is started. Or, put differently, the minimum delay between starting
146workers. 181new workers.
147 182
148The delay is zero by default, which means new workers will be started 183The delay is small by default, which means new workers will be started
149without delay. 184relatively quickly. A delay of C<0> is possible, and ensures that the pool
185will grow as quickly as possible under load.
150 186
187Non-zero values are useful to avoid "exploding" a pool because a lot of
188jobs are queued in an instant.
189
190Higher values are often useful to improve efficiency at the cost of
191latency - when fewer processes can do the job over time, starting more and
192more is not necessarily going to help.
193
151=item stop => $seconds (default: 1) 194=item stop => $seconds (default: 10)
152 195
153When a worker has no jobs to execute it becomes idle. An idle worker that 196When a worker has no jobs to execute it becomes idle. An idle worker that
154hasn't executed a job within this amount of time will be stopped, unless 197hasn't executed a job within this amount of time will be stopped, unless
155the other parameters say otherwise. 198the other parameters say otherwise.
156 199
200Setting this to a very high value means that workers stay around longer,
201even when they have nothing to do, which can be good as they don't have to
202be started again on the next load spike.
203
204Setting this to a lower value can be useful to avoid memory or simply
205process table wastage.
206
207Usually, setting this to a time longer than the time between load spikes
208is best - if you expect a lot of requests every minute and little work
209in between, setting this to longer than a minute avoids having to stop
210and start workers. On the other hand, you have to ask yourself if letting
211workers run idle is a good use of your resources. Try to find a good
212balance between resource usage of your workers and the time to start new
213workers - L<AnyEvent::Fork> itself is fast at creating workers while
214not using much memory for them, so most of the
215overhead is likely from your own code.
216
157=item on_destroy => $callback->() (default: none) 217=item on_destroy => $callback->() (default: none)
158 218
159When a pool object goes out of scope, it will still handle all outstanding 219When a pool object goes out of scope, the outstanding requests are still
160jobs. After that, it will destroy all workers (and also the template 220handled till completion. Only after handling all jobs will the workers
161process if it isn't referenced otherwise). 221be destroyed (and also the template process if it isn't referenced
222otherwise).
223
224To find out when a pool I<really> has finished its work, you can set this
225callback, which will be called when the pool has been destroyed.
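As a sketch, draining a pool before program exit might look like this (C<MyWorker::run> is a hypothetical worker function):

```perl
my $finish = AE::cv;

my $pool = AnyEvent::Fork
   ->new
   ->require ("MyWorker")
   ->AnyEvent::Fork::Pool::run (
      "MyWorker::run",
      on_destroy => $finish,
   );

$pool->($_, sub { }) for 1 .. 10; # queue some jobs

undef $pool;    # drop the last reference - queued jobs still run

$finish->recv;  # returns only after all jobs are done and workers are gone
```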
162 226
163=back 227=back
164 228
165=item Template Process 229=item AnyEvent::Fork::RPC Parameters
166 230
167The worker processes are all forked from a single template 231These parameters are all passed more or less directly to
168process. Ideally, all modules and all cdoe used by the worker, as well as 232L<AnyEvent::Fork::RPC>. They are only briefly mentioned here, for
169any shared data structures should be loaded into the template process, to 233their full documentation please refer to the L<AnyEvent::Fork::RPC>
170take advantage of data sharing via fork. 234documentation. Also, the default values mentioned here are only documented
171 235as a best effort - the L<AnyEvent::Fork::RPC> documentation is binding.
172You can create your own template process by creating a L<AnyEvent::Fork>
173object yourself and passing it as the C<template> parameter, but
174C<AnyEvent::Fork::Pool> can create one for you, including some standard
175options.
176 236
177=over 4 237=over 4
178 238
179=item template => $fork (default: C<< AnyEvent::Fork->new >>)
180
181The template process to use, if you want to create your own.
182
183=item require => \@modules (default: C<[]>)
184
185The modules in this list will be laoded into the template process.
186
187=item eval => "# perl code to execute in template" (default: none)
188
189This is a perl string that is evaluated after creating the template
190process and after requiring the modules. It can do whatever it wants to
191configure the process, but it must not do anything that would keep a later
192fork from working (so must not create event handlers or (real) threads for
193example).
194
195=back
196
197=item AnyEvent::Fork::RPC Parameters
198
199These parameters are all passed directly to L<AnyEvent::Fork::RPC>. They
200are only briefly mentioned here, for their full documentation
201please refer to the L<AnyEvent::Fork::RPC> documentation. Also, the
202default values mentioned here are only documented as a best effort -
203L<AnyEvent::Fork::RPC> documentation is binding.
204
205=over 4
206
207=item async => $boolean (default: 0) 239=item async => $boolean (default: 0)
208 240
209Whether to sue the synchronous or asynchronous RPC backend. 241Whether to use the synchronous or asynchronous RPC backend.
210 242
211=item on_error => $callback->($message) (default: die with message) 243=item on_error => $callback->($message) (default: die with message)
212 244
213The callback to call on any (fatal) errors. 245The callback to call on any (fatal) errors.
214 246
235 267
236 my $max = $arg{max} || 4; 268 my $max = $arg{max} || 4;
237 my $idle = $arg{idle} || 0, 269 my $idle = $arg{idle} || 0,
238 my $load = $arg{load} || 2, 270 my $load = $arg{load} || 2,
239 my $start = $arg{start} || 0.1, 271 my $start = $arg{start} || 0.1,
240 my $stop = $arg{stop} || 1, 272 my $stop = $arg{stop} || 10,
241 my $on_event = $arg{on_event} || sub { }, 273 my $on_event = $arg{on_event} || sub { },
242 my $on_destroy = $arg{on_destroy}; 274 my $on_destroy = $arg{on_destroy};
243 275
244 my @rpc = ( 276 my @rpc = (
245 async => $arg{async}, 277 async => $arg{async},
246 init => $arg{init}, 278 init => $arg{init},
247 serialiser => $arg{serialiser}, 279 serialiser => delete $arg{serialiser},
248 on_error => $arg{on_error}, 280 on_error => $arg{on_error},
249 ); 281 );
250 282
251 my (@pool, @queue, $nidle, $start_w, $stop_w, $shutdown); 283 my (@pool, @queue, $nidle, $start_w, $stop_w, $shutdown);
252 my ($start, $stop, $want_start, $want_stop, $scheduler); 284 my ($start_worker, $stop_worker, $want_start, $want_stop, $scheduler);
253 285
254 my $destroy_guard = Guard::guard { 286 my $destroy_guard = Guard::guard {
255 $on_destroy->() 287 $on_destroy->()
256 if $on_destroy; 288 if $on_destroy;
257 }; 289 };
258
259 my $busy;#d#
260 290
261 $template 291 $template
262 ->require ("AnyEvent::Fork::RPC::" . ($arg{async} ? "Async" : "Sync")) 292 ->require ("AnyEvent::Fork::RPC::" . ($arg{async} ? "Async" : "Sync"))
263 ->eval (' 293 ->eval ('
264 my ($magic0, $magic1) = @_; 294 my ($magic0, $magic1) = @_;
265 sub AnyEvent::Fork::Pool::quit() { 295 sub AnyEvent::Fork::Pool::retire() {
266 AnyEvent::Fork::RPC::on_event $magic0, "quit", $magic1; 296 AnyEvent::Fork::RPC::event $magic0, "quit", $magic1;
267 } 297 }
268 ', $magic0, $magic1) 298 ', $magic0, $magic1)
269 ->eval (delete $arg{eval}); 299 ;
270 300
271 $start = sub { 301 $start_worker = sub {
272 my $proc = [0, 0, undef]; # load, index, rpc 302 my $proc = [0, 0, undef]; # load, index, rpc
273
274 warn "start a worker\n";#d#
275 303
276 $proc->[2] = $template 304 $proc->[2] = $template
277 ->fork 305 ->fork
278 ->AnyEvent::Fork::RPC::run ($function, 306 ->AnyEvent::Fork::RPC::run ($function,
279 @rpc, 307 @rpc,
280 on_event => sub { 308 on_event => sub {
281 if (@_ == 3 && $_[0] eq $magic0 && $_[2] eq $magic1) { 309 if (@_ == 3 && $_[0] eq $magic0 && $_[2] eq $magic1) {
282 $destroy_guard if 0; # keep it alive 310 $destroy_guard if 0; # keep it alive
283 311
284 $_[1] eq "quit" and $stop->($proc); 312 $_[1] eq "quit" and $stop_worker->($proc);
285 return; 313 return;
286 } 314 }
287 315
288 &$on_event; 316 &$on_event;
289 }, 317 },
290 ) 318 )
291 ; 319 ;
292 320
293 ++$nidle; 321 ++$nidle;
294 Array::Heap::push_heap @pool, $proc; 322 Array::Heap::push_heap_idx @pool, $proc;
295 323
296 Scalar::Util::weaken $proc; 324 Scalar::Util::weaken $proc;
297 }; 325 };
298 326
299 $stop = sub { 327 $stop_worker = sub {
300 my $proc = shift; 328 my $proc = shift;
301 329
302 $proc->[0] 330 $proc->[0]
303 or --$nidle; 331 or --$nidle;
304 332
305 Array::Heap::splice_heap_idx @pool, $proc->[1] 333 Array::Heap::splice_heap_idx @pool, $proc->[1]
306 if defined $proc->[1]; 334 if defined $proc->[1];
335
336 @$proc = 0; # tell others to leave it be
307 }; 337 };
308 338
309 $want_start = sub { 339 $want_start = sub {
310 undef $stop_w; 340 undef $stop_w;
311 341
312 $start_w ||= AE::timer $start, 0, sub { 342 $start_w ||= AE::timer $start, $start, sub {
313 undef $start_w; 343 if (($nidle < $idle || @queue) && @pool < $max) {
314
315 if (@queue) {
316 $start->(); 344 $start_worker->();
317 $scheduler->(); 345 $scheduler->();
346 } else {
347 undef $start_w;
318 } 348 }
319 }; 349 };
320 }; 350 };
321 351
322 $want_stop = sub { 352 $want_stop = sub {
323 $stop_w ||= AE::timer $stop, 0, sub { 353 $stop_w ||= AE::timer $stop, $stop, sub {
324 undef $stop_w;
325
326 $stop->($pool[0]) 354 $stop_worker->($pool[0])
327 if $nidle; 355 if $nidle;
356
357 undef $stop_w
358 if $nidle <= $idle;
328 }; 359 };
329 }; 360 };
330 361
331 $scheduler = sub { 362 $scheduler = sub {
332 if (@queue) { 363 if (@queue) {
333 while (@queue) { 364 while (@queue) {
365 @pool or $start_worker->();
366
334 my $proc = $pool[0]; 367 my $proc = $pool[0];
335 368
336 if ($proc->[0] < $load) { 369 if ($proc->[0] < $load) {
337 warn "free $proc $proc->[0]\n";#d#
338 # found free worker 370 # found free worker, increase load
339 $proc->[0]++ 371 unless ($proc->[0]++) {
372 # worker became busy
340 or --$nidle >= $idle 373 --$nidle
374 or undef $stop_w;
375
341 or $want_start->(); 376 $want_start->()
377 if $nidle < $idle && @pool < $max;
378 }
342 379
343 Array::Heap::adjust_heap @pool, 0; 380 Array::Heap::adjust_heap_idx @pool, 0;
344 381
345 my $job = shift @queue; 382 my $job = shift @queue;
346 my $ocb = pop @$job; 383 my $ocb = pop @$job;
347 384
348 $proc->[2]->(@$job, sub { 385 $proc->[2]->(@$job, sub {
349 --$busy; warn "busy now $busy\n";#d#
350 # reduce queue counter 386 # reduce load
351 --$pool[$_][0] 387 --$proc->[0] # worker still busy?
352 or ++$nidle > $idle 388 or ++$nidle > $idle # not too many idle processes?
353 or $want_stop->(); 389 or $want_stop->();
354 390
355 Array::Heap::adjust_heap @pool, $_; 391 Array::Heap::adjust_heap_idx @pool, $proc->[1]
392 if defined $proc->[1];
393
394 &$ocb;
356 395
357 $scheduler->(); 396 $scheduler->();
358
359 &$ocb;
360 }); 397 });
361 } else { 398 } else {
362 warn "busy $proc->[0]\n";#d#
363 # all busy, delay
364
365 $want_start->(); 399 $want_start->()
400 unless @pool >= $max;
401
366 last; 402 last;
367 } 403 }
368 } 404 }
369 } elsif ($shutdown) { 405 } elsif ($shutdown) {
370 @pool = (); 406 @pool = ();
371 undef $start_w; 407 undef $start_w;
372 undef $start; # frees $destroy_guard reference 408 undef $start_worker; # frees $destroy_guard reference
373 409
374 $stop->($pool[0]) 410 $stop_worker->($pool[0])
375 while $nidle; 411 while $nidle;
376 } 412 }
377 }; 413 };
378 414
379 my $shutdown_guard = Guard::guard { 415 my $shutdown_guard = Guard::guard {
380 $shutdown = 1; 416 $shutdown = 1;
381 $scheduler->(); 417 $scheduler->();
382 }; 418 };
383 419
384 $start->() 420 $start_worker->()
385 while @pool < $idle; 421 while @pool < $idle;
386 422
387 sub { 423 sub {
388 $shutdown_guard if 0; # keep it alive 424 $shutdown_guard if 0; # keep it alive
389 425
390 ++$busy;#d#
391
392 $start->() 426 $start_worker->()
393 unless @pool; 427 unless @pool;
394 428
395 push @queue, [@_]; 429 push @queue, [@_];
396 $scheduler->(); 430 $scheduler->();
397 } 431 }
398} 432}
399 433
400=item $pool->call (..., $cb->(...)) 434=item $pool->(..., $cb->(...))
401 435
402Call the RPC function of a worker with the given arguments, and when the 436Call the RPC function of a worker with the given arguments, and when the
403worker is done, call the C<$cb> with the results, like just calling the 437worker is done, call the C<$cb> with the results, just like calling the
404L<AnyEvent::Fork::RPC> object directly. 438RPC object directly - see the L<AnyEvent::Fork::RPC> documentation for
439details on the RPC API.
405 440
406If there is no free worker, the call will be queued. 441If there is no free worker, the call will be queued until a worker becomes
442available.
407 443
408Note that there can be considerable time between calling this method and 444Note that there can be considerable time between calling this method and
409the call actually being executed. During this time, the parameters passed 445the call actually being executed. During this time, the parameters passed
410to this function are effectively read-only - modifying them after the call 446to this function are effectively read-only - modifying them after the call
411and before the callback is invoked causes undefined behaviour. 447and before the callback is invoked causes undefined behaviour.
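For example, to stay on the safe side you can pass a copy of any data you intend to modify later (a sketch):

```perl
my @data = (1, 2, 3);

# pass a shallow copy - the queued job must not see later modifications
$pool->([@data], sub { warn "job finished: @_\n" });

push @data, 4; # safe - only the copy was queued
```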
412 448
413=cut 449=cut
414 450
415=back 451=back
416 452
453=head1 CHILD USAGE
454
455In addition to the L<AnyEvent::Fork::RPC> API, this module implements one
456more child-side function:
457
458=over 4
459
460=item AnyEvent::Fork::Pool::retire ()
461
462This function sends an event to the parent process to request retirement:
463the worker is removed from the pool and no new jobs will be sent to it,
464but it has to handle the jobs that are already queued.
465
466The parentheses are part of the syntax: the function usually isn't defined
467when you compile your code (because that happens I<before> handing the
468template process over to C<AnyEvent::Fork::Pool::run>), so you need the
469empty parentheses to tell Perl that the function is indeed a function.
470
471Retiring a worker can be useful to gracefully shut it down when the worker
472deems this useful. For example, after executing a job, one could check
473the process size or the number of jobs handled so far, and if either is
474too high, the worker could ask to get retired, to keep memory leaks
475from accumulating.
476
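A worker-side sketch of this technique, assuming a hypothetical C<MyWorker::run> job function (the helper and the threshold are illustrative):

```perl
package MyWorker;

my $jobs_handled = 0;

sub run {
   my (@args) = @_;

   my $result = do_the_actual_work (@args); # hypothetical helper

   # ask to be retired after 100 jobs, e.g. to bound leak accumulation
   AnyEvent::Fork::Pool::retire ()
      if ++$jobs_handled >= 100;

   $result
}
```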
477=back
478
479=head1 POOL PARAMETERS RECIPES
480
481This section describes some recipes for pool parameters. These are mostly
482meant for the synchronous RPC backend, as the asynchronous RPC backend
483changes the rules considerably, making workers themselves responsible for
484their scheduling.
485
486=over 4
487
488=item low latency - set load = 1
489
490If you need a deterministic low latency, you should set the C<load>
491parameter to C<1>. This ensures that never more than one job is sent to
492each worker. This avoids having to wait for a previous job to finish.
493
494This makes most sense with the synchronous (default) backend, as the
495asynchronous backend can handle multiple requests concurrently.
496
497=item lowest latency - set load = 1 and idle = max
498
499To achieve the lowest latency, you additionally should disable any dynamic
500resizing of the pool by setting C<idle> to the same value as C<max>.
501
502=item high throughput, cpu bound jobs - set load >= 2, max = #cpus
503
504To get high throughput with cpu-bound jobs, you should set the maximum
505pool size to the number of cpus in your system, and C<load> to at least
506C<2>, to make sure there can be another job waiting for the worker when it
507has finished one.
508
509The value of C<2> for C<load> is the minimum value that I<can> achieve
510100% throughput, but if your parent process itself is sometimes busy, you
511might need higher values. Also there is a limit on the amount of data that
512can be "in flight" to the worker, so if you send big blobs of data to your
513worker, C<load> might have much less of an effect.
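A sketch of such a pool, assuming eight cpus (the cpu count and function name are illustrative):

```perl
my $ncpu = 8; # e.g. determined via /proc/cpuinfo or a module of your choice

my $pool = $template->AnyEvent::Fork::Pool::run (
   "MyWorker::crunch",
   max  => $ncpu, # one worker per cpu
   load => 2,     # one job in flight, one queued per worker
);
```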
514
515=item high throughput, I/O bound jobs - set load >= 2, max = 1, or very high
516
517When your jobs are I/O bound, using more workers usually translates into
518higher throughput, depending very much on your actual workload - sometimes
519having only one worker is best, for example, when you read or write big
520files at maximum speed, as a second worker will increase seek times.
521
522=back
523
524=head1 EXCEPTIONS
525
526The same "policy" as with L<AnyEvent::Fork::RPC> applies - exceptions will
527not be caught, and exceptions in both workers and in callbacks cause
528undesirable or undefined behaviour.
529
417=head1 SEE ALSO 530=head1 SEE ALSO
418 531
419L<AnyEvent::Fork>, to create the processes in the first place. 532L<AnyEvent::Fork>, to create the processes in the first place.
420 533
421L<AnyEvent::Fork::RPC>, which implements the RPC protocol and API. 534L<AnyEvent::Fork::RPC>, which implements the RPC protocol and API.
