=head1 NAME

AnyEvent::Fork - everything you wanted to use fork() for, but couldn't

=head1 SYNOPSIS

   use AnyEvent::Fork;

   AnyEvent::Fork
      ->new
      ->require ("MyModule")
      ->run ("MyModule::server", my $cv = AE::cv);

   my $fh = $cv->recv;

=head1 DESCRIPTION

This module allows you to create new processes, without actually forking
them from your current process (avoiding the problems of forking), but
preserving most of the advantages of fork.

It can be used to create new worker processes or new independent
subprocesses for short- and long-running jobs, process pools (e.g. for use
in pre-forked servers), but also to spawn new external processes (such as
CGI scripts from a web server), which can be faster (and better behaved)
than using fork+exec in big processes.

Special care has been taken to make this module useful from other modules,
while still supporting specialised environments such as L<App::Staticperl>
or L<PAR::Packer>.

=head2 WHAT THIS MODULE IS NOT

This module only creates processes and lets you pass file handles and
strings to them, and run perl code. It does not implement any kind of RPC -
there is no back channel from the process back to you, and there is no RPC
or message passing going on.

If you need some form of RPC, you can either implement it yourself
in whatever way you like, use some message-passing module such
as L<AnyEvent::MP>, some pipe such as L<AnyEvent::ZeroMQ>, use
L<AnyEvent::Handle> on both sides to send e.g. JSON or Storable messages,
and so on.

=head2 COMPARISON TO OTHER MODULES

There is an abundance of modules on CPAN that do "something fork", such as
L<Parallel::ForkManager>, L<AnyEvent::ForkManager>, L<AnyEvent::Worker>
or L<AnyEvent::Subprocess>. There are modules that implement their own
process management, such as L<AnyEvent::DBI>.

The problems that all these modules try to solve are real. However, none
of them (from what I have seen) tackles the very real problems of unwanted
memory sharing, efficiency, or not being able to use event processing or
similar modules in the processes they create.

This module doesn't try to replace any of them - instead it tries to solve
the problem of creating processes with a minimum of fuss and overhead (and
also luxury). Ideally, most of these modules would use AnyEvent::Fork
internally, except that they were written before AnyEvent::Fork was
available, so they obviously had to roll their own.

=head2 PROBLEM STATEMENT

There are two traditional ways to implement parallel processing on
UNIX-like operating systems - forking the current process, and fork+exec
of a new program. They have different advantages and disadvantages that I
describe below, together with how this module tries to mitigate the
disadvantages.

=over 4

=item Forking from a big process can be very slow.

A 5GB process needs 0.05s to fork on my 3.6GHz amd64 GNU/Linux box. This
overhead is often shared with exec (because you have to fork first), but
in some circumstances (e.g. when vfork is used), fork+exec can be much
faster.

This module can help here by telling a small(er) helper process to fork,
which is faster than forking the main process, and it also uses vfork
where possible. This gives the speed of vfork with the flexibility of
fork.

=item Forking usually creates a copy-on-write copy of the parent
process.

For example, modules or data files that are loaded will not use additional
memory after a fork. When exec'ing a new process, modules and data files
might need to be loaded again, at extra CPU and memory cost. But when
forking, literally all data structures are copied - if the program frees
them and replaces them by new data, the child processes will retain the
old version even if it isn't used, which can suddenly and unexpectedly
increase memory usage when freeing memory.

The trade-off is between more sharing with fork (which can be good or
bad), and no sharing with exec.

This module allows the main program to do a controlled fork, and allows
modules to exec processes safely at any time. When creating a custom
process pool you can take advantage of data sharing via fork without
risking the sharing of large dynamic data structures that would blow up
child memory usage.

In other words, this module puts you in control over what is being
shared and what isn't, at all times.

=item Exec'ing a new perl process might be difficult.

For example, it is not easy to find the correct path to the perl
interpreter - C<$^X> might not be a perl interpreter at all.

This module tries hard to identify the correct path to the perl
interpreter. With a cooperative main program, exec'ing the interpreter
might not even be necessary, but even without help from the main program,
it will still work when used from a module.

=item Exec'ing a new perl process might be slow, as all necessary modules
have to be loaded from disk again, with no guarantees of success.

Long-running processes might run into problems when perl is upgraded
and modules are no longer loadable because they refer to a different
perl version, or parts of a distribution are newer than the ones already
loaded.

This module supports creating pre-initialised perl processes to be used as
a template for new processes.

=item Forking might be impossible when a program is running.

For example, POSIX makes it almost impossible to fork from a
multi-threaded program while doing anything useful in the child - in
fact, if your perl program uses POSIX threads (even indirectly via
e.g. L<IO::AIO> or L<threads>), you cannot call fork on the perl level
anymore without risking corruption issues on a number of operating
systems.

This module can safely fork helper processes at any time, by calling
fork+exec in C, in a POSIX-compatible way (via L<Proc::FastSpawn>).

=item Parallel processing with fork might be inconvenient or difficult
to implement. Modules might not work in both parent and child.

For example, when a program uses an event loop and creates watchers it
becomes very hard to use the event loop from a child program, as the
watchers already exist but are only meaningful in the parent. Worse, a
module might want to use such an event loop, not knowing whether another
module or the main program also does, leading to problems.

Apart from event loops, graphical toolkits also commonly fall into the
"unsafe module" category, as does just about anything that communicates
with the external world, such as network libraries and file I/O modules,
which usually don't like being copied and then allowed to continue in two
processes.

With this module only the main program is allowed to create new processes
by forking (because only the main program can know when it is still safe
to do so) - all other processes are created via fork+exec, which makes it
possible to use modules such as event loops or window interfaces safely.

=back

=head1 EXAMPLES

=head2 Create a single new process, tell it to run your worker function.

   AnyEvent::Fork
      ->new
      ->require ("MyModule")
      ->run ("MyModule::worker", sub {
         my ($master_filehandle) = @_;

         # now $master_filehandle is connected to the
         # $slave_filehandle in the new process.
      });

C<MyModule> might look like this:

   package MyModule;

   sub worker {
      my ($slave_filehandle) = @_;

      # now $slave_filehandle is connected to the $master_filehandle
      # in the original process. have fun!
   }

=head2 Create a pool of server processes all accepting on the same socket.

   # create listener socket
   my $listener = ...;

   # create a pool template, initialise it and give it the socket
   my $pool = AnyEvent::Fork
      ->new
      ->require ("Some::Stuff", "My::Server")
      ->send_fh ($listener);

   # now create 10 identical workers
   for my $id (1..10) {
      $pool
         ->fork
         ->send_arg ($id)
         ->run ("My::Server::run");
   }

   # now do other things - maybe use the filehandle provided by run
   # to wait for the processes to die. or whatever.

C<My::Server> might look like this:

   package My::Server;

   sub run {
      my ($slave, $listener, $id) = @_;

      close $slave; # we do not use the socket, so close it to save resources

      # we could go ballistic and use e.g. AnyEvent here, or IO::AIO,
      # or anything we usually couldn't do in a process forked normally.
      while (my $socket = $listener->accept) {
         # do something with the new socket
      }
   }

=head2 use AnyEvent::Fork as a faster fork+exec

This runs C</bin/echo hi>, with standard output redirected to /tmp/log
and standard error redirected to the communications socket. It is usually
faster than fork+exec, but still lets you prepare the environment.

   open my $output, ">/tmp/log" or die "$!";

   AnyEvent::Fork
      ->new
      ->eval ('
         # compile a helper function for later use
         sub run {
            my ($fh, $output, @cmd) = @_;

            # perl will clear close-on-exec on STDOUT/STDERR
            open STDOUT, ">&", $output or die;
            open STDERR, ">&", $fh or die;

            exec @cmd;
         }
      ')
      ->send_fh ($output)
      ->send_arg ("/bin/echo", "hi")
      ->run ("run", my $cv = AE::cv);

   my $stderr = $cv->recv;

=head1 CONCEPTS

This module can create new processes either by executing a new perl
process, or by forking from an existing "template" process.

Each such process comes with its own file handle that can be used to
communicate with it (it's actually a socket - one end in the new process,
one end in the main process), and among the things you can do with it are
loading modules, forking new processes, sending file handles to it, and
executing functions.

There are multiple ways to create additional processes to execute some
jobs:

=over 4

=item fork a new process from the "default" template process, load code,
run it

This module has a "default" template process which it executes when it is
needed the first time. Forking from this process shares the memory used
for the perl interpreter with the new process, but loading modules takes
time, and the memory is not shared with anything else.

This is ideal for when you only need one extra process of a kind, with the
option of starting and stopping it on demand.

Example:

   AnyEvent::Fork
      ->new
      ->require ("Some::Module")
      ->run ("Some::Module::run", sub {
         my ($fork_fh) = @_;
      });

=item fork a new template process, load code, then fork processes off of
it and run the code

When you need to have a bunch of processes that all execute the same (or
very similar) tasks, then a good way is to create a new template process
for them, load all the modules you need into it, and then create your
worker processes from this new template process.

This way, all code (and data structures) that can be shared (e.g. the
modules you loaded) is shared between the processes, and each new process
consumes relatively little memory of its own.

The disadvantage of this approach is that you need to create a template
process for the sole purpose of forking new processes from it, but if you
only need a fixed number of processes you can create them, and then
destroy the template process.

Example:

   my $template = AnyEvent::Fork->new->require ("Some::Module");

   for (1..10) {
      $template->fork->run ("Some::Module::run", sub {
         my ($fork_fh) = @_;
      });
   }

   # at this point, you can keep $template around to fork new processes
   # later, or you can destroy it, which causes it to vanish.

=item execute a new perl interpreter, load some code, run it

This is relatively slow, and doesn't allow you to share memory between
multiple processes.

The only advantage is that you don't have to have a template process
hanging around all the time to fork off some new processes, which might be
an advantage when there are long time spans where no extra processes are
needed.

Example:

   AnyEvent::Fork
      ->new_exec
      ->require ("Some::Module")
      ->run ("Some::Module::run", sub {
         my ($fork_fh) = @_;
      });

=back

=head1 THE C<AnyEvent::Fork> CLASS

This module exports nothing, and only implements a single class -
C<AnyEvent::Fork>.

There are two class constructors that both create new processes - C<new>
and C<new_exec>. The C<fork> method creates a new process by forking an
existing one and could be considered a third constructor.

Most of the remaining methods deal with preparing the new process, by
loading code, evaluating code and sending data to the new process. They
usually return the process object, so you can chain method calls.

If a process object is destroyed before calling its C<run> method, then
the process simply exits. After C<run> is called, all responsibility is
passed to the specified function.

As long as there is any outstanding work to be done, process objects
resist being destroyed, so there is no reason to store them unless you
need them later - configure and forget works just fine.

=over 4

=cut

package AnyEvent::Fork;

use common::sense;

use Errno ();

use AnyEvent;
use AnyEvent::Util ();

use IO::FDPass;

our $VERSION = 0.6;

our $PERL; # the path to the perl interpreter, deduced via various forms of magic

=over 4

=back

=cut

# the early fork template process
our $EARLY;

# the empty template process
our $TEMPLATE;

sub _cmd {
   my $self = shift;

   # ideally, we would want to use "a (w/a)*" as format string, but perl
   # versions from at least 5.8.9 to 5.16.3 are all buggy and can't unpack
   # it.
   push @{ $self->[2] }, pack "a L/a*", $_[0], $_[1];

   $self->[3] ||= AE::io $self->[1], 1, sub {
      do {
         # send the next "thing" in the queue - either a reference to an fh,
         # or a plain string.

         if (ref $self->[2][0]) {
            # send fh
            unless (IO::FDPass::send fileno $self->[1], fileno ${ $self->[2][0] }) {
               return if $! == Errno::EAGAIN || $! == Errno::EWOULDBLOCK;
               undef $self->[3];
               die "AnyEvent::Fork: file descriptor send failure: $!";
            }

            shift @{ $self->[2] };

         } else {
            # send string
            my $len = syswrite $self->[1], $self->[2][0];

            unless ($len) {
               return if $! == Errno::EAGAIN || $! == Errno::EWOULDBLOCK;
               undef $self->[3];
               die "AnyEvent::Fork: command write failure: $!";
            }

            substr $self->[2][0], 0, $len, "";
            shift @{ $self->[2] } unless length $self->[2][0];
         }
      } while @{ $self->[2] };

      # everything written
      undef $self->[3];

      # invoke run callback, if any
      $self->[4]->($self->[1]) if $self->[4];
   };

   () # make sure we don't leak the watcher
}

sub _new {
   my ($self, $fh, $pid) = @_;

   AnyEvent::Util::fh_nonblocking $fh, 1;

   $self = bless [
      $pid,
      $fh,
      [], # write queue - strings or fd's
      undef, # AE watcher
   ], $self;

   $self
}

# fork template from current process, used by AnyEvent::Fork::Early/Template
sub _new_fork {
   my ($fh, $slave) = AnyEvent::Util::portable_socketpair;
   my $parent = $$;

   my $pid = fork;

   if ($pid eq 0) {
      require AnyEvent::Fork::Serve;
      $AnyEvent::Fork::Serve::OWNER = $parent;
      close $fh;
      $0 = "$_[1] of $parent";
      $SIG{CHLD} = 'IGNORE';
      AnyEvent::Fork::Serve::serve ($slave);
      exit 0;
   } elsif (!$pid) {
      die "AnyEvent::Fork::Early/Template: unable to fork template process: $!";
   }

   AnyEvent::Fork->_new ($fh, $pid)
}

=item my $proc = new AnyEvent::Fork

Create a new "empty" perl interpreter process and return its process
object for further manipulation.

The new process is forked from a template process that is kept around
for this purpose. When it doesn't exist yet, it is created by a call to
C<new_exec> first and then stays around for future calls.

=cut

sub new {
   my $class = shift;

   $TEMPLATE ||= $class->new_exec;
   $TEMPLATE->fork
}

=item $new_proc = $proc->fork

Forks C<$proc>, creating a new process, and returns the process object
of the new process.

If any of the C<send_> functions have been called before the fork, then
they will be cloned in the child. For example, in a pre-forked server, you
might C<send_fh> the listening socket into the template process, and then
keep calling C<fork> and C<run>.

=cut

sub fork {
   my ($self) = @_;

   my ($fh, $slave) = AnyEvent::Util::portable_socketpair;

   $self->send_fh ($slave);
   $self->_cmd ("f");

   AnyEvent::Fork->_new ($fh)
}

=item my $proc = new_exec AnyEvent::Fork

Create a new "empty" perl interpreter process and return its process
object for further manipulation.

Unlike the C<new> method, this method I<always> spawns a new perl process
(except in some cases, see L<AnyEvent::Fork::Early> for details). This
reduces the amount of memory sharing that is possible, and is also slower.

You should use C<new> whenever possible, except when having a template
process around is unacceptable.

The path to the perl interpreter is divined using various methods - first
C<$^X> is investigated to see if the path ends with something that sounds
as if it were the perl interpreter. Failing this, the module falls back to
using C<$Config::Config{perlpath}>.

=cut

sub new_exec {
   my ($self) = @_;

   return $EARLY->fork
      if $EARLY;

   # first find path of perl
   my $perl = $^X;

   # first we try $^X, but the path must be absolute (always on win32), and end in sth.
   # that looks like perl. this obviously only works for posix and win32
   unless (
      ($^O eq "MSWin32" || $perl =~ m%^/%)
      && $perl =~ m%[/\\]perl(?:[0-9]+(\.[0-9]+)+)?(\.exe)?$%i
   ) {
      # if it doesn't look perlish enough, try Config
      require Config;
      $perl = $Config::Config{perlpath};
      $perl =~ s/(?:\Q$Config::Config{_exe}\E)?$/$Config::Config{_exe}/;
   }

   require Proc::FastSpawn;

   my ($fh, $slave) = AnyEvent::Util::portable_socketpair;
   Proc::FastSpawn::fd_inherit (fileno $slave);

   # new fh's should always be set cloexec (due to $^F),
   # but hey, not on win32, so we always clear the inherit flag.
   Proc::FastSpawn::fd_inherit (fileno $fh, 0);

   # quick. also doesn't work in win32. of course. what did you expect
   #local $ENV{PERL5LIB} = join ":", grep !ref, @INC;
   my %env = %ENV;
   $env{PERL5LIB} = join +($^O eq "MSWin32" ? ";" : ":"), grep !ref, @INC;

   my $pid = Proc::FastSpawn::spawn (
      $perl,
      ["perl", "-MAnyEvent::Fork::Serve", "-e", "AnyEvent::Fork::Serve::me", fileno $slave, $$],
      [map "$_=$env{$_}", keys %env],
   ) or die "unable to spawn AnyEvent::Fork server: $!";

   $self->_new ($fh, $pid)
}

=item $pid = $proc->pid

Returns the process id of the process I<iff it is a direct child of the
process running AnyEvent::Fork>, and C<undef> otherwise.

Normally, only processes created via C<< AnyEvent::Fork->new_exec >> and
L<AnyEvent::Fork::Template> are direct children, and you are responsible
for cleaning up their zombies when they die.

All other processes are not direct children, and will be cleaned up by
AnyEvent::Fork itself.
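
For example, a direct child's exit status could be collected with an
L<AnyEvent> child watcher (a sketch; only do this when C<pid> actually
returns a defined value):

   if (defined (my $pid = $proc->pid)) {
      # reap the zombie and collect its exit status
      my $w; $w = AE::child $pid, sub {
         my ($pid, $status) = @_;
         undef $w; # the watcher kept itself alive until the child exited
      };
   }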

=cut

sub pid {
   $_[0][0]
}

=item $proc = $proc->eval ($perlcode, @args)

Evaluates the given C<$perlcode> as ... perl code, while setting C<@_> to
the strings specified by C<@args>, in the "main" package.

This call is meant to do any custom initialisation that might be required
(for example, the C<require> method uses it). It's not supposed to be used
to completely take over the process; use C<run> for that.

The code will usually be executed after this call returns, and there is no
way to pass anything back to the calling process. Any evaluation errors
will be reported to stderr and cause the process to exit.

If you want to execute some code (that isn't in a module) to take over the
process, you should compile a function via C<eval> first, and then call
it via C<run>. This also gives you access to any arguments passed via the
C<send_xxx> methods, such as file handles. See the L<use AnyEvent::Fork as
a faster fork+exec> example to see it in action.

Returns the process object for easy chaining of method calls.
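
For example, a worker id could be stored in a package variable of the new
process before C<run> is called (the variable name here is made up for
illustration):

   # in the new process, @_ contains ("7"), which shift picks up
   $proc->eval ('$My::Worker::id = shift', 7);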

=cut

sub eval {
   my ($self, $code, @args) = @_;

   $self->_cmd (e => pack "(w/a*)*", $code, @args);

   $self
}

=item $proc = $proc->require ($module, ...)

Tries to load the given module(s) into the process.

Returns the process object for easy chaining of method calls.
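
For example, several modules could be preloaded in a single call (the
module names are illustrative only):

   $proc->require ("My::Worker", "List::Util");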

=cut

sub require {
   my ($self, @modules) = @_;

   s%::%/%g for @modules;
   $self->eval ('require "$_.pm" for @_', @modules);

   $self
}

=item $proc = $proc->send_fh ($handle, ...)

Send one or more file handles (I<not> file descriptors) to the process,
to prepare a call to C<run>.

The process object keeps a reference to the handles until they have
been passed over to the process, so you must not explicitly close the
handles. This is most easily accomplished by simply not storing the file
handles anywhere after passing them to this method - when AnyEvent::Fork
is finished using them, perl will automatically close them.

Returns the process object for easy chaining of method calls.

Example: pass a file handle to a process, and release it without
closing. It will be closed automatically when it is no longer used.

   $proc->send_fh ($my_fh);
   undef $my_fh; # free the reference if you want, but DO NOT CLOSE IT

=cut

sub send_fh {
   my ($self, @fh) = @_;

   for my $fh (@fh) {
      $self->_cmd ("h");
      push @{ $self->[2] }, \$fh;
   }

   $self
}

=item $proc = $proc->send_arg ($string, ...)

Send one or more argument strings to the process, to prepare a call to
C<run>. The strings can be any octet strings.

The protocol is optimised to pass a moderate number of relatively short
strings - while you can pass up to 4GB of data in one go, this is more
meant to pass some ID information or other startup info, not big chunks of
data.

Returns the process object for easy chaining of method calls.
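
For example, a worker id and a configuration path could be passed before
starting the worker (the path and the function name C<My::Worker::run> are
made up for illustration):

   $proc
      ->send_arg ($id, "/path/to/myapp.conf")
      ->run ("My::Worker::run", my $cv = AE::cv);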

=cut

sub send_arg {
   my ($self, @arg) = @_;

   $self->_cmd (a => pack "(w/a*)*", @arg);

   $self
}

=item $proc->run ($func, $cb->($fh))

Enter the function specified by the function name in C<$func> in the
process. The function is called with the communication socket as first
argument, followed by all file handles and string arguments sent earlier
via the C<send_fh> and C<send_arg> methods, in the order they were called.

The process object becomes unusable on return from this function - any
further method calls result in undefined behaviour.

The function name should be fully qualified, but if it isn't, it will be
looked up in the C<main> package.

If the called function returns, doesn't exist, or any error occurs, the
process exits.

Preparing the process is done in the background - when all commands have
been sent, the callback is invoked with the local communications socket
as argument. At this point you can start using the socket in any way you
like.

If the communication socket isn't used, it should be closed on both sides,
to save on kernel memory.

The socket is non-blocking in the parent, and blocking in the newly
created process. The close-on-exec flag is set in both.

Even if not used otherwise, the socket can be a good indicator for the
existence of the process - if the other process exits, you get a readable
event on it, because exiting the process closes the socket (if it didn't
create any children using fork).

Example: create a template for a process pool, pass a few strings, some
file handles, then fork, pass one more string, and run some code.

   my $pool = AnyEvent::Fork
      ->new
      ->send_arg ("str1", "str2")
      ->send_fh ($fh1, $fh2);

   for (1..2) {
      $pool
         ->fork
         ->send_arg ("str3")
         ->run ("Some::function", sub {
            my ($fh) = @_;

            # fh is nonblocking, but we trust that the OS can accept these
            # few octets anyway.
            syswrite $fh, "hi #$_\n";

            # $fh is being closed here, as we don't store it anywhere
         });
   }

   # Some::function might look like this - all parameters passed before fork
   # and after will be passed, in order, after the communications socket.
   sub Some::function {
      my ($fh, $str1, $str2, $fh1, $fh2, $str3) = @_;

      print scalar <$fh>; # prints "hi #1\n" and "hi #2\n" in any order
   }

=cut

sub run {
   my ($self, $func, $cb) = @_;

   $self->[4] = $cb;
   $self->_cmd (r => $func);
}

=back

=head1 PERFORMANCE

Now for some unscientific benchmark numbers (all done on an amd64
GNU/Linux box). These are intended to give you an idea of the relative
performance you can expect; they are not meant to be absolute performance
numbers.

OK, so, I ran a simple benchmark that creates a socket pair, forks, calls
exit in the child and waits for the socket to close in the parent. I did
load AnyEvent, EV and AnyEvent::Fork, for a total process size of 5100kB.

   2079 new processes per second, using manual socketpair + fork

Then I did the same thing, but instead of calling fork, I called
AnyEvent::Fork->new->run ("CORE::exit") and then again waited for the
socket from the child to close on exit. This does the same thing as manual
socket pair + fork, except that what is forked is the template process
(2440kB), and the socket needs to be passed to the server at the other end
of the socket first.

   2307 new processes per second, using AnyEvent::Fork->new

And finally, using C<new_exec> instead of C<new>, using vfork+exec to exec
a new perl interpreter and compile the small server each time, I get:

   479 vfork+execs per second, using AnyEvent::Fork->new_exec

So how can C<< AnyEvent::Fork->new >> be faster than a standard fork, even
though it uses the same operations, but adds a lot of overhead?

The difference is simply the process size: forking the 5MB process takes
so much longer than forking the 2.5MB template process that the extra
overhead introduced is canceled out.

If the benchmark process grows, the normal fork becomes even slower:

   1340 new processes, manual fork of a 20MB process
    731 new processes, manual fork of a 200MB process
    235 new processes, manual fork of a 2000MB process

What that means (to me) is that I can use this module without having a bad
conscience because of the extra overhead required to start new processes.

=head1 TYPICAL PROBLEMS |
818 |
|
819 |
This section lists typical problems that remain. I hope by recognising |
820 |
them, most can be avoided. |
821 |
|
822 |
=over 4 |
823 |
|
824 |
=item leaked file descriptors for exec'ed processes |
825 |
|
826 |
POSIX systems inherit file descriptors by default when exec'ing a new |
827 |
process. While perl itself laudably sets the close-on-exec flags on new |
828 |
file handles, most C libraries don't care, and even if all cared, it's |
829 |
often not possible to set the flag in a race-free manner. |
830 |
|
831 |
That means some file descriptors can leak through. And since it isn't |
832 |
possible to know which file descriptors are "good" and "necessary" (or |
833 |
even to know which file descriptors are open), there is no good way to |
834 |
close the ones that might harm. |

As an example of what "harm" can be done, consider a web server that
accepts connections, and afterwards some module uses AnyEvent::Fork for
the first time, causing it to fork and exec a new process, which might
inherit the network socket. When the server closes the socket, it is
still open in the child (which doesn't even know that), and the client
might conclude that the connection is still fine.

For the main program, there are multiple remedies available -
L<AnyEvent::Fork::Early> is one; creating a process early and not using
C<new_exec> is another, as in both cases, the first process can be exec'ed
well before many random file descriptors are open.
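As an illustration, using L<AnyEvent::Fork::Early> simply means loading it
before anything that might open file descriptors (a sketch only - the
modules loaded afterwards, including C<My::WebServer>, are hypothetical):

```perl
# Load AnyEvent::Fork::Early before anything else, so the helper
# process is created while almost no file descriptors are open yet.
use AnyEvent::Fork::Early;

# Only now load the rest of the application, which may open sockets,
# pipes and other descriptors that must not leak into the helper.
use AnyEvent;
use AnyEvent::Fork;
use My::WebServer; # hypothetical module that accepts connections
```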

In general, the solution for these kinds of problems is to fix the
libraries or the code that leaks those file descriptors.
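For code you control, setting the close-on-exec flag explicitly is usually
enough. A minimal sketch using only core Perl (the file opened is just an
example):

```perl
use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);

open my $fh, "<", "/dev/null"
   or die "open: $!";

# fetch the current descriptor flags, then add FD_CLOEXEC so the
# descriptor is closed automatically when an exec happens
my $flags = fcntl $fh, F_GETFD, 0
   or die "fcntl F_GETFD: $!";
fcntl $fh, F_SETFD, $flags | FD_CLOEXEC
   or die "fcntl F_SETFD: $!";
```

Note that doing this I<after> the descriptor was created is inherently
racy against a concurrent fork+exec in another thread, which is exactly
why the flag often cannot be set in a race-free manner.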

Fortunately, most of these leaked descriptors do no harm, other than
sitting on some resources.

=item leaked file descriptors for fork'ed processes

Normally, L<AnyEvent::Fork> starts new processes by exec'ing them, which
closes file descriptors not marked for being inherited.

However, L<AnyEvent::Fork::Early> and L<AnyEvent::Fork::Template> offer
a way to create these processes by forking, and this leaks more file
descriptors than exec'ing them, as there is no way to mark descriptors as
"close on fork".

Examples would be modules such as L<EV>, L<IO::AIO> or L<Gtk2>. All of
them create pipes for internal use, and L<Gtk2> might open a connection
to the X server. L<EV> and L<IO::AIO> can deal with fork, but Gtk2 might
have trouble with a fork.

The solution is to either not load these modules before use'ing
L<AnyEvent::Fork::Early> or L<AnyEvent::Fork::Template>, or to delay
initialising them, for example, by calling C<init Gtk2> manually.
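A sketch of one such load order (only a sketch - whether it is safe in
your program depends on what the modules involved do at load time):

```perl
use EV;                       # fine - EV can deal with fork
use IO::AIO;                  # fine - IO::AIO can deal with fork
use AnyEvent::Fork::Template; # create the template process now

use Gtk2;                     # load, but do not initialise yet

# only after the template exists, connect to the X server, so the
# template process does not share the X connection
Gtk2->init;
```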

=item exiting calls object destructors

This only applies to users of L<AnyEvent::Fork::Early> and
L<AnyEvent::Fork::Template>, or when initialising code creates objects
that reference external resources.

When a process created by AnyEvent::Fork exits, it might do so by calling
exit, or simply by letting perl reach the end of the program, at which
point Perl runs all destructors.

Not all destructors are fork-safe - for example, an object that represents
the connection to an X display might tell the X server to free resources,
which is inconvenient when the "real" object in the parent still needs to
use them.

This is obviously not a problem for L<AnyEvent::Fork::Early>, as you used
it as the very first thing, right?

It is a problem for L<AnyEvent::Fork::Template> though - and the solution
is to not create objects with nontrivial destructors that might have an
effect outside of Perl.
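If such objects cannot be avoided entirely, a common fork-safe idiom
(shown here as a sketch - C<My::XResource> is a hypothetical wrapper, not
part of this module) is to remember the creating process and turn the
destructor into a no-op everywhere else:

```perl
package My::XResource;

sub new {
   my ($class, %args) = @_;

   # remember which process created (and therefore owns) the resource
   bless { %args, owner_pid => $$ }, $class
}

sub DESTROY {
   my ($self) = @_;

   # a forked child inherits the object but must not free the external
   # resource - only the creating process performs the real cleanup
   return if $self->{owner_pid} != $$;

   # ... tell the external server to release the resource here ...
}

1
```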

=back

=head1 PORTABILITY NOTES

Native win32 perls are somewhat supported (AnyEvent::Fork::Early is a nop,
and ::Template is not going to work), and it cost a lot of blood and sweat
to make it so, mostly due to the bloody broken perl that nobody seems to
care about. The fork emulation is a bad joke - I have yet to see something
useful that you can do with it without running into memory corruption
issues or other braindamage. Hrrrr.

Cygwin perl is not supported at the moment due to some hilarious
shortcomings of its API - see L<IO::FDPoll> for more details.

=head1 SEE ALSO

L<AnyEvent::Fork::Early> (to avoid executing a perl interpreter),
L<AnyEvent::Fork::Template> (to create a process by forking the main
program at a convenient time).

=head1 AUTHOR

 Marc Lehmann <schmorp@schmorp.de>
 http://home.schmorp.de/

=cut

1
