1 NAME
2 AnyEvent::Fork - everything you wanted to use fork() for, but couldn't
3
4 SYNOPSIS
5 use AnyEvent::Fork;
6
7 ##################################################################
8 # create a single new process, tell it to run your worker function
9
10 AnyEvent::Fork
11 ->new
12 ->require ("MyModule")
13 ->run ("MyModule::worker, sub {
14 my ($master_filehandle) = @_;
15
16 # now $master_filehandle is connected to the
17 # $slave_filehandle in the new process.
18 });
19
20 # MyModule::worker might look like this
21 sub MyModule::worker {
22 my ($slave_filehandle) = @_;
23
24 # now $slave_filehandle is connected to the $master_filehandle
25 # in the original process. have fun!
26 }
27
28 ##################################################################
29 # create a pool of server processes all accepting on the same socket
30
31 # create listener socket
32 my $listener = ...;
33
34 # create a pool template, initialise it and give it the socket
35 my $pool = AnyEvent::Fork
36 ->new
37 ->require ("Some::Stuff", "My::Server")
38 ->send_fh ($listener);
39
40 # now create 10 identical workers
41 for my $id (1..10) {
42 $pool
43 ->fork
44 ->send_arg ($id)
45 ->run ("My::Server::run");
46 }
47
48 # now do other things - maybe use the filehandle provided by run
49 # to wait for the processes to die. or whatever.
50
51 # My::Server::run might look like this
52 sub My::Server::run {
53 my ($slave, $listener, $id) = @_;
54
55 close $slave; # we do not use the socket, so close it to save resources
56
57 # we could go ballistic and use e.g. AnyEvent here, or IO::AIO,
58 # or anything we usually couldn't do in a process forked normally.
59 while (my $socket = $listener->accept) {
60 # do something with the new socket
61 }
62 }
63
64 DESCRIPTION
65 This module allows you to create new processes, without actually forking
66 them from your current process (avoiding the problems of forking), but
67 preserving most of the advantages of fork.
68
69 It can be used to create new worker processes or new independent
70 subprocesses for short- and long-running jobs, process pools (e.g. for
71 use in pre-forked servers) but also to spawn new external processes
72 (such as CGI scripts from a web server), which can be faster (and more
73 well behaved) than using fork+exec in big processes.
74
75 Special care has been taken to make this module useful from other
76 modules, while still supporting specialised environments such as
77 App::Staticperl or PAR::Packer.
78
79 WHAT THIS MODULE IS NOT
80 This module only creates processes and lets you pass file handles and
81 strings to it, and run perl code. It does not implement any kind of RPC
82 - there is no back channel from the process back to you, and there is no
83 RPC or message passing going on.
84
85 If you need some form of RPC, you can implement it yourself in whatever
86 way you like: use some message-passing module such as AnyEvent::MP,
87 some pipe such as AnyEvent::ZeroMQ, use AnyEvent::Handle on both sides
88 to send e.g. JSON or Storable messages, and so on.
89
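For illustration only, here is a minimal sketch of exchanging JSON messages over the socket that "run" hands you, using AnyEvent::Handle on both sides. It is not part of this module; the worker name, the message format and the assumption that an AnyEvent event loop is running in the parent are all made up for this example.

       # parent side: pass JSON messages over the fh given to the run callback
       use AnyEvent;
       use AnyEvent::Handle; # the json read/write types also need the JSON module

       AnyEvent::Fork
          ->new
          ->require ("My::Worker")           # hypothetical worker module
          ->run ("My::Worker::run", sub {
             my ($fh) = @_;

             my $hdl; $hdl = AnyEvent::Handle->new (
                fh       => $fh,
                on_error => sub { undef $hdl },
             );

             $hdl->push_write (json => { job => "ping" });
             $hdl->push_read  (json => sub {
                my (undef, $msg) = @_;
                warn "worker replied: $msg->{job}\n";
                undef $hdl; # we are done with the socket
             });
          });

       # worker side (in the hypothetical My::Worker module): echo the
       # message back, then exit
       sub My::Worker::run {
          my ($fh) = @_;

          my $done = AnyEvent->condvar;

          my $hdl; $hdl = AnyEvent::Handle->new (
             fh       => $fh,
             on_error => sub { $done->send },
          );

          $hdl->push_read (json => sub {
             my (undef, $msg) = @_;
             $hdl->push_write (json => $msg);       # echo it back
             $hdl->on_drain (sub { $done->send });  # exit once it is flushed
          });

          $done->recv;
       }
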
90 PROBLEM STATEMENT
91 There are two ways to implement parallel processing on UNIX-like
92 operating systems - plain fork, and fork+exec. They have different
93 advantages and disadvantages that I describe below, together with how
94 this module tries to mitigate the disadvantages.
95
96 Forking from a big process can be very slow (a 5GB process needs 0.05s
97 to fork on my 3.6GHz amd64 GNU/Linux box for example). This overhead is
98 often shared with exec (because you have to fork first), but in some
99 circumstances (e.g. when vfork is used), fork+exec can be much faster.
100 This module can help here by telling a small(er) helper process to
101 fork, or fork+exec instead.
102
103 Forking usually creates a copy-on-write copy of the parent process.
104 Memory that has already been set up (for example, loaded modules or data
105 files) will not take additional memory. When exec'ing a new process,
106 modules and data files might need to be loaded again, at extra CPU and
107 memory cost. Likewise, when forking, all data structures are copied as
108 well - if the program frees them and replaces them by new data, the
109 child processes will retain the memory even if it isn't used.
110 This module allows the main program to do a controlled fork, and
111 allows modules to exec processes safely at any time. When creating a
112 custom process pool you can take advantage of data sharing via fork
113 without the risk of sharing large dynamic data structures that will
114 blow up child memory usage.
115
116 Exec'ing a new perl process might be difficult and slow. For example, it
117 is not easy to find the correct path to the perl interpreter, and all
118 modules have to be loaded from disk again. Long-running processes might
119 also run into problems when perl is upgraded.
120 This module supports creating pre-initialised perl processes to be
121 used as template, and also tries hard to identify the correct path
122 to the perl interpreter. With a cooperative main program, exec'ing
123 the interpreter might not even be necessary.
124
125 Forking might be impossible when a program is running. For example,
126 POSIX makes it almost impossible to fork from a multi-threaded program
127 and do anything useful in the child - strictly speaking, if your perl
128 program uses posix threads (even indirectly via e.g. IO::AIO or
129 threads), you cannot call fork on the perl level anymore, at all.
130 This module can safely fork helper processes at any time, by calling
131 fork+exec in C, in a POSIX-compatible way.
132
133 Parallel processing with fork might be inconvenient or difficult to
134 implement. For example, when a program uses an event loop and creates
135 watchers it becomes very hard to use the event loop from a child
136 program, as the watchers already exist but are only meaningful in the
137 parent. Worse, a module might want to use such a system, not knowing
138 whether another module or the main program also does, leading to
139 problems.
140 This module only lets the main program create pools by forking
141 (because only the main program can know when it is still safe to do
142 so) - all other pools are created by fork+exec, after which such
143 modules can again be loaded.
144
145 CONCEPTS
146 This module can create new processes either by executing a new perl
147 process, or by forking from an existing "template" process.
148
149 Each such process comes with its own file handle that can be used to
150 communicate with it (it's actually a socket - one end in the new
151 process, one end in the main process), and among the things you can do
152 in it are load modules, fork new processes, send file handles to it, and
153 execute functions.
154
155 There are multiple ways to create additional processes to execute some
156 jobs:
157
158 fork a new process from the "default" template process, load code, run
159 it
160 This module has a "default" template process which it executes when
161 it is needed the first time. Forking from this process shares the
162 memory used for the perl interpreter with the new process, but
163 loading modules takes time, and the memory is not shared with
164 anything else.
165
166 This is ideal for when you only need one extra process of a kind,
167 with the option of starting and stopping it on demand.
168
169 Example:
170
171 AnyEvent::Fork
172 ->new
173 ->require ("Some::Module")
174 ->run ("Some::Module::run", sub {
175 my ($fork_fh) = @_;
176 });
177
178 fork a new template process, load code, then fork processes off of it
179 and run the code
180 When you need to have a bunch of processes that all execute the same
181 (or very similar) tasks, then a good way is to create a new template
182 process for them, loading all the modules you need, and then create
183 your worker processes from this new template process.
184
185 This way, all code (and data structures) that can be shared (e.g.
186 the modules you loaded) is shared between the processes, and each
187 new process consumes relatively little memory of its own.
188
189 The disadvantage of this approach is that you need to create a
190 template process for the sole purpose of forking new processes from
191 it, but if you only need a fixed number of processes you can create
192 them, and then destroy the template process.
193
194 Example:
195
196 my $template = AnyEvent::Fork->new->require ("Some::Module");
197
198 for (1..10) {
199 $template->fork->run ("Some::Module::run", sub {
200 my ($fork_fh) = @_;
201 });
202 }
203
204 # at this point, you can keep $template around to fork new processes
205 # later, or you can destroy it, which causes it to vanish.
206
207 execute a new perl interpreter, load some code, run it
208 This is relatively slow, and doesn't allow you to share memory
209 between multiple processes.
210
211 The only advantage is that you don't have to have a template process
212 hanging around all the time to fork off some new processes, which
213 might be an advantage when there are long time spans where no extra
214 processes are needed.
215
216 Example:
217
218 AnyEvent::Fork
219 ->new_exec
220 ->require ("Some::Module")
221 ->run ("Some::Module::run", sub {
222 my ($fork_fh) = @_;
223 });
224
225 FUNCTIONS
226 my $pool = new AnyEvent::Fork key => value...
227 Create a new process pool. The following named parameters are
228 supported:
229
230 my $proc = new AnyEvent::Fork
231 Creates a new "empty" perl interpreter process and returns its
232 process object for further manipulation.
233
234 The new process is forked from a template process that is kept
235 around for this purpose. When it doesn't exist yet, it is created by
236 a call to "new_exec" and kept around for future calls.
237
238 When the process object is destroyed, it will release the file
239 handle that connects it with the new process. If "run" has not been
240 called yet, then the new process will exit. Otherwise,
241 what happens depends entirely on the code that is executed.
242
243 $new_proc = $proc->fork
244 Forks $proc, creating a new process, and returns the process object
245 of the new process.
246
247 If any of the "send_" functions have been called before fork, then
248 they will be cloned in the child. For example, in a pre-forked
249 server, you might "send_fh" the listening socket into the template
250 process, and then keep calling "fork" and "run".
251
252 my $proc = new_exec AnyEvent::Fork
253 Creates a new "empty" perl interpreter process and returns its
254 process object for further manipulation.
255
256 Unlike the "new" method, this method *always* spawns a new perl
257 process (except in some cases, see AnyEvent::Fork::Early for
258 details). This reduces the amount of memory sharing that is
259 possible, and is also slower.
260
261 You should use "new" whenever possible, except when having a
262 template process around is unacceptable.
263
264 The path to the perl interpreter is divined using various methods -
265 first $^X is investigated to see if the path ends with something
266 that sounds as if it were the perl interpreter. Failing this, the
267 module falls back to using $Config::Config{perlpath}.
268
269 $pid = $proc->pid
270 Returns the process id of the process *iff it is a direct child of
271 the process* running AnyEvent::Fork, and "undef" otherwise.
272
273 Normally, only processes created via "AnyEvent::Fork->new_exec" and
274 AnyEvent::Fork::Template are direct children, and you are responsible
275 for cleaning up their zombies when they die.
276
277 All other processes are not direct children, and will be cleaned up
278 by AnyEvent::Fork.
279
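For illustration (this is not part of the module's API beyond "pid" itself), a direct child created via "new_exec" could be reaped with an AnyEvent child watcher:

       use AnyEvent;
       use AnyEvent::Fork;

       my $proc = AnyEvent::Fork->new_exec;

       if (defined (my $pid = $proc->pid)) {
          # a direct child - reap its zombie ourselves when it exits
          my $w; $w = AE::child $pid, sub {
             my ($pid, $status) = @_;
             warn "child $pid exited with status $status\n";
             undef $w; # the watcher is no longer needed
          };
       }
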
280 $proc = $proc->eval ($perlcode, @args)
281 Evaluates the given $perlcode as ... perl code, while setting @_ to
282 the strings specified by @args.
283
284 This call is meant to do any custom initialisation that might be
285 required (for example, the "require" method uses it). It's not
286 supposed to be used to completely take over the process, use "run"
287 for that.
288
289 The code will usually be executed after this call returns, and there
290 is no way to pass anything back to the calling process. Any
291 evaluation errors will be reported to stderr and cause the process
292 to exit.
293
294 Returns the process object for easy chaining of method calls.
295
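As a small illustration (the directory and environment variable are made up), "eval" can be used for custom initialisation before "run":

       # change directory and set an environment variable in the child,
       # passing the directory as a string argument via @_
       $proc->eval ('
          my ($dir) = @_;
          chdir $dir
             or die "chdir $dir: $!";
          $ENV{MY_APP_ROLE} = "worker"; # hypothetical variable
       ', "/tmp");
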
296 $proc = $proc->require ($module, ...)
297 Tries to load the given module(s) into the process.
298
299 Returns the process object for easy chaining of method calls.
300
301 $proc = $proc->send_fh ($handle, ...)
302 Send one or more file handles (*not* file descriptors) to the
303 process, to prepare a call to "run".
304
305 The process object keeps a reference to the handles until this is
306 done, so you must not explicitly close the handles. This is most
307 easily accomplished by simply not storing the file handles anywhere
308 after passing them to this method.
309
310 Returns the process object for easy chaining of method calls.
311
312 Example: pass a file handle to a process, and release it without
313 closing. It will be closed automatically when it is no longer used.
314
315 $proc->send_fh ($my_fh);
316 undef $my_fh; # free the reference if you want, but DO NOT CLOSE IT
317
318 $proc = $proc->send_arg ($string, ...)
319 Send one or more argument strings to the process, to prepare a call
320 to "run". The strings can be any octet string.
321
322 The protocol is optimised to pass a moderate number of relatively
323 short strings - while you can pass up to 4GB of data in one go, it is
324 meant more for passing some ID information or other startup info than
325 big chunks of data.
326
327 Returns the process object for easy chaining of method calls.
328
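For instance (the worker id and config path are made-up examples of such startup info):

       $proc->send_arg ($worker_id, "/etc/myapp.conf");
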
329 $proc->run ($func, $cb->($fh))
330 Enter the function specified by the fully qualified name in $func in
331 the process. The function is called with the communication socket as
332 first argument, followed by all file handles and string arguments
333 sent earlier via "send_fh" and "send_arg" methods, in the order they
334 were called.
335
336 If the called function returns, the process exits.
337
338 Preparing the process can take time - when the process is ready, the
339 callback is invoked with the local communications socket as
340 argument.
341
342 The process object becomes unusable on return from this function.
343
344 If the communication socket isn't used, it should be closed on both
345 sides, to save on kernel memory.
346
347 The socket is non-blocking in the parent, and blocking in the newly
348 created process. The close-on-exec flag is set on both. Even if not
349 used otherwise, the socket can be a good indicator for the existence
350 of the process - if the other process exits, you get a readable
351 event on it, because exiting the process closes the socket (if it
352 didn't create any children using fork).
353
354 Example: create a template for a process pool, pass a few strings,
355 some file handles, then fork, pass one more string, and run some
356 code.
357
358 my $pool = AnyEvent::Fork
359 ->new
360 ->send_arg ("str1", "str2")
361 ->send_fh ($fh1, $fh2);
362
363 for (1..2) {
364 $pool
365 ->fork
366 ->send_arg ("str3")
367 ->run ("Some::function", sub {
368 my ($fh) = @_;
369
370 # $fh is nonblocking, but we trust that the OS can accept these
371 # few extra octets anyway.
372 syswrite $fh, "hi #$_\n";
373
374 # $fh is being closed here, as we don't store it anywhere
375 });
376 }
377
378 # Some::function might look like this - all parameters passed before fork
379 # and after will be passed, in order, after the communications socket.
380 sub Some::function {
381 my ($fh, $str1, $str2, $fh1, $fh2, $str3) = @_;
382
383 print scalar <$fh>; # prints "hi #1\n" and "hi #2\n"
384 }
385
386 PERFORMANCE
387 Now for some unscientific benchmark numbers (all done on an amd64
388 GNU/Linux box). These are intended to give you an idea of the relative
389 performance you can expect, they are not meant to be absolute
390 performance numbers.
391
392 OK, so, I ran a simple benchmark that creates a socket pair, forks,
393 calls exit in the child and waits for the socket to close in the parent.
394 I did load AnyEvent, EV and AnyEvent::Fork, for a total process size of
395 5100kB.
396
397 2079 new processes per second, using manual socketpair + fork
398
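The benchmark itself is not included here; a rough sketch of the "manual socketpair + fork" variant, as I understand the description (timing code omitted), might look like this:

       use AnyEvent;
       use POSIX ();
       use Socket qw(AF_UNIX SOCK_STREAM PF_UNSPEC);

       sub spawn_and_wait {
          socketpair my $parent_fh, my $child_fh, AF_UNIX, SOCK_STREAM, PF_UNSPEC
             or die "socketpair: $!";

          my $pid = fork;
          defined $pid
             or die "fork: $!";

          unless ($pid) {
             POSIX::_exit (0); # child: exit immediately, closing its socket end
          }

          close $child_fh; # parent keeps only its own end

          # wait for the socket to become readable - with the child gone,
          # that means EOF, i.e. the child has exited
          my $done = AnyEvent->condvar;
          my $w = AnyEvent->io (fh => $parent_fh, poll => "r", cb => sub { $done->send });
          $done->recv;

          waitpid $pid, 0; # reap the child
       }

       spawn_and_wait () for 1 .. 10_000;
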
399 Then I did the same thing, but instead of calling fork, I called
400 AnyEvent::Fork->new->run ("CORE::exit") and then again waited for the
401 socket from the child to close on exit. This does the same thing as
402 manual socket pair + fork, except that what is forked is the template
403 process (2440kB), and the socket needs to be passed to the server at the
404 other end of the socket first.
405
406 2307 new processes per second, using AnyEvent::Fork->new
407
408 And finally, using "new_exec" instead of "new", using vfork+exec to exec
409 a new perl interpreter and compile the small server each time, I get:
410
411 479 vfork+execs per second, using AnyEvent::Fork->new_exec
412
413 So how can "AnyEvent::Fork->new" be faster than a standard fork, even
414 though it uses the same operations, but adds a lot of overhead?
415
416 The difference is simply the process size: forking the 6MB process takes
417 so much longer than forking the 2.5MB template process that the overhead
418 introduced is canceled out.
419
420 If the benchmark process grows, the normal fork becomes even slower:
421
422 1340 new processes, manual fork in a 20MB process
423 731 new processes, manual fork in a 200MB process
424 235 new processes, manual fork in a 2000MB process
425
426 What that means (to me) is that I can use this module without having a
427 very bad conscience because of the extra overhead required to start new
428 processes.
429
430 TYPICAL PROBLEMS
431 This section lists typical problems that remain. I hope by recognising
432 them, most can be avoided.
433
434 "leaked" file descriptors for exec'ed processes
435 On POSIX systems, file descriptors are inherited by default when exec'ing
436 a new process. While perl itself laudably sets the close-on-exec flags
437 on new file handles, most C libraries don't care, and even if all
438 cared, it's often not possible to set the flag in a race-free
439 manner.
440
441 That means some file descriptors can leak through. And since it
442 isn't possible to know which file descriptors are "good" and
443 "necessary" (or even to know which file descriptors are open), there
444 is no good way to close the ones that might harm.
445
446 As an example of what "harm" can be done consider a web server that
447 accepts connections and afterwards some module uses AnyEvent::Fork
448 for the first time, causing it to fork and exec a new process, which
449 might inherit the network socket. When the server closes the socket,
450 it is still open in the child (which doesn't even know that) and the
451 client might conclude that the connection is still fine.
452
453 For the main program, there are multiple remedies available -
454 AnyEvent::Fork::Early is one, creating a process early and not using
455 "new_exec" is another, as in both cases, the first process can be
456 exec'ed well before many random file descriptors are open.
457
458 In general, the solution for these kinds of problems is to fix the
459 libraries or the code that leaks those file descriptors.
460
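For descriptors you open yourself, the fix is easy; a generic sketch (the log file handle is just an example):

       use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);

       open my $log_fh, ">>", "/tmp/example.log"
          or die "open: $!";

       # set the close-on-exec flag, so exec'ed processes do not inherit it
       my $flags = fcntl $log_fh, F_GETFD, 0
          or die "fcntl F_GETFD: $!";
       fcntl $log_fh, F_SETFD, $flags | FD_CLOEXEC
          or die "fcntl F_SETFD: $!";
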
461 Fortunately, most of these leaked descriptors do no harm, other than
462 sitting on some resources.
463
464 "leaked" file descriptors for fork'ed processes
465 Normally, AnyEvent::Fork does start new processes by exec'ing them,
466 which closes file descriptors not marked for being inherited.
467
468 However, AnyEvent::Fork::Early and AnyEvent::Fork::Template offer a
469 way to create these processes by forking, and this leaks more file
470 descriptors than exec'ing them, as there is no way to mark
471 descriptors as "close on fork".
472
473 Examples would be modules like EV, IO::AIO or Gtk2. All of these
474 create pipes for internal uses, and Gtk2 might open a connection to
475 the X server. EV and IO::AIO can deal with fork, but Gtk2 might have
476 trouble with a fork.
477
478 The solution is to either not load these modules before use'ing
479 AnyEvent::Fork::Early or AnyEvent::Fork::Template, or to delay
480 initialising them, for example, by calling "init Gtk2" manually.
481
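As an illustration of the second option - this assumes that a plain "use Gtk2" (without "-init") does not yet connect to the display, which is how the Perl Gtk2 bindings normally behave:

       use Gtk2;                      # loaded early, but not yet initialised
       use AnyEvent::Fork::Template;  # template is forked before Gtk2 talks to X

       # later, in the parent only, after the template process exists:
       Gtk2->init;
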
482 exit runs destructors
483 This only applies to users of AnyEvent::Fork::Early and
484 AnyEvent::Fork::Template.
485
486 When a process created by AnyEvent::Fork exits, it might do so by
487 calling exit, or simply by letting perl reach the end of the program,
488 at which point perl runs all destructors.
489
490 Not all destructors are fork-safe - for example, an object that
491 represents the connection to an X display might tell the X server to
492 free resources, which is inconvenient when the "real" object in the
493 parent still needs to use them.
494
495 This is obviously not a problem for AnyEvent::Fork::Early, as you
496 used it as the very first thing, right?
497
498 It is a problem for AnyEvent::Fork::Template though - and the
499 solution is to not create objects with nontrivial destructors that
500 might have an effect outside of Perl.
501
502 PORTABILITY NOTES
503 Native win32 perls are somewhat supported (AnyEvent::Fork::Early is a
504 nop, and ::Template is not going to work), and it cost a lot of blood
505 and sweat to make it so, mostly due to the bloody broken perl that
506 nobody seems to care about. The fork emulation is a bad joke - I have
507 yet to see something useful that you can do with it without running into
508 memory corruption issues or other braindamage. Hrrrr.
509
510 Cygwin perl is not supported at the moment, as it should implement fd
511 passing, but doesn't, and rolling my own is hard, as cygwin doesn't
512 support enough functionality to do it.
513
514 SEE ALSO
515 AnyEvent::Fork::Early (to avoid executing a perl interpreter),
516 AnyEvent::Fork::Template (to create a process by forking the main
517 program at a convenient time).
518
519 AUTHOR
520 Marc Lehmann <schmorp@schmorp.de>
521 http://home.schmorp.de/
522