/cvs/Coro/README
Revision: 1.29
Committed: Sat Feb 19 06:51:22 2011 UTC by root
Branch: MAIN
CVS Tags: rel-5_371, rel-5_372, rel-5_37
Changes since 1.28: +244 -9 lines
Log Message:
5.37

1 NAME
2 Coro - the only real threads in perl
3
4 SYNOPSIS
5 use Coro;
6
7 async {
8 # some asynchronous thread of execution
9 print "2\n";
10 cede; # yield back to main
11 print "4\n";
12 };
13 print "1\n";
14 cede; # yield to coro
15 print "3\n";
16 cede; # and again
17
18 # use locking
19 use Coro::Semaphore;
20 my $lock = new Coro::Semaphore;
21 my $locked;
22
23 $lock->down;
24 $locked = 1;
25 $lock->up;
26
27 DESCRIPTION
28 For a tutorial-style introduction, please read the Coro::Intro manpage.
29 This manpage mainly contains reference information.
30
31 This module collection manages continuations in general, most often in
32 the form of cooperative threads (also called coros, or simply "coro" in
33 the documentation). They are similar to kernel threads but don't (in
34 general) run in parallel at the same time even on SMP machines. The
35 specific flavor of thread offered by this module also guarantees you
36 that it will not switch between threads unless necessary, at
37 easily-identified points in your program, so locking and parallel access
38 are rarely an issue, making thread programming much safer and easier
39 than using other thread models.
40
41 Unlike the so-called "Perl threads" (which are not actually real threads
42 but only the windows process emulation (see the section of the same name for
43 more details) ported to UNIX, and as such act as processes), Coro
44 provides a full shared address space, which makes communication between
45 threads very easy. And coro threads are fast, too: disabling the Windows
46 process emulation code in your perl and using Coro can easily result in
47 a two to four times speed increase for your programs. A parallel matrix
48 multiplication benchmark (very communication-intensive) runs over 300
49 times faster on a single core than perl's pseudo-threads on a quad core
50 using all four cores.
51
52 Coro achieves that by supporting multiple running interpreters that
53 share data, which is especially useful to code pseudo-parallel processes
54 and for event-based programming, such as multiple HTTP-GET requests
55 running concurrently. See Coro::AnyEvent to learn more on how to
56 integrate Coro into an event-based environment.
57
58 In this module, a thread is defined as "callchain + lexical variables +
59 some package variables + C stack", that is, a thread has its own
60 callchain, its own set of lexicals and its own set of perl's most
61 important global variables (see Coro::State for more configuration and
62 background info).
63
64 See also the "SEE ALSO" section at the end of this document - the Coro
65 module family is quite large.
66
67 CORO THREAD LIFE CYCLE
68 During the long and exciting (or not) life of a coro thread, it goes
69 through a number of states:
70
71 1. Creation
72 The first thing in the life of a coro thread is its creation -
73 obviously. The typical way to create a thread is to call the "async
74 BLOCK" function:
75
76 async {
77 # thread code goes here
78 };
79
80 You can also pass arguments, which are put in @_:
81
82 async {
83 print $_[1]; # prints 2
84 } 1, 2, 3;
85
86 This creates a new coro thread and puts it into the ready queue,
87 meaning it will run as soon as the CPU is free for it.
88
89 "async" will return a coro object - you can store this for future
90 reference or ignore it, the thread itself will keep a reference to
91 it's thread object - threads are alive on their own.
92
93 Another way to create a thread is to call the "new" constructor with
94 a code-reference:
95
96 new Coro sub {
97 # thread code goes here
98 }, @optional_arguments;
99
100 This is quite similar to calling "async", but the important
101 difference is that the new thread is not put into the ready queue,
102 so the thread will not run until somebody puts it there. "async" is,
103 therefore, identical to this sequence:
104
105 my $coro = new Coro sub {
106 # thread code goes here
107 };
108 $coro->ready;
109 return $coro;
110
111 2. Startup
112 When a new coro thread is created, only a copy of the code reference
113 and the arguments are stored, no extra memory for stacks and so on
114 is allocated, keeping the coro thread in a low-memory state.
115
116 Only when it actually starts executing will all the resources be
117 finally allocated.
118
119 The optional arguments specified at coro creation are available in
120 @_, similar to function calls.
121
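As a hedged illustration of this lazy startup (the thread body and its
argument are just examples), nothing but the closure and its arguments
is stored until the coro is readied and actually run:

    use Coro;

    # creation only records the code reference and its arguments -
    # stacks and other resources are allocated on first run
    my $worker = new Coro sub {
        my ($job) = @_;            # creation arguments show up in @_
        print "processing $job\n";
    }, "job-1";

    $worker->ready;                # make it runnable
    cede;                          # give it a chance to actually start
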
122 3. Running / Blocking
123 A lot can happen after the coro thread has started running. Quite
124 usually, it will not run to the end in one go (because you could use
125 a function instead), but it will give up the CPU regularly because
126 it waits for external events.
127
128 As long as a coro thread runs, its coro object is available in the
129 global variable $Coro::current.
130
131 The low-level way to give up the CPU is to call the scheduler, which
132 selects a new coro thread to run:
133
134 Coro::schedule;
135
136 Since running threads are not in the ready queue, calling the
137 scheduler without doing anything else will block the coro thread
138 forever - you need to arrange either for the coro to be woken up
139 (readied) by some other event or some other thread, or you can put
140 it into the ready queue before scheduling:
141
142 # this is exactly what Coro::cede does
143 $Coro::current->ready;
144 Coro::schedule;
145
146 All the higher-level synchronisation methods (Coro::Semaphore,
147 Coro::rouse_*...) are actually implemented via "->ready" and
148 "Coro::schedule".
149
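As a hedged sketch of that underlying pattern (not how the real
primitives are implemented, merely the idea), another thread can
provide the wakeup:

    use Coro;

    my $sleeper = $Coro::current;  # remember whom to wake up

    async {
        # ... wait for or produce some event here ...
        $sleeper->ready;           # wake the blocked thread
    };

    Coro::schedule;                # block until somebody readies us
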
150 While the coro thread is running it also might get assigned a
151 C-level thread, or the C-level thread might be unassigned from it,
152 as the Coro runtime wishes. A C-level thread needs to be assigned
153 when your perl thread calls into some C-level function and that
154 function in turn calls perl and perl then wants to switch
155 coroutines. This happens most often when you run an event loop and
156 block in the callback, or when perl itself calls some function such
157 as "AUTOLOAD" or methods via the "tie" mechanism.
158
159 4. Termination
160 Many threads actually terminate after some time. There are a number
161 of ways to terminate a coro thread, the simplest is returning from
162 the top-level code reference:
163
164 async {
165 # after returning from here, the coro thread is terminated
166 };
167
168 async {
169 return if 0.5 < rand; # terminate a little earlier, maybe
170 print "got a chance to print this\n";
171 # or here
172 };
173
174 Any values returned from the coroutine can be recovered using
175 "->join":
176
177 my $coro = async {
178 "hello, world\n" # return a string
179 };
180
181 my $hello_world = $coro->join;
182
183 print $hello_world;
184
185 Another way to terminate is to call "Coro::terminate", which works at
186 any subroutine call nesting level:
187
188 async {
189 Coro::terminate "return value 1", "return value 2";
190 };
191
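As a hedged example ("helper" is just a made-up subroutine), the call
can come from arbitrarily deep in the call chain:

    sub helper {
        Coro::terminate "done";    # ends the coro, not the program
    }

    async {
        helper ();
        print "never reached\n";
    };
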
192 And yet another way is to "->cancel" the coro thread from another
193 thread:
194
195 my $coro = async {
196 exit 1;
197 };
198
199 $coro->cancel; # can also accept values for ->join to retrieve
200
201 Cancellation *can* be dangerous - it's a bit like calling "exit"
202 without actually exiting, and might leave C libraries and XS modules
203 in a weird state. Unlike other thread implementations, however, Coro
204 is exceptionally safe with regards to cancellation, as perl will
205 always be in a consistent state.
206
207 So, cancelling a thread that runs in an XS event loop might not be
208 the best idea, but any other combination that deals with perl only
209 (cancelling when a thread is in a "tie" method or an "AUTOLOAD" for
210 example) is safe.
211
212 5. Cleanup
213 Threads will allocate various resources. Most but not all will be
214 returned when a thread terminates, during clean-up.
215
216 Cleanup is quite similar to throwing an uncaught exception: perl
217 will work its way up through all subroutine calls and blocks. On
218 its way, it will release all "my" variables, undo all "local"s and
219 free any other resources truly local to the thread.
220
221 So, a common way to free resources is to keep them referenced only
222 by my variables:
223
224 async {
225 my $big_cache = new Cache ...;
226 };
227
228 If there are no other references, then the $big_cache object will be
229 freed when the thread terminates, regardless of how it does so.
230
231 What it does "NOT" do is unlock any Coro::Semaphores or similar
232 resources, but that's where the "guard" methods come in handy:
233
234 my $sem = new Coro::Semaphore;
235
236 async {
237 my $lock_guard = $sem->guard;
238 # if we return, die or get cancelled here,
239 # then the semaphore will be "up"ed.
240 };
241
242 The "Guard::guard" function comes in handy for any custom cleanup
243 you might want to do:
244
245 async {
246 my $window = new Gtk2::Window "toplevel";
247 # The window will not be cleaned up automatically, even when $window
248 # gets freed, so use a guard to ensure its destruction
249 # in case of an error:
250 my $window_guard = Guard::guard { $window->destroy };
251
252 # we are safe here
253 };
254
255 Last but not least, "local" can often be handy, too, e.g. when
256 temporarily replacing the coro thread description:
257
258 sub myfunction {
259 local $Coro::current->{desc} = "inside myfunction(@_)";
260
261 # if we return or die here, the description will be restored
262 }
263
264 6. Viva La Zombie Muerte
265 Even after a thread has terminated and cleaned up its resources,
266 the coro object still is there and stores the return values of the
267 thread. Only in this state will the coro object be "reference
268 counted" in the normal perl sense: the thread code keeps a reference
269 to it when it is active, but not after it has terminated.
270
271 This means the coro object gets freed automatically when the thread
272 has terminated and cleaned up and there are no other references.
273
274 If there are, the coro object will stay around, and you can call
275 "->join" as many times as you wish to retrieve the result values:
276
277 async {
278 print "hi\n";
279 1
280 };
281
282 # run the async above, and free everything before returning
283 # from Coro::cede:
284 Coro::cede;
285
286 {
287 my $coro = async {
288 print "hi\n";
289 1
290 };
291
292 # run the async above, and clean up, but do not free the coro
293 # object:
294 Coro::cede;
295
296 # optionally retrieve the result values
297 my @results = $coro->join;
298
299 # now $coro goes out of scope, and presumably gets freed
300 };
301
302 GLOBAL VARIABLES
303 $Coro::main
304 This variable stores the Coro object that represents the main
305 program. While you can "ready" it and do most other things you can
306 do to a coro, it is mainly useful to compare against $Coro::current,
307 to see whether you are running in the main program or not.
308
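A minimal sketch of such a comparison:

    if ($Coro::current == $Coro::main) {
        warn "running in the main program\n";
    } else {
        warn "running in some other coro\n";
    }
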
309 $Coro::current
310 The Coro object representing the current coro (the last coro that
311 the Coro scheduler switched to). The initial value is $Coro::main
312 (of course).
313
314 This variable is strictly *read-only*. You can take copies of the
315 value stored in it and use it as any other Coro object, but you must
316 not otherwise modify the variable itself.
317
318 $Coro::idle
319 This variable is mainly useful to integrate Coro into event loops.
320 It is usually better to rely on Coro::AnyEvent or Coro::EV, as this
321 is pretty low-level functionality.
322
323 This variable stores a Coro object that is put into the ready queue
324 when there are no other ready threads (without invoking any ready
325 hooks).
326
327 The default implementation dies with "FATAL: deadlock detected.",
328 followed by a thread listing, because the program has no other way
329 to continue.
330
331 This hook is overwritten by modules such as "Coro::EV" and
332 "Coro::AnyEvent" to wait on an external event that hopefully wakes
333 up a coro so the scheduler can run it.
334
335 See Coro::EV or Coro::AnyEvent for examples of using this technique.
336
337 SIMPLE CORO CREATION
338 async { ... } [@args...]
339 Create a new coro and return its Coro object (usually unused). The
340 coro will be put into the ready queue, so it will start running
341 automatically on the next scheduler run.
342
343 The first argument is a codeblock/closure that should be executed in
344 the coro. When it returns, the coro is automatically
345 terminated.
346
347 The remaining arguments are passed as arguments to the closure.
348
349 See the "Coro::State::new" constructor for info about the coro
350 environment in which coro are executed.
351
352 Calling "exit" in a coro will do the same as calling exit outside
353 the coro. Likewise, when the coro dies, the program will exit, just
354 as it would in the main program.
355
356 If you do not want that, you can provide a default "die" handler, or
357 simply avoid dying (by use of "eval").
358
359 Example: Create a new coro that just prints its arguments.
360
361 async {
362 print "@_\n";
363 } 1,2,3,4;
364
365 async_pool { ... } [@args...]
366 Similar to "async", but uses a coro pool, so you should not call
367 terminate or join on it (although you are allowed to), and you get a
368 coro that might have executed other code already (which can be good
369 or bad :).
370
371 On the plus side, this function is about twice as fast as creating
372 (and destroying) a completely new coro, so if you need a lot of
373 generic coros in quick succession, use "async_pool", not "async".
374
375 The code block is executed in an "eval" context and a warning will
376 be issued in case of an exception instead of terminating the
377 program, as "async" does. As the coro is being reused, stuff like
378 "on_destroy" will not work in the expected way, unless you call
379 terminate or cancel, which somehow defeats the purpose of pooling
380 (but is fine in the exceptional case).
381
382 The priority will be reset to 0 after each run, tracing will be
383 disabled, the description will be reset and the default output
384 filehandle gets restored, so you can change all these. Otherwise the
385 coro will be re-used "as-is": most notably if you change other
386 per-coro global stuff such as $/ you *must needs* revert that
387 change, which is most simply done by using local as in: "local $/".
388
389 The idle pool size is limited to 8 idle coros (this can be adjusted
390 by changing $Coro::POOL_SIZE), but there can be as many non-idle
391 coros as required.
392
393 If you are concerned about pooled coros growing a lot because a
394 single "async_pool" used a lot of stackspace you can e.g.
395 "async_pool { terminate }" once per second or so to slowly replenish
396 the pool. In addition to that, when the stack used by a handler
397 grows larger than 32kb (adjustable via $Coro::POOL_RSS) it will also
398 be destroyed.
399
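As a hedged example ("process_job" is a hypothetical function, stubbed
out here), a simple job-submission helper built on the pool might look
like this:

    sub process_job { warn "working on $_[0]\n" }   # hypothetical stand-in

    sub submit_job {
        my ($job) = @_;

        async_pool {
            # per-coro globals must be local'ised, as the pooled
            # coro will be reused for other jobs later
            local $/ = "\n";
            process_job ($_[0]);
        } $job;
    }
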
400 STATIC METHODS
401 Static methods are actually functions that implicitly operate on the
402 current coro.
403
404 schedule
405 Calls the scheduler. The scheduler will find the next coro that is
406 to be run from the ready queue and switches to it. The next coro to
407 be run is simply the one with the highest priority that is longest
408 in its ready queue. If there is no coro ready, it will call the
409 $Coro::idle hook.
410
411 Please note that the current coro will *not* be put into the ready
412 queue, so calling this function usually means you will never be
413 called again unless something else (e.g. an event handler) calls
414 "->ready", thus waking you up.
415
416 This makes "schedule" *the* generic method to use to block the
417 current coro and wait for events: first you remember the current
418 coro in a variable, then arrange for some callback of yours to call
419 "->ready" on that once some event happens, and last you call
420 "schedule" to put yourself to sleep. Note that a lot of things can
421 wake your coro up, so you need to check whether the event indeed
422 happened, e.g. by storing the status in a variable.
423
424 See HOW TO WAIT FOR A CALLBACK, below, for some ways to wait for
425 callbacks.
426
427 cede
428 "Cede" to other coros. This function puts the current coro into the
429 ready queue and calls "schedule", which has the effect of giving up
430 the current "timeslice" to other coros of the same or higher
431 priority. Once your coro gets its turn again it will automatically
432 be resumed.
433
434 This function is often called "yield" in other languages.
435
436 Coro::cede_notself
437 Works like cede, but is not exported by default and will cede to
438 *any* coro, regardless of priority. This is useful sometimes to
439 ensure progress is made.
440
441 terminate [arg...]
442 Terminates the current coro with the given status values (see
443 cancel).
444
445 Coro::on_enter BLOCK, Coro::on_leave BLOCK
446 These functions install enter and leave winders in the current scope.
447 The enter block will be executed when on_enter is called and
448 whenever the current coro is re-entered by the scheduler, while the
449 leave block is executed whenever the current coro is blocked by the
450 scheduler, and also when the containing scope is exited (by whatever
451 means, be it exit, die, last etc.).
452
453 *Neither invoking the scheduler, nor exceptions, are allowed within
454 those BLOCKs*. That means: do not even think about calling "die"
455 without an eval, and do not even think of entering the scheduler in
456 any way.
457
458 Since both BLOCKs are tied to the current scope, they will
459 automatically be removed when the current scope exits.
460
461 These functions implement the same concept as "dynamic-wind" in
462 scheme does, and are useful when you want to localise some resource
463 to a specific coro.
464
465 They slow down thread switching considerably for coros that use them
466 (about 40% for a BLOCK with a single assignment, so thread switching
467 is still reasonably fast if the handlers are fast).
468
469 These functions are best understood by an example: The following
470 function will change the current timezone to
471 "Antarctica/South_Pole", which requires a call to "tzset", but by
472 using "on_enter" and "on_leave", which remember/change the current
473 timezone and restore the previous value, respectively, the timezone
474 is only changed for the coro that installed those handlers.
475
476 use POSIX qw(tzset);
477
478 async {
479 my $old_tz; # store outside TZ value here
480
481 Coro::on_enter {
482 $old_tz = $ENV{TZ}; # remember the old value
483
484 $ENV{TZ} = "Antarctica/South_Pole";
485 tzset; # enable new value
486 };
487
488 Coro::on_leave {
489 $ENV{TZ} = $old_tz;
490 tzset; # restore old value
491 };
492
493 # at this place, the timezone is Antarctica/South_Pole,
494 # without disturbing the TZ of any other coro.
495 };
496
497 This can be used to localise about any resource (locale, uid,
498 current working directory etc.) to a block, despite the existence of
499 other coros.
500
501 Another interesting example implements time-sliced multitasking
502 using interval timers (this could obviously be optimised, but does
503 the job):
504
505 # "timeslice" the given block
506 sub timeslice(&) {
507 use Time::HiRes ();
508
509 Coro::on_enter {
510 # on entering the thread, we set a VTALRM handler to cede
511 $SIG{VTALRM} = sub { cede };
512 # and then start the interval timer
513 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0.01, 0.01;
514 };
515 Coro::on_leave {
516 # on leaving the thread, we stop the interval timer again
517 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0, 0;
518 };
519
520 &{+shift};
521 }
522
523 # use like this:
524 timeslice {
525 # The following is an endless loop that would normally
526 # monopolise the process. Since it runs in a timesliced
527 # environment, it will regularly cede to other threads.
528 while () { }
529 };
530
531 killall
532 Kills/terminates/cancels all coros except the currently running one.
533
534 Note that while this will try to free some of the main interpreter
535 resources if the calling coro isn't the main coro, one cannot
536 free all of them, so if a coro that is not the main coro calls this
537 function, there will be some one-time resource leak.
538
539 CORO OBJECT METHODS
540 These are the methods you can call on coro objects (or to create them).
541
542 new Coro \&sub [, @args...]
543 Create a new coro and return it. When the sub returns, the coro
544 automatically terminates as if "terminate" with the returned values
545 were called. To make the coro run you must first put it into the
546 ready queue by calling the ready method.
547
548 See "async" and "Coro::State::new" for additional info about the
549 coro environment.
550
551 $success = $coro->ready
552 Put the given coro into the end of its ready queue (there is one
553 queue for each priority) and return true. If the coro is already in
554 the ready queue, do nothing and return false.
555
556 This ensures that the scheduler will resume this coro automatically
557 once all the coro of higher priority and all coro of the same
558 priority that were put into the ready queue earlier have been
559 resumed.
560
561 $coro->suspend
562 Suspends the specified coro. A suspended coro works just like any
563 other coro, except that the scheduler will not select a suspended
564 coro for execution.
565
566 Suspending a coro can be useful when you want to keep the coro from
567 running, but you don't want to destroy it, or when you want to
568 temporarily freeze a coro (e.g. for debugging) to resume it later.
569
570 A scenario for the former would be to suspend all (other) coros
571 after a fork and keep them alive, so their destructors aren't
572 called, but new coros can be created.
573
574 $coro->resume
575 If the specified coro was suspended, it will be resumed. Note that
576 if the coro was in the ready queue when it was suspended, it might
577 have been unreadied by the scheduler, so an activation might have
578 been lost.
579
580 To avoid this, it is best to put a suspended coro into the ready
581 queue unconditionally, as every synchronisation mechanism must
582 protect itself against spurious wakeups, and the ones in the Coro
583 family certainly do that.
584
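A hedged sketch of the freeze-and-resume usage ("do_work" is a stubbed
placeholder for real work):

    sub do_work { }                # stand-in for real work

    my $worker = async {
        while () {
            do_work ();
            cede;
        }
    };

    $worker->suspend;              # the scheduler now skips $worker
    # ... inspect or debug program state here ...
    $worker->resume;
    $worker->ready;                # re-ready it in case a wakeup was lost
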
585 $is_ready = $coro->is_ready
586 Returns true iff the Coro object is in the ready queue. Unless the
587 Coro object gets destroyed, it will eventually be scheduled by the
588 scheduler.
589
590 $is_running = $coro->is_running
591 Returns true iff the Coro object is currently running. Only one Coro
592 object can ever be in the running state (but it currently is
593 possible to have multiple running Coro::States).
594
595 $is_suspended = $coro->is_suspended
596 Returns true iff this Coro object has been suspended. Suspended
597 Coros will not ever be scheduled.
598
599 $coro->cancel (arg...)
600 Terminates the given Coro and makes it return the given arguments as
601 status (default: the empty list). Never returns if the Coro is the
602 current Coro.
603
604 $coro->schedule_to
605 Puts the current coro to sleep (like "Coro::schedule"), but instead
606 of continuing with the next coro from the ready queue, always switch
607 to the given coro object (regardless of priority etc.). The
608 readiness state of that coro isn't changed.
609
610 This is an advanced method for special cases - I'd love to hear
611 about any uses for this one.
612
613 $coro->cede_to
614 Like "schedule_to", but puts the current coro into the ready queue.
615 This has the effect of temporarily switching to the given coro, and
616 continuing some time later.
617
618 This is an advanced method for special cases - I'd love to hear
619 about any uses for this one.
620
621 $coro->throw ([$scalar])
622 If $scalar is specified and defined, it will be thrown as an
623 exception inside the coro at the next convenient point in time.
624 Otherwise clears the exception object.
625
626 Coro will check for the exception each time a schedule-like-function
627 returns, i.e. after each "schedule", "cede",
628 "Coro::Semaphore->down", "Coro::Handle->readable" and so on. Most of
629 these functions detect this case and return early in case an
630 exception is pending.
631
632 The exception object will be thrown "as is" with the specified
633 scalar in $@, i.e. if it is a string, no line number or newline will
634 be appended (unlike with "die").
635
636 This can be used as a softer means than "cancel" to ask a coro to
637 end itself, although there is no guarantee that the exception will
638 lead to termination, and if the exception isn't caught it might well
639 end the whole program.
640
641 You might also think of "throw" as being the moral equivalent of
642 "kill"ing a coro with a signal (in this case, a scalar).
643
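A hedged sketch of asking a worker to wind down via "throw" ("do_work"
is a stubbed placeholder):

    sub do_work { }                # stand-in for real work

    my $worker = async {
        eval {
            while () {
                do_work ();
                cede;
            }
        };
        warn "worker stopping: $@" if $@;
    };

    cede;                          # let the worker run at least once
    $worker->throw ("shutdown requested\n");
    cede;                          # give it a chance to see the exception
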
644 $coro->join
645 Wait until the coro terminates and return any values given to the
646 "terminate" or "cancel" functions. "join" can be called concurrently
647 from multiple coros, and all will be resumed and given the status
648 values once $coro terminates.
649
650 $coro->on_destroy (\&cb)
651 Registers a callback that is called when this coro thread gets
652 destroyed, but before it is joined. The callback gets passed the
653 terminate arguments, if any, and *must not* die, under any
654 circumstances.
655
656 There can be any number of "on_destroy" callbacks per coro.
657
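A hedged sketch - signalling a semaphore from an "on_destroy" callback
instead of joining:

    use Coro::Semaphore;

    my $done = new Coro::Semaphore 0;

    my $worker = async {
        42                         # some result value
    };

    $worker->on_destroy (sub {
        my (@status) = @_;         # the terminate/cancel values, if any
        $done->up;                 # signal completion; must not die here
    });
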
658 $oldprio = $coro->prio ($newprio)
659 Sets (or gets, if the argument is missing) the priority of the coro
660 thread. Higher priority coro get run before lower priority coros.
661 Priorities are small signed integers (currently -4 .. +3), that you
662 can refer to using PRIO_xxx constants (use the import tag :prio to
663 get them):
664
665 PRIO_MAX > PRIO_HIGH > PRIO_NORMAL > PRIO_LOW > PRIO_IDLE > PRIO_MIN
666 3 > 1 > 0 > -1 > -3 > -4
667
668 # set priority to HIGH
669 current->prio (PRIO_HIGH);
670
671 The idle coro thread ($Coro::idle) always has a lower priority than
672 any existing coro.
673
674 Changing the priority of the current coro will take effect
675 immediately, but changing the priority of a coro in the ready queue
676 (but not running) will only take effect after the next schedule (of
677 that coro). This is a bug that will be fixed in some future version.
678
679 $newprio = $coro->nice ($change)
680 Similar to "prio", but subtract the given value from the priority
681 (i.e. higher values mean lower priority, just as in UNIX's nice
682 command).
683
684 $olddesc = $coro->desc ($newdesc)
685 Sets (or gets in case the argument is missing) the description for
686 this coro thread. This is just a free-form string you can associate
687 with a coro.
688
689 This method simply sets the "$coro->{desc}" member to the given
690 string. You can modify this member directly if you wish, and in
691 fact, this is often preferred to indicate major processing states
692 that can then be seen for example in a Coro::Debug session:
693
694 sub my_long_function {
695 local $Coro::current->{desc} = "now in my_long_function";
696 ...
697 $Coro::current->{desc} = "my_long_function: phase 1";
698 ...
699 $Coro::current->{desc} = "my_long_function: phase 2";
700 ...
701 }
702
703 GLOBAL FUNCTIONS
704 Coro::nready
705 Returns the number of coro that are currently in the ready state,
706 i.e. that can be switched to by calling "schedule" directly or
707 indirectly. The value 0 means that the only runnable coro is the
708 currently running one, so "cede" would have no effect, and
709 "schedule" would cause a deadlock unless there is an idle handler
710 that wakes up some coro.
711
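For example, a long-running computation might use it to give up the
CPU only when that could actually achieve something:

    # cede only if some other coro is ready to run
    cede if Coro::nready;
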
712 my $guard = Coro::guard { ... }
713 This function still exists, but is deprecated. Please use the
714 "Guard::guard" function instead.
715
716 unblock_sub { ... }
717 This utility function takes a BLOCK or code reference and "unblocks"
718 it, returning a new coderef. Unblocking means that calling the new
719 coderef will return immediately without blocking, returning nothing,
720 while the original code ref will be called (with parameters) from
721 within another coro.
722
723 The reason this function exists is that many event libraries (such
724 as the venerable Event module) are not thread-safe (a weaker form of
725 reentrancy). This means you must not block within event callbacks,
726 otherwise you might suffer from crashes or worse. The only event
727 library currently known that is safe to use without "unblock_sub" is
728 EV (but you might still run into deadlocks if all event loops are
729 blocked).
730
731 Coro will try to catch you when you block in the event loop
732 ("FATAL:$Coro::IDLE blocked itself"), but this is just best effort
733 and only works when you do not run your own event loop.
734
735 This function allows your callbacks to block by executing them in
736 another coro where it is safe to block. One example where blocking
737 is handy is when you use the Coro::AIO functions to save results to
738 disk, for example.
739
740 In short: simply use "unblock_sub { ... }" instead of "sub { ... }"
741 when creating event callbacks that want to block.
742
743 If your handler does not plan to block (e.g. simply sends a message
744 to another coro, or puts some other coro into the ready queue),
745 there is no reason to use "unblock_sub".
746
747 Note that you also need to use "unblock_sub" for any other callbacks
748 that are indirectly executed by any C-based event loop. For example,
749 when you use a module that uses AnyEvent (and you use
750 Coro::AnyEvent) and it provides callbacks that are the result of
751 some event callback, then you must not block either, or use
752 "unblock_sub".
753
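A minimal sketch, assuming AnyEvent together with Coro::AnyEvent is
used: the callback blocks on a semaphore, which would stall the event
loop if it were not wrapped in "unblock_sub".

    use AnyEvent;
    use Coro;
    use Coro::AnyEvent;
    use Coro::Semaphore;

    my $sem = new Coro::Semaphore 0;

    my $timer = AnyEvent->timer (after => 1, cb => unblock_sub {
        # this down would block - unblock_sub runs the callback in
        # its own coro, so the event loop keeps running
        $sem->down;
        print "got the semaphore\n";
    });
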
754 $cb = rouse_cb
755 Create and return a "rouse callback". That's a code reference that,
756 when called, will remember a copy of its arguments and notify the
757 owner coro of the callback.
758
759 See the next function.
760
761 @args = rouse_wait [$cb]
762 Wait for the specified rouse callback (or the last one that was
763 created in this coro).
764
765 As soon as the callback is invoked (or when the callback was invoked
766 before "rouse_wait"), it will return the arguments originally passed
767 to the rouse callback. In scalar context, that means you get the
768 *last* argument, just as if "rouse_wait" had a "return ($a1, $a2,
769 $a3...)" statement at the end.
770
771 See the section HOW TO WAIT FOR A CALLBACK for an actual usage
772 example.
773
774 HOW TO WAIT FOR A CALLBACK
775 It is very common for a coro to wait for some callback to be called.
776 This occurs naturally when you use coro in an otherwise event-based
777 program, or when you use event-based libraries.
778
779 These typically register a callback for some event, and call that
780 callback when the event occurred. In a coro, however, you typically want
781 to just wait for the event, simplifying things.
782
783 For example "AnyEvent->child" registers a callback to be called when a
784 specific child has exited:
785
786 my $child_watcher = AnyEvent->child (pid => $pid, cb => sub { ... });
787
788 But from within a coro, you often just want to write this:
789
790 my $status = wait_for_child $pid;
791
792 Coro offers two functions specifically designed to make this easy,
793 "Coro::rouse_cb" and "Coro::rouse_wait".
794
795 The first function, "rouse_cb", generates and returns a callback that,
796 when invoked, will save its arguments and notify the coro that created
797 the callback.
798
799 The second function, "rouse_wait", waits for the callback to be called
800 (by calling "schedule" to go to sleep) and returns the arguments
801 originally passed to the callback.
802
803 Using these functions, it becomes easy to write the "wait_for_child"
804 function mentioned above:
805
806 sub wait_for_child($) {
807 my ($pid) = @_;
808
809 my $watcher = AnyEvent->child (pid => $pid, cb => Coro::rouse_cb);
810
811 my ($rpid, $rstatus) = Coro::rouse_wait;
812 $rstatus
813 }
814
815 In the case where "rouse_cb" and "rouse_wait" are not flexible enough,
816 you can roll your own, using "schedule":
817
818 sub wait_for_child($) {
819 my ($pid) = @_;
820
821 # store the current coro in $current,
822 # and provide result variables for the closure passed to ->child
823 my $current = $Coro::current;
824 my ($done, $rstatus);
825
826 # pass a closure to ->child
827 my $watcher = AnyEvent->child (pid => $pid, cb => sub {
828 $rstatus = $_[1]; # remember rstatus
829 $done = 1; # mark $rstatus as valid
830 });
831
832 # wait until the closure has been called
833 schedule while !$done;
834
835 $rstatus
836 }
837
838 BUGS/LIMITATIONS
839 fork with pthread backend
840 When Coro is compiled using the pthread backend (which isn't
841 recommended but required on many BSDs as their libcs are completely
842 broken), then coro will not survive a fork. There is no known
843 workaround except to fix your libc and use a saner backend.
844
845 perl process emulation ("threads")
846 This module is not perl-pseudo-thread-safe. You should only ever use
847 this module from the first thread (this requirement might be removed
848 in the future to allow per-thread schedulers, but Coro::State does
849 not yet allow this). I recommend disabling thread support and using
850 processes, as having the windows process emulation enabled under
851 unix roughly halves perl performance, even when not used.
852
853 coro switching is not signal safe
854 You must not switch to another coro from within a signal handler
855 (only relevant with %SIG - most event libraries provide safe
856 signals), *unless* you are sure you are not interrupting a Coro
857 function.
858
859 That means you *MUST NOT* call any function that might "block" the
860 current coro - "cede", "schedule" "Coro::Semaphore->down" or
861 anything that calls those. Everything else, including calling
862 "ready", works.
863
864 WINDOWS PROCESS EMULATION
865 A great many people seem to be confused about ithreads (for example,
866 Chip Salzenberg called me unintelligent, incapable, stupid and gullible,
867 while in the same mail making rather confused statements about perl
868 ithreads (for example, that memory or files would be shared), showing
869 his lack of understanding of this area - if it is hard to understand for
870 Chip, it is probably not obvious to everybody).
871
872 What follows is an ultra-condensed version of my talk about threads in
873 scripting languages given at the perl workshop 2009:
874
875 The so-called "ithreads" were originally implemented for two reasons:
876 first, to (badly) emulate unix processes on native win32 perls, and
877 secondly, to replace the older, real thread model ("5.005-threads").
878
879 It does that by using threads instead of OS processes. The difference
880 between processes and threads is that threads share memory (and other
881 state, such as files) between threads within a single process, while
882 processes do not share anything (at least not semantically). That means
883 that modifications done by one thread are seen by others, while
884 modifications by one process are not seen by other processes.
885
886 The "ithreads" work exactly like that: when creating a new ithreads
887 process, all state is copied (memory is copied physically, files and
888 code are copied logically). Afterwards, it isolates all modifications. On
889 UNIX, the same behaviour can be achieved by using operating system
890 processes, except that UNIX typically uses hardware built into the
891 system to do this efficiently, while the windows process emulation
892 emulates this hardware in software (rather efficiently, but of course it
893 is still much slower than dedicated hardware).
894
895 As mentioned before, loading code, modifying code, modifying data
896 structures and so on is only visible in the ithreads process doing the
897 modification, not in other ithread processes within the same OS process.
898
899 This is why "ithreads" do not implement threads for perl at all, only
900 processes. What makes it so bad is that on non-windows platforms, you
901 can actually take advantage of custom hardware for this purpose (as
902 evidenced by the forks module, which gives you the (i-) threads API,
903 just much faster).
904
905 Sharing data in the i-threads model is done by transferring data
906 structures between threads using copying semantics, which is very slow -
907 shared data simply does not exist. Benchmarks using i-threads which are
908 communication-intensive show extremely bad behaviour with i-threads (in
909 fact, so bad that Coro, which cannot take direct advantage of multiple
910 CPUs, is often orders of magnitude faster because it shares data using
911 real threads, refer to my talk for details).
912
913 In summary, i-threads *use* threads to implement processes, while the
914 compatible forks module *uses* processes to emulate, uhm, processes.
915 I-threads slow down every perl program when enabled, and outside of
916 windows, serve no (or little) practical purpose, but disadvantage every
917 single-threaded Perl program.
918
919 This is the reason I try to avoid the name "ithreads": it is
920 misleading, as it implies that it implements some kind of thread model
921 for perl. I prefer the name "windows process emulation", which
922 describes its actual use and behaviour much better.
923
924 SEE ALSO
925 Event-Loop integration: Coro::AnyEvent, Coro::EV, Coro::Event.
926
927 Debugging: Coro::Debug.
928
929 Support/Utility: Coro::Specific, Coro::Util.
930
931 Locking and IPC: Coro::Signal, Coro::Channel, Coro::Semaphore,
932 Coro::SemaphoreSet, Coro::RWLock.
933
934 I/O and Timers: Coro::Timer, Coro::Handle, Coro::Socket, Coro::AIO.
935
936 Compatibility with other modules: Coro::LWP (but see also AnyEvent::HTTP
937 for a better-working alternative), Coro::BDB, Coro::Storable,
938 Coro::Select.
939
940 XS API: Coro::MakeMaker.
941
942 Low level Configuration, Thread Environment, Continuations: Coro::State.
943
944 AUTHOR
945 Marc Lehmann <schmorp@schmorp.de>
946 http://home.schmorp.de/
947