
1 =head1 NAME
2
3 Coro - the only real threads in perl
4
5 =head1 SYNOPSIS
6
7 use Coro;
8
9 async {
10 # some asynchronous thread of execution
11 print "2\n";
12 cede; # yield back to main
13 print "4\n";
14 };
15 print "1\n";
16 cede; # yield to coro
17 print "3\n";
18 cede; # and again
19
20 # use locking
21 my $lock = new Coro::Semaphore;
22 my $locked;
23
24 $lock->down;
25 $locked = 1;
26 $lock->up;
27
28 =head1 DESCRIPTION
29
30 For a tutorial-style introduction, please read the L<Coro::Intro>
31 manpage. This manpage mainly contains reference information.
32
33 This module collection manages continuations in general, most often in
34 the form of cooperative threads (also called coros, or simply "coro"
35 in the documentation). They are similar to kernel threads but don't (in
36 general) run in parallel at the same time even on SMP machines. The
37 specific flavor of thread offered by this module also guarantees you that
38 it will not switch between threads unless necessary, at easily-identified
39 points in your program, so locking and parallel access are rarely an
40 issue, making thread programming much safer and easier than using other
41 thread models.
42
43 Unlike the so-called "Perl threads" (which are not actually real threads
44 but only the windows process emulation (see the section of the same name for
45 more details) ported to UNIX, and as such act as processes), Coro
46 provides a full shared address space, which makes communication between
47 threads very easy. And coro threads are fast, too: disabling the Windows
48 process emulation code in your perl and using Coro can easily result in
49 a two to four times speed increase for your programs. A parallel matrix
50 multiplication benchmark (very communication-intensive) runs over 300
51 times faster on a single core than perl's pseudo-threads on a quad core
52 using all four cores.
53
54 Coro achieves that by supporting multiple running interpreters that share
55 data, which is especially useful to code pseudo-parallel processes and
56 for event-based programming, such as multiple HTTP-GET requests running
57 concurrently. See L<Coro::AnyEvent> to learn more on how to integrate Coro
58 into an event-based environment.
59
60 In this module, a thread is defined as "callchain + lexical variables +
61 some package variables + C stack", that is, a thread has its own callchain,
62 its own set of lexicals and its own set of perl's most important global
63 variables (see L<Coro::State> for more configuration and background info).
64
65 See also the C<SEE ALSO> section at the end of this document - the Coro
66 module family is quite large.
67
68 =head1 CORO THREAD LIFE CYCLE
69
70 During the long and exciting (or not) life of a coro thread, it goes
71 through a number of states:
72
73 =over 4
74
75 =item 1. Creation
76
77 The first thing in the life of a coro thread is its creation -
78 obviously. The typical way to create a thread is to call the C<async
79 BLOCK> function:
80
81 async {
82 # thread code goes here
83 };
84
85 You can also pass arguments, which are put in C<@_>:
86
87 async {
88 print $_[1]; # prints 2
89 } 1, 2, 3;
90
91 This creates a new coro thread and puts it into the ready queue, meaning
92 it will run as soon as the CPU is free for it.
93
94 C<async> will return a Coro object - you can store this for future
95 reference or ignore it - a thread that is running, ready to run or waiting
96 for some event is alive on its own.
97
98 Another way to create a thread is to call the C<new> constructor with a
99 code-reference:
100
101 new Coro sub {
102 # thread code goes here
103 }, @optional_arguments;
104
105 This is quite similar to calling C<async>, but the important difference is
106 that the new thread is not put into the ready queue, so the thread will
107 not run until somebody puts it there. C<async> is, therefore, identical to
108 this sequence:
109
110 my $coro = new Coro sub {
111 # thread code goes here
112 };
113 $coro->ready;
114 return $coro;
115
116 =item 2. Startup
117
118 When a new coro thread is created, only a copy of the code reference
119 and the arguments are stored, no extra memory for stacks and so on is
120 allocated, keeping the coro thread in a low-memory state.
121
122 Only when it actually starts executing will all the resources be finally
123 allocated.
124
125 The optional arguments specified at coro creation are available in C<@_>,
126 similar to function calls.
127
128 =item 3. Running / Blocking
129
130 A lot can happen after the coro thread has started running. Usually it
131 will not run to the end in one go (otherwise you could have used a plain
132 function instead), but it will give up the CPU regularly while it waits
133 for external events.
134
135 As long as a coro thread runs, its Coro object is available in the global
136 variable C<$Coro::current>.
137
138 The low-level way to give up the CPU is to call the scheduler, which
139 selects a new coro thread to run:
140
141 Coro::schedule;
142
143 Since running threads are not in the ready queue, calling the scheduler
144 without doing anything else will block the coro thread forever - you need
145 to arrange either for the coro to be woken up (readied) by some other
146 event or some other thread, or you can put it into the ready queue before
147 scheduling:
148
149 # this is exactly what Coro::cede does
150 $Coro::current->ready;
151 Coro::schedule;
152
153 All the higher-level synchronisation methods (Coro::Semaphore,
154 Coro::rouse_*...) are actually implemented via C<< ->ready >> and C<<
155 Coro::schedule >>.
156
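As a simplified illustration only (this is I<not> the actual Coro::Semaphore
code), a minimal one-shot wait/wakeup can be built from just these two
operations - the waiting thread remembers its own Coro object and blocks,
the waking thread readies it (all names below are made up for this example):

   my $waiter;   # Coro object of the blocked thread
   my $woken;    # guards against spurious wakeups

   sub wait_for_wakeup {
      $waiter = $Coro::current;
      Coro::schedule until $woken;   # block until somebody readied us
   }

   sub wakeup {
      $woken = 1;
      $waiter->ready if $waiter;     # put the waiter back into the ready queue
   }
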
157 While the coro thread is running it also might get assigned a C-level
158 thread, or the C-level thread might be unassigned from it, as the Coro
159 runtime wishes. A C-level thread needs to be assigned when your perl
160 thread calls into some C-level function and that function in turn calls
161 perl and perl then wants to switch coroutines. This happens most often
162 when you run an event loop and block in the callback, or when perl
163 itself calls some function such as C<AUTOLOAD> or methods via the C<tie>
164 mechanism.
165
166 =item 4. Termination
167
168 Many threads actually terminate after some time. There are a number of
169 ways to terminate a coro thread, the simplest is returning from the
170 top-level code reference:
171
172 async {
173 # after returning from here, the coro thread is terminated
174 };
175
176 async {
177 return if 0.5 < rand; # terminate a little earlier, maybe
178 print "got a chance to print this\n";
179 # or here
180 };
181
182 Any values returned from the coroutine can be recovered using C<< ->join
183 >>:
184
185 my $coro = async {
186 "hello, world\n" # return a string
187 };
188
189 my $hello_world = $coro->join;
190
191 print $hello_world;
192
193 Another way to terminate is to call C<< Coro::terminate >>, which works at
194 any subroutine call nesting level:
195
196 async {
197 Coro::terminate "return value 1", "return value 2";
198 };
199
200 Yet another way is to C<< ->cancel >> (or C<< ->safe_cancel >>) the coro
201 thread from another thread:
202
203 my $coro = async {
204 exit 1;
205 };
206
207 $coro->cancel; # also accepts values for ->join to retrieve
208
209 Cancellation I<can> be dangerous - it's a bit like calling C<exit> without
210 actually exiting, and might leave C libraries and XS modules in a weird
211 state. Unlike other thread implementations, however, Coro is exceptionally
212 safe with regards to cancellation, as perl will always be in a consistent
213 state, and for those cases where you want to do truly marvellous things
214 with your coro while it is being cancelled - that is, make sure all
215 cleanup code is executed from the thread being cancelled - there is even a
216 C<< ->safe_cancel >> method.
217
218 So, cancelling a thread that runs in an XS event loop might not be the
219 best idea, but any other combination that deals with perl only (cancelling
220 when a thread is in a C<tie> method or an C<AUTOLOAD> for example) is
221 safe.
222
223 Last but not least, a coro thread object that isn't referenced is C<<
224 ->cancel >>'ed automatically - just like other objects in Perl. This
225 is not such a common case, however - a running thread is referenced by
226 C<$Coro::current>, a thread ready to run is referenced by the ready queue,
227 a thread waiting on a lock or semaphore is referenced by being in some
228 wait list and so on. But a thread that isn't in any of those queues gets
229 cancelled:
230
231 async {
232 schedule; # cede to other coros, don't go into the ready queue
233 };
234
235 cede;
236 # now the async above is destroyed, as it is not referenced by anything.
237
238 A slightly embellished example might make it clearer:
239
240 async {
241 my $guard = Guard::guard { print "destroyed\n" };
242 schedule while 1;
243 };
244
245 cede;
246
247 Superficially one might not expect any output - since the C<async>
248 implements an endless loop, the C<$guard> will not be cleaned up. However,
249 since the thread object returned by C<async> is not stored anywhere, the
250 thread is initially referenced because it is in the ready queue; when it
251 runs, it is referenced by C<$Coro::current>; but when it calls C<schedule>,
252 it gets C<cancel>ed, causing the guard object to be destroyed (see the next
253 section) and its message to be printed.
254
255 If this seems a bit drastic, remember that this only happens when nothing
256 references the thread anymore, which means there is no way to further
257 execute it, ever. The only options at this point are leaking the thread,
258 or cleaning it up, which brings us to...
259
260 =item 5. Cleanup
261
262 Threads will allocate various resources. Most but not all will be returned
263 when a thread terminates, during clean-up.
264
265 Cleanup is quite similar to throwing an uncaught exception: perl will
266 work its way up through all subroutine calls and blocks. On its way, it
267 will release all C<my> variables, undo all C<local>'s and free any other
268 resources truly local to the thread.
269
270 So, a common way to free resources is to keep them referenced only by
271 C<my> variables:
272
273 async {
274 my $big_cache = new Cache ...;
275 };
276
277 If there are no other references, then the C<$big_cache> object will be
278 freed when the thread terminates, regardless of how it does so.
279
280 What it does B<not> do is unlock any Coro::Semaphores or similar
281 resources, but that's where the C<guard> methods come in handy:
282
283 my $sem = new Coro::Semaphore;
284
285 async {
286 my $lock_guard = $sem->guard;
287 # if we return, or die or get cancelled, here,
288 # then the semaphore will be "up"ed.
289 };
290
291 The C<Guard::guard> function comes in handy for any custom cleanup you
292 might want to do (but you cannot switch to other coroutines from those
293 code blocks):
294
295 async {
296 my $window = new Gtk2::Window "toplevel";
297 # The window will not be cleaned up automatically, even when $window
298 # gets freed, so use a guard to ensure its destruction
299 # in case of an error:
300 my $window_guard = Guard::guard { $window->destroy };
301
302 # we are safe here
303 };
304
305 Last but not least, C<local> can often be handy, too, e.g. when temporarily
306 replacing the coro thread description:
307
308 sub myfunction {
309 local $Coro::current->{desc} = "inside myfunction(@_)";
310
311 # if we return or die here, the description will be restored
312 }
313
314 =item 6. Viva La Zombie Muerte
315
316 Even after a thread has terminated and cleaned up its resources, the Coro
317 object still is there and stores the return values of the thread.
318
319 When there are no other references, it will simply be cleaned up and
320 freed.
321
322 If there are any references, the Coro object will stay around, and you
323 can call C<< ->join >> as many times as you wish to retrieve the result
324 values:
325
326 async {
327 print "hi\n";
328 1
329 };
330
331 # run the async above, and free everything before returning
332 # from Coro::cede:
333 Coro::cede;
334
335 {
336 my $coro = async {
337 print "hi\n";
338 1
339 };
340
341 # run the async above, and clean up, but do not free the coro
342 # object:
343 Coro::cede;
344
345 # optionally retrieve the result values
346 my @results = $coro->join;
347
348 # now $coro goes out of scope, and presumably gets freed
349 };
350
351 =back
352
353 =cut
354
355 package Coro;
356
357 use common::sense;
358
359 use Carp ();
360
361 use Guard ();
362
363 use Coro::State;
364
365 use base qw(Coro::State Exporter);
366
367 our $idle; # idle handler
368 our $main; # main coro
369 our $current; # current coro
370
371 our $VERSION = 6.28;
372
373 our @EXPORT = qw(async async_pool cede schedule terminate current unblock_sub rouse_cb rouse_wait);
374 our %EXPORT_TAGS = (
375 prio => [qw(PRIO_MAX PRIO_HIGH PRIO_NORMAL PRIO_LOW PRIO_IDLE PRIO_MIN)],
376 );
377 our @EXPORT_OK = (@{$EXPORT_TAGS{prio}}, qw(nready));
378
379 =head1 GLOBAL VARIABLES
380
381 =over 4
382
383 =item $Coro::main
384
385 This variable stores the Coro object that represents the main
386 program. While you can C<ready> it and do most other things you can do to
387 a coro, it is mainly useful to compare it against C<$Coro::current>, to see
388 whether you are running in the main program or not.
389
390 =cut
391
392 # $main is now being initialised by Coro::State
393
394 =item $Coro::current
395
396 The Coro object representing the current coro (the last
397 coro that the Coro scheduler switched to). The initial value is
398 C<$Coro::main> (of course).
399
400 This variable is B<strictly> I<read-only>. You can take copies of the
401 value stored in it and use it as any other Coro object, but you must
402 not otherwise modify the variable itself.
403
404 =cut
405
406 sub current() { $current } # [DEPRECATED]
407
408 =item $Coro::idle
409
410 This variable is mainly useful to integrate Coro into event loops. It is
411 usually better to rely on L<Coro::AnyEvent> or L<Coro::EV>, as this is
412 pretty low-level functionality.
413
414 This variable stores a Coro object that is put into the ready queue when
415 there are no other ready threads (without invoking any ready hooks).
416
417 The default implementation dies with "FATAL: deadlock detected.", followed
418 by a thread listing, because the program has no other way to continue.
419
420 This hook is overwritten by modules such as C<Coro::EV> and
421 C<Coro::AnyEvent> to wait on an external event that hopefully wakes up a
422 coro so the scheduler can run it.
423
424 See L<Coro::EV> or L<Coro::AnyEvent> for examples of using this technique.
425
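As a very rough, illustrative sketch only (the real L<Coro::EV> integration
is more careful), and assuming the L<EV> module is loaded, such an idle
thread could block in the event loop once and then hand control back to
whatever became ready:

   $Coro::idle = new Coro sub {
      while () {
         EV::run EV::RUN_ONCE;   # block until at least one event was handled
         Coro::schedule;         # switch to whatever coro became ready
      }
   };
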
426 =cut
427
428 # ||= because other modules could have provided their own by now
429 $idle ||= new Coro sub {
430 require Coro::Debug;
431 die "FATAL: deadlock detected.\n"
432 . Coro::Debug::ps_listing ();
433 };
434
435 # this coro is necessary because a coro
436 # cannot destroy itself.
437 our @destroy;
438 our $manager;
439
440 $manager = new Coro sub {
441 while () {
442 _destroy shift @destroy
443 while @destroy;
444
445 &schedule;
446 }
447 };
448 $manager->{desc} = "[coro manager]";
449 $manager->prio (PRIO_MAX);
450
451 =back
452
453 =head1 SIMPLE CORO CREATION
454
455 =over 4
456
457 =item async { ... } [@args...]
458
459 Create a new coro and return its Coro object (usually
460 unused). The coro will be put into the ready queue, so
461 it will start running automatically on the next scheduler run.
462
463 The first argument is a codeblock/closure that should be executed in the
464 coro. When it returns, the coro is automatically
465 terminated.
466
467 The remaining arguments are passed as arguments to the closure.
468
469 See the C<Coro::State::new> constructor for info about the coro
470 environment in which coros are executed.
471
472 Calling C<exit> in a coro will do the same as calling exit outside
473 the coro. Likewise, when the coro dies, the program will exit,
474 just as it would in the main program.
475
476 If you do not want that, you can provide a default C<die> handler, or
477 simply avoid dying (by use of C<eval>).
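
For example, a coro whose failure should not take down the whole program
can simply wrap its body in C<eval> (C<do_something_risky> is a made-up
function, standing in for your own code):

   async {
      eval {
         do_something_risky ();
      };
      warn "thread failed: $@" if $@;
   };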
478
479 Example: Create a new coro that just prints its arguments.
480
481 async {
482 print "@_\n";
483 } 1,2,3,4;
484
485 =item async_pool { ... } [@args...]
486
487 Similar to C<async>, but uses a coro pool, so you should not call
488 terminate or join on it (although you are allowed to), and you get a
489 coro that might have executed other code already (which can be good
490 or bad :).
491
492 On the plus side, this function is about twice as fast as creating (and
493 destroying) a completely new coro, so if you need a lot of generic
494 coros in quick succession, use C<async_pool>, not C<async>.
495
496 The code block is executed in an C<eval> context and a warning will be
497 issued in case of an exception instead of terminating the program, as
498 C<async> does. As the coro is being reused, stuff like C<on_destroy>
499 will not work in the expected way, unless you call terminate or cancel,
500 which somehow defeats the purpose of pooling (but is fine in the
501 exceptional case).
502
503 The priority will be reset to C<0> after each run, tracing will be
504 disabled, the description will be reset and the default output filehandle
505 gets restored, so you can change all these. Otherwise the coro will
506 be re-used "as-is": most notably if you change other per-coro global
507 stuff such as C<$/> you I<must needs> revert that change, which is most
508 simply done by using local as in: C<< local $/ >>.
509
510 The idle pool size is limited to C<8> idle coros (this can be
511 adjusted by changing $Coro::POOL_SIZE), but there can be as many non-idle
512 coros as required.
513
514 If you are concerned about pooled coros growing a lot because a
515 single C<async_pool> used a lot of stackspace you can e.g. call C<async_pool
516 { terminate }> once per second or so to slowly replenish the pool. In
517 addition to that, when the stack used by a handler grows larger than 32kb
518 (adjustable via $Coro::POOL_RSS) it will also be destroyed.
519
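As a small usage sketch, handling many short-lived jobs with pooled coros
(C<next_job> and C<process_job> are hypothetical functions standing in for
your own code):

   while (my $job = next_job ()) {
      async_pool {
         process_job $_[0];   # the job is passed in @_, just as with async
      } $job;
   }
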
520 =cut
521
522 our $POOL_SIZE = 8;
523 our $POOL_RSS = 32 * 1024;
524 our @async_pool;
525
526 sub pool_handler {
527 while () {
528 eval {
529 &{&_pool_handler} while 1;
530 };
531
532 warn $@ if $@;
533 }
534 }
535
536 =back
537
538 =head1 STATIC METHODS
539
540 Static methods are actually functions that implicitly operate on the
541 current coro.
542
543 =over 4
544
545 =item schedule
546
547 Calls the scheduler. The scheduler will find the next coro that is
548 to be run from the ready queue and switches to it. The next coro
549 to be run is simply the one with the highest priority that has been in
550 its ready queue the longest. If there is no coro ready, it will call the
551 C<$Coro::idle> hook.
552
553 Please note that the current coro will I<not> be put into the ready
554 queue, so calling this function usually means you will never be called
555 again unless something else (e.g. an event handler) calls C<< ->ready >>,
556 thus waking you up.
557
558 This makes C<schedule> I<the> generic method to use to block the current
559 coro and wait for events: first you remember the current coro in
560 a variable, then arrange for some callback of yours to call C<< ->ready
561 >> on that once some event happens, and last you call C<schedule> to put
562 yourself to sleep. Note that a lot of things can wake your coro up,
563 so you need to check whether the event indeed happened, e.g. by storing the
564 status in a variable.
565
566 See B<HOW TO WAIT FOR A CALLBACK>, below, for some ways to wait for callbacks.
567
568 =item cede
569
570 "Cede" to other coros. This function puts the current coro into
571 the ready queue and calls C<schedule>, which has the effect of giving
572 up the current "timeslice" to other coros of the same or higher
573 priority. Once your coro gets its turn again it will automatically be
574 resumed.
575
576 This function is often called C<yield> in other languages.
577
578 =item Coro::cede_notself
579
580 Works like cede, but is not exported by default and will cede to I<any>
581 coro, regardless of priority. This is useful sometimes to ensure
582 progress is made.
583
584 =item terminate [arg...]
585
586 Terminates the current coro with the given status values (see
587 C<cancel>). The values will not be copied, but referenced directly.
588
589 =item Coro::on_enter BLOCK, Coro::on_leave BLOCK
590
591 These functions install enter and leave winders in the current scope. The
592 enter block will be executed when on_enter is called and whenever the
593 current coro is re-entered by the scheduler, while the leave block is
594 executed whenever the current coro is blocked by the scheduler, and
595 also when the containing scope is exited (by whatever means, be it exit,
596 die, last etc.).
597
598 I<Neither invoking the scheduler, nor exceptions, are allowed within those
599 BLOCKs>. That means: do not even think about calling C<die> without an
600 eval, and do not even think of entering the scheduler in any way.
601
602 Since both BLOCKs are tied to the current scope, they will automatically
603 be removed when the current scope exits.
604
605 These functions implement the same concept as C<dynamic-wind> in scheme
606 does, and are useful when you want to localise some resource to a specific
607 coro.
608
609 They slow down thread switching considerably for coros that use them
610 (about 40% for a BLOCK with a single assignment, so thread switching is
611 still reasonably fast if the handlers are fast).
612
613 These functions are best understood by an example: The following function
614 will change the current timezone to "Antarctica/South_Pole", which
615 requires a call to C<tzset>, but by using C<on_enter> and C<on_leave>,
616 which remember/change the current timezone and restore the previous
617 value, respectively, the timezone is only changed for the coro that
618 installed those handlers.
619
620 use POSIX qw(tzset);
621
622 async {
623 my $old_tz; # store outside TZ value here
624
625 Coro::on_enter {
626 $old_tz = $ENV{TZ}; # remember the old value
627
628 $ENV{TZ} = "Antarctica/South_Pole";
629 tzset; # enable new value
630 };
631
632 Coro::on_leave {
633 $ENV{TZ} = $old_tz;
634 tzset; # restore old value
635 };
636
637 # at this place, the timezone is Antarctica/South_Pole,
638 # without disturbing the TZ of any other coro.
639 };
640
641 This can be used to localise just about any resource (locale, uid, current
642 working directory etc.) to a block, despite the existence of other
643 coros.
644
645 Another interesting example implements time-sliced multitasking using
646 interval timers (this could obviously be optimised, but does the job):
647
648 # "timeslice" the given block
649 sub timeslice(&) {
650 use Time::HiRes ();
651
652 Coro::on_enter {
653 # on entering the thread, we set a VTALRM handler to cede
654 $SIG{VTALRM} = sub { cede };
655 # and then start the interval timer
656 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0.01, 0.01;
657 };
658 Coro::on_leave {
659 # on leaving the thread, we stop the interval timer again
660 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0, 0;
661 };
662
663 &{+shift};
664 }
665
666 # use like this:
667 timeslice {
668 # The following is an endless loop that would normally
669 # monopolise the process. Since it runs in a timesliced
670 # environment, it will regularly cede to other threads.
671 while () { }
672 };
673
674
675 =item killall
676
677 Kills/terminates/cancels all coros except the currently running one.
678
679 Note that this will try to free some of the main interpreter
680 resources if the calling coro isn't the main coro, but one
681 cannot free all of them, so if a coro that is not the main coro
682 calls this function, there will be some one-time resource leak.
683
684 =cut
685
686 sub killall {
687 for (Coro::State::list) {
688 $_->cancel
689 if $_ != $current && UNIVERSAL::isa $_, "Coro";
690 }
691 }
692
693 =back
694
695 =head1 CORO OBJECT METHODS
696
697 These are the methods you can call on coro objects (or to create
698 them).
699
700 =over 4
701
702 =item new Coro \&sub [, @args...]
703
704 Create a new coro and return it. When the sub returns, the coro
705 automatically terminates as if C<terminate> with the returned values were
706 called. To make the coro run you must first put it into the ready
707 queue by calling the ready method.
708
709 See C<async> and C<Coro::State::new> for additional info about the
710 coro environment.
711
712 =cut
713
714 sub _coro_run {
715 terminate &{+shift};
716 }
717
718 =item $success = $coro->ready
719
720 Put the given coro into the end of its ready queue (there is one
721 queue for each priority) and return true. If the coro is already in
722 the ready queue, do nothing and return false.
723
724 This ensures that the scheduler will resume this coro automatically
725 once all the coros of higher priority and all coros of the same
726 priority that were put into the ready queue earlier have been resumed.
727
728 =item $coro->suspend
729
730 Suspends the specified coro. A suspended coro works just like any other
731 coro, except that the scheduler will not select a suspended coro for
732 execution.
733
734 Suspending a coro can be useful when you want to keep the coro from
735 running, but you don't want to destroy it, or when you want to temporarily
736 freeze a coro (e.g. for debugging) to resume it later.
737
738 A scenario for the former would be to suspend all (other) coros after a
739 fork and keep them alive, so their destructors aren't called, but new
740 coros can be created.
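
A rough sketch of that scenario, using the same C<Coro::State::list>
enumeration that C<killall> above uses (illustrative only):

   unless (fork) {
      # in the child: freeze all other coros so they neither run
      # nor get destroyed, while new coros can still be created
      $_->suspend
         for grep $_ != $Coro::current && UNIVERSAL::isa ($_, "Coro"),
                 Coro::State::list;

      # ... create and run new coros in the child here ...
   }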
741
742 =item $coro->resume
743
744 If the specified coro was suspended, it will be resumed. Note that when
745 the coro was in the ready queue when it was suspended, it might have been
746 unreadied by the scheduler, so an activation might have been lost.
747
748 To avoid this, it is best to put a suspended coro into the ready queue
749 unconditionally, as every synchronisation mechanism must protect itself
750 against spurious wakeups, and the ones in the Coro family certainly do
751 that.
752
753 =item $state->is_new
754
755 Returns true iff this Coro object is "new", i.e. has never been run
756 yet. Those states basically consist of only the code reference to call and
757 the arguments, but consume very little other resources. New states will
758 automatically get assigned a perl interpreter when they are transferred to.
759
760 =item $state->is_zombie
761
762 Returns true iff the Coro object has been cancelled, i.e.
763 its resources freed because it was C<cancel>'ed, C<terminate>'d,
764 C<safe_cancel>'ed or simply went out of scope.
765
766 The name "zombie" stems from UNIX culture, where a process that has
767 exited and only stores an exit status and no other resources is called a
768 "zombie".
769
770 =item $is_ready = $coro->is_ready
771
772 Returns true iff the Coro object is in the ready queue. Unless the Coro
773 object gets destroyed, it will eventually be scheduled by the scheduler.
774
775 =item $is_running = $coro->is_running
776
777 Returns true iff the Coro object is currently running. Only one Coro object
778 can ever be in the running state (but it currently is possible to have
779 multiple running Coro::States).
780
781 =item $is_suspended = $coro->is_suspended
782
783 Returns true iff this Coro object has been suspended. Suspended Coros will
784 not ever be scheduled.
785
786 =item $coro->cancel (arg...)
787
788 Terminates the given Coro thread and makes it return the given arguments as
789 status (default: an empty list). Never returns if the Coro is the
790 current Coro.
791
792 This is a rather brutal way to free a coro, with some limitations - if
793 the thread is inside a C callback that doesn't expect to be canceled,
794 bad things can happen, or if the cancelled thread insists on running
795 complicated cleanup handlers that rely on its thread context, things will
796 not work.
797
798 Any cleanup code being run (e.g. from C<guard> blocks) will be run without
799 a thread context, and is not allowed to switch to other threads. On the
800 plus side, C<< ->cancel >> will always clean up the thread, no matter
801 what. If your cleanup code is complex or you want to avoid cancelling a
802 C-thread that doesn't know how to clean up itself, it can be better to C<<
803 ->throw >> an exception, or use C<< ->safe_cancel >>.
804
805 The arguments to C<< ->cancel >> are not copied, but instead will
806 be referenced directly (e.g. if you pass C<$var> and after the call
807 change that variable, then you might change the return values passed to
808 e.g. C<join>, so don't do that).
809
810 The resources of the Coro are usually freed (or destructed) before this
811 call returns, but this can be delayed for an indefinite amount of time, as
812 in some cases the manager thread has to run first to actually destruct the
813 Coro object.
814
815 =item $coro->safe_cancel ($arg...)
816
817 Works mostly like C<< ->cancel >>, but is inherently "safer", and
818 consequently, can fail with an exception in cases where the thread is not
819 in a cancellable state.
820
821 This method works a bit like throwing an exception that cannot be caught
822 - specifically, it will clean up the thread from within itself, so
823 all cleanup handlers (e.g. C<guard> blocks) are run with full thread
824 context and can block if they wish. The downside is that there is no
825 guarantee that the thread can be cancelled when you call this method, and
826 therefore, it might fail. It is also considerably slower than C<cancel> or
827 C<terminate>.
828
829 A thread is in a safe-cancellable state if it either hasn't been run yet,
830 or it has no C context attached and is inside an SLF function.
831
832 The latter two basically mean that the thread isn't currently inside a
833 perl callback called from some C function (usually via some XS modules)
834 and isn't currently executing inside some C function itself (via Coro's XS
835 API).
836
837 This call returns true when it could cancel the thread, or croaks with an
838 error otherwise (i.e. it either returns true or doesn't return at all).
839
840 Why the weird interface? Well, there are two common models on how and
841 when to cancel things. In the first, you have the expectation that your
842 coro thread can be cancelled when you want to cancel it - if the thread
843 isn't cancellable, this would be a bug somewhere, so C<< ->safe_cancel >>
844 croaks to notify of the bug.
845
846 In the second model you sometimes want to ask nicely to cancel a thread,
847 but if it's not a good time, well, then don't cancel. This can be done
848 relatively easily, like this:
849
850 if (! eval { $coro->safe_cancel }) {
851 warn "unable to cancel thread: $@";
852 }
853
854 However, what you never should do is first try to cancel "safely" and
855 if that fails, cancel the "hard" way with C<< ->cancel >>. That makes
856 no sense: either you rely on being able to execute cleanup code in your
857 thread context, or you don't. If you do, then C<< ->safe_cancel >> is the
858 only way, and if you don't, then C<< ->cancel >> is always faster and more
859 direct.
860
861 =item $coro->schedule_to
862
863 Puts the current coro to sleep (like C<Coro::schedule>), but instead
864 of continuing with the next coro from the ready queue, always switch to
865 the given coro object (regardless of priority etc.). The readiness
866 state of that coro isn't changed.
867
868 This is an advanced method for special cases - I'd love to hear about any
869 uses for this one.
870
871 =item $coro->cede_to
872
873 Like C<schedule_to>, but puts the current coro into the ready
874 queue. This has the effect of temporarily switching to the given
875 coro, and continuing some time later.
876
877 This is an advanced method for special cases - I'd love to hear about any
878 uses for this one.
879
880 =item $coro->throw ([$scalar])
881
882 If C<$scalar> is specified and defined, it will be thrown as an exception
883 inside the coro at the next convenient point in time. Otherwise
884 clears the exception object.
885
886 Coro will check for the exception each time a schedule-like-function
887 returns, i.e. after each C<schedule>, C<cede>, C<< Coro::Semaphore->down
888 >>, C<< Coro::Handle->readable >> and so on. Most of those functions (all
889 that are part of Coro itself) detect this case and return early in case an
890 exception is pending.
891
892 The exception object will be thrown "as is" with the specified scalar in
893 C<$@>, i.e. if it is a string, no line number or newline will be appended
894 (unlike with C<die>).
895
896 This can be used as a softer means than either C<cancel> or C<safe_cancel>
897 to ask a coro to end itself, although there is no guarantee that the
898 exception will lead to termination, and if the exception isn't caught it
899 might well end the whole program.
900
901 You might also think of C<throw> as being the moral equivalent of
902 C<kill>ing a coro with a signal (in this case, a scalar).
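
For example, to ask a worker thread to stop, and let the worker handle (or
ignore) the request - a sketch, where the C<eval> catches the exception
raised inside the worker:

   my $worker = async {
      eval {
         while () {
            # do a unit of work, then let other threads run
            cede;
         }
      };
      warn "worker was asked to stop: $@" if $@;
   };

   cede; # let the worker run for a bit

   $worker->throw ("shutdown requested\n");

   cede; # give the worker a chance to notice the exception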
903
904 =item $coro->join
905
906 Wait until the coro terminates and return any values given to the
907 C<terminate> or C<cancel> functions. C<join> can be called concurrently
908 from multiple threads, and all will be resumed and given the status
909 values once the C<$coro> terminates.
910
911 =item $coro->on_destroy (\&cb)
912
913 Registers a callback that is called when this coro thread gets destroyed,
914 that is, after its resources have been freed but before it is joined. The
915 callback gets passed the terminate/cancel arguments, if any, and I<must
916 not> die, under any circumstances.
917
918 There can be any number of C<on_destroy> callbacks per coro, and there is
919 no way currently to remove a callback once added.
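
For example, to be notified when a thread has been cleaned up, together
with its status values:

   my $coro = async {
      # ... do something ...
      "some result"
   };

   $coro->on_destroy (sub {
      print "thread finished with status: @_\n";
   });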
920
921 =item $oldprio = $coro->prio ($newprio)
922
923 Sets (or gets, if the argument is missing) the priority of the
924 coro thread. Higher priority coros get run before lower priority
925 coros. Priorities are small signed integers (currently -4 .. +3),
926 that you can refer to using PRIO_xxx constants (use the import tag :prio
927 to get them):
928
929 PRIO_MAX > PRIO_HIGH > PRIO_NORMAL > PRIO_LOW > PRIO_IDLE > PRIO_MIN
930 3 > 1 > 0 > -1 > -3 > -4
931
932 # set priority to HIGH
933 current->prio (PRIO_HIGH);
934
935 The idle coro thread ($Coro::idle) always has a lower priority than any
936 existing coro.
937
938 Changing the priority of the current coro will take effect immediately,
939 but changing the priority of a coro in the ready queue (but not running)
940 will only take effect after the next schedule (of that coro). This is a
941 bug that will be fixed in some future version.
942
943 =item $newprio = $coro->nice ($change)
944
945 Similar to C<prio>, but subtracts the given value from the priority (i.e.
946 higher values mean lower priority, just as in UNIX's nice command).
947
948 =item $olddesc = $coro->desc ($newdesc)
949
950 Sets (or gets in case the argument is missing) the description for this
951 coro thread. This is just a free-form string you can associate with a
952 coro.
953
954 This method simply sets the C<< $coro->{desc} >> member to the given
955 string. You can modify this member directly if you wish, and in fact, this
956 is often preferred to indicate major processing states that can then be
957 seen for example in a L<Coro::Debug> session:
958
959 sub my_long_function {
960 local $Coro::current->{desc} = "now in my_long_function";
961 ...
962 $Coro::current->{desc} = "my_long_function: phase 1";
963 ...
964 $Coro::current->{desc} = "my_long_function: phase 2";
965 ...
966 }
967
968 =cut
969
970 sub desc {
971 my $old = $_[0]{desc};
972 $_[0]{desc} = $_[1] if @_ > 1;
973 $old;
974 }
975
976 sub transfer {
977 require Carp;
978 Carp::croak ("You must not call ->transfer on Coro objects. Use Coro::State objects or the ->schedule_to method. Caught");
979 }
980
981 =back
982
983 =head1 GLOBAL FUNCTIONS
984
985 =over 4
986
987 =item Coro::nready
988
989 Returns the number of coros that are currently in the ready state,
990 i.e. that can be switched to by calling C<schedule> directly or
991 indirectly. The value C<0> means that the only runnable coro is the
992 currently running one, so C<cede> would have no effect, and C<schedule>
993 would cause a deadlock unless there is an idle handler that wakes up some
994 coro.
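
One common use is to let all currently ready threads run before doing more
work in the current thread (a small sketch):

   # give every thread that is ready right now a chance to run
   cede while Coro::nready;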
995
996 =item my $guard = Coro::guard { ... }
997
998 This function still exists, but is deprecated. Please use the
999 C<Guard::guard> function instead.
1000
1001 =cut
1002
1003 BEGIN { *guard = \&Guard::guard }
1004
1005 =item unblock_sub { ... }
1006
1007 This utility function takes a BLOCK or code reference and "unblocks" it,
1008 returning a new coderef. Unblocking means that calling the new coderef
1009 will return immediately without blocking, returning nothing, while the
1010 original code ref will be called (with parameters) from within another
1011 coro.
1012
1013 The reason this function exists is that many event libraries (such as
1014 the venerable L<Event|Event> module) are not thread-safe (a weaker form
1015 of reentrancy). This means you must not block within event callbacks,
1016 otherwise you might suffer from crashes or worse. The only event library
1017 currently known that is safe to use without C<unblock_sub> is L<EV> (but
1018 you might still run into deadlocks if all event loops are blocked).
1019
1020 Coro will try to catch you when you block in the event loop
1021 ("FATAL:$Coro::IDLE blocked itself"), but this is just best effort and
1022 only works when you do not run your own event loop.
1023
1024 This function allows your callbacks to block by executing them in another
1025 coro where it is safe to block. One example where blocking is handy
1026 is when you use the L<Coro::AIO|Coro::AIO> functions to save results to
1027 disk.
1028
1029 In short: simply use C<unblock_sub { ... }> instead of C<sub { ... }> when
1030 creating event callbacks that want to block.
1031
1032 If your handler does not plan to block (e.g. simply sends a message to
1033 another coro, or puts some other coro into the ready queue), there is
1034 no reason to use C<unblock_sub>.
1035
1036 Note that you also need to use C<unblock_sub> for any other callbacks that
1037 are indirectly executed by any C-based event loop. For example, when you
1038 use a module that uses L<AnyEvent> (and you use L<Coro::AnyEvent>) and it
1039 provides callbacks that are the result of some event callback, then you
1040 must not block either, or use C<unblock_sub>.
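
For example, an L<AnyEvent> timer callback that wants to block - here by
C<join>ing another thread, where C<$some_coro> is assumed to already exist -
could be written like this (a sketch):

   my $watcher = AnyEvent->timer (after => 1, cb => unblock_sub {
      # this callback may now block safely
      my $status = $some_coro->join;
      print "worker finished: $status\n";
   });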
1041
1042 =cut
1043
1044 our @unblock_queue;
1045
1046 # we create a special coro because we want to cede,
1047 # to reduce pressure on the coro pool (because most callbacks
1048 # return immediately and can be reused) and because we cannot cede
1049 # inside an event callback.
1050 our $unblock_scheduler = new Coro sub {
1051 while () {
1052 while (my $cb = pop @unblock_queue) {
1053 &async_pool (@$cb);
1054
1055 # for short-lived callbacks, this reduces pressure on the coro pool
1056 # as the chance is very high that the async_pool coro will be back
1057 # in the idle state when cede returns
1058 cede;
1059 }
1060 schedule; # sleep well
1061 }
1062 };
1063 $unblock_scheduler->{desc} = "[unblock_sub scheduler]";
1064
1065 sub unblock_sub(&) {
1066 my $cb = shift;
1067
1068 sub {
1069 unshift @unblock_queue, [$cb, @_];
1070 $unblock_scheduler->ready;
1071 }
1072 }
1073
1074 =item $cb = rouse_cb
1075
1076 Create and return a "rouse callback". That's a code reference that,
1077 when called, will remember a copy of its arguments and notify the owner
1078 coro of the callback.
1079
1080 See the next function.
1081
1082 =item @args = rouse_wait [$cb]
1083
1084 Wait for the specified rouse callback (or the last one that was created in
1085 this coro).
1086
1087 As soon as the callback is invoked (or when the callback was invoked
1088 before C<rouse_wait>), it will return the arguments originally passed to
1089 the rouse callback. In scalar context, that means you get the I<last>
1090 argument, just as if C<rouse_wait> had a C<return ($a1, $a2, $a3...)>
1091 statement at the end.
1092
1093 See the section B<HOW TO WAIT FOR A CALLBACK> for an actual usage example.
1094
1095 =back
1096
1097 =cut
1098
1099 for my $module (qw(Channel RWLock Semaphore SemaphoreSet Signal Specific)) {
1100 my $old = defined &{"Coro::$module\::new"} && \&{"Coro::$module\::new"};
1101
1102 *{"Coro::$module\::new"} = sub {
1103 require "Coro/$module.pm";
1104
1105 # some modules have their new predefined in State.xs, some don't
1106 *{"Coro::$module\::new"} = $old
1107 if $old;
1108
1109 goto &{"Coro::$module\::new"};
1110 };
1111 }
1112
1113 1;
1114
1115 =head1 HOW TO WAIT FOR A CALLBACK
1116
1117 It is very common for a coro to wait for some callback to be
1118 called. This occurs naturally when you use coro in an otherwise
1119 event-based program, or when you use event-based libraries.
1120
1121 These typically register a callback for some event, and call that callback
1122 when the event occurred. In a coro, however, you typically want to
1123 just wait for the event, simplifying things.
1124
1125 For example C<< AnyEvent->child >> registers a callback to be called when
1126 a specific child has exited:
1127
1128 my $child_watcher = AnyEvent->child (pid => $pid, cb => sub { ... });
1129
1130 But from within a coro, you often just want to write this:
1131
1132 my $status = wait_for_child $pid;
1133
1134 Coro offers two functions specifically designed to make this easy,
1135 C<rouse_cb> and C<rouse_wait>.
1136
1137 The first function, C<rouse_cb>, generates and returns a callback that,
1138 when invoked, will save its arguments and notify the coro that
1139 created the callback.
1140
1141 The second function, C<rouse_wait>, waits for the callback to be called
1142 (by calling C<schedule> to go to sleep) and returns the arguments
1143 originally passed to the callback.
1144
1145 Using these functions, it becomes easy to write the C<wait_for_child>
1146 function mentioned above:
1147
1148 sub wait_for_child($) {
1149 my ($pid) = @_;
1150
1151 my $watcher = AnyEvent->child (pid => $pid, cb => rouse_cb);
1152
1153 my ($rpid, $rstatus) = rouse_wait;
1154 $rstatus
1155 }
1156
1157 In the case where C<rouse_cb> and C<rouse_wait> are not flexible enough,
1158 you can roll your own, using C<schedule> and C<ready>:
1159
1160 sub wait_for_child($) {
1161 my ($pid) = @_;
1162
1163 # store the current coro in $current,
1164 # and provide result variables for the closure passed to ->child
1165 my $current = $Coro::current;
1166 my ($done, $rstatus);
1167
1168 # pass a closure to ->child
1169 my $watcher = AnyEvent->child (pid => $pid, cb => sub {
1170 $rstatus = $_[1]; # remember rstatus
1171 $done = 1; # mark $rstatus as valid
1172 $current->ready; # wake up the waiting thread
1173 });
1174
1175 # wait until the closure has been called
1176 schedule while !$done;
1177
1178 $rstatus
1179 }
1180
1181
1182 =head1 BUGS/LIMITATIONS
1183
1184 =over 4
1185
1186 =item fork with pthread backend
1187
1188 When Coro is compiled using the pthread backend (which isn't recommended
1189 but required on many BSDs as their libcs are completely broken), then
1190 coro will not survive a fork. There is no known workaround except to
1191 fix your libc and use a saner backend.
1192
1193 =item perl process emulation ("threads")
1194
1195 This module is not perl-pseudo-thread-safe. You should only ever use this
1196 module from the first thread (this requirement might be removed in the
1197 future to allow per-thread schedulers, but Coro::State does not yet allow
1198 this). I recommend disabling thread support and using processes, as having
1199 the windows process emulation enabled under unix roughly halves perl
1200 performance, even when not used.
1201
1202 Attempts to use threads created in another emulated process will crash
1203 ("cleanly", with a null pointer exception).
1204
1205 =item coro switching is not signal safe
1206
1207 You must not switch to another coro from within a signal handler (only
1208 relevant with %SIG - most event libraries provide safe signals), I<unless>
1209 you are sure you are not interrupting a Coro function.
1210
1211 That means you I<MUST NOT> call any function that might "block" the
1212 current coro - C<cede>, C<schedule>, C<< Coro::Semaphore->down >> or
1213 anything that calls those. Everything else, including calling C<ready>,
1214 works.
1215
1216 =back
1217
1218
1219 =head1 WINDOWS PROCESS EMULATION
1220
1221 A great many people seem to be confused about ithreads (for example, Chip
1222 Salzenberg called me unintelligent, incapable, stupid and gullible,
1223 while in the same mail making rather confused statements about perl
1224 ithreads (for example, that memory or files would be shared), showing his
1225 lack of understanding of this area - if it is hard to understand for Chip,
1226 it is probably not obvious to everybody).
1227
1228 What follows is an ultra-condensed version of my talk about threads in
1229 scripting languages given at the perl workshop 2009:
1230
1231 The so-called "ithreads" were originally implemented for two reasons:
1232 first, to (badly) emulate unix processes on native win32 perls, and
1233 secondly, to replace the older, real thread model ("5.005-threads").
1234
1235 It does that by using threads instead of OS processes. The difference
1236 between processes and threads is that threads share memory (and other
1237 state, such as files) between threads within a single process, while
1238 processes do not share anything (at least not semantically). That
1239 means that modifications done by one thread are seen by others, while
1240 modifications by one process are not seen by other processes.
1241
1242 The "ithreads" work exactly like that: when creating a new ithreads
1243 process, all state is copied (memory is copied physically, files and code
1244 are copied logically). Afterwards, it isolates all modifications. On UNIX,
1245 the same behaviour can be achieved by using operating system processes,
1246 except that UNIX typically uses hardware built into the system to do this
1247 efficiently, while the windows process emulation emulates this hardware in
1248 software (rather efficiently, but of course it is still much slower than
1249 dedicated hardware).
1250
1251 As mentioned before, loading code, modifying code, modifying data
1252 structures and so on is only visible in the ithreads process doing the
1253 modification, not in other ithread processes within the same OS process.
1254
1255 This is why "ithreads" do not implement threads for perl at all, only
1256 processes. What makes it so bad is that on non-windows platforms, you can
1257 actually take advantage of custom hardware for this purpose (as evidenced
1258 by the forks module, which gives you the (i-) threads API, just much
1259 faster).
1260
1261 Sharing data in the i-threads model is done by transferring data
1262 structures between threads using copying semantics, which is very slow -
1263 shared data simply does not exist. Benchmarks using i-threads which are
1264 communication-intensive show extremely bad behaviour with i-threads (in
1265 fact, so bad that Coro, which cannot take direct advantage of multiple
1266 CPUs, is often orders of magnitude faster because it shares data using
1267 real threads, refer to my talk for details).
1268
1269 In summary, i-threads *use* threads to implement processes, while
1270 the compatible forks module *uses* processes to emulate, uhm,
1271 processes. I-threads slow down every perl program when enabled, and
1272 outside of windows, serve no (or little) practical purpose, but
1273 disadvantage every single-threaded Perl program.
1274
1275 This is the reason that I try to avoid the name "ithreads", as it is
1276 misleading: it implies that it implements some kind of thread model for
1277 perl, and prefer the name "windows process emulation", which describes the
1278 actual use and behaviour of it much better.
1279
1280 =head1 SEE ALSO
1281
1282 Event-Loop integration: L<Coro::AnyEvent>, L<Coro::EV>, L<Coro::Event>.
1283
1284 Debugging: L<Coro::Debug>.
1285
1286 Support/Utility: L<Coro::Specific>, L<Coro::Util>.
1287
1288 Locking and IPC: L<Coro::Signal>, L<Coro::Channel>, L<Coro::Semaphore>,
1289 L<Coro::SemaphoreSet>, L<Coro::RWLock>.
1290
1291 I/O and Timers: L<Coro::Timer>, L<Coro::Handle>, L<Coro::Socket>, L<Coro::AIO>.
1292
1293 Compatibility with other modules: L<Coro::LWP> (but see also L<AnyEvent::HTTP> for
1294 a better-working alternative), L<Coro::BDB>, L<Coro::Storable>,
1295 L<Coro::Select>.
1296
1297 XS API: L<Coro::MakeMaker>.
1298
1299 Low level Configuration, Thread Environment, Continuations: L<Coro::State>.
1300
1301 =head1 AUTHOR
1302
1303 Marc Lehmann <schmorp@schmorp.de>
1304 http://home.schmorp.de/
1305
1306 =cut
1307