/cvs/Coro/Coro.pm
Revision: 1.284
Committed: Sun Feb 13 04:39:14 2011 UTC (13 years, 3 months ago) by root
Branch: MAIN
CVS Tags: rel-5_36
Changes since 1.283: +12 -10 lines
Log Message:
5.26

1 =head1 NAME
2
3 Coro - the only real threads in perl
4
5 =head1 SYNOPSIS
6
7 use Coro;
8
9 async {
10 # some asynchronous thread of execution
11 print "2\n";
12 cede; # yield back to main
13 print "4\n";
14 };
15 print "1\n";
16 cede; # yield to coro
17 print "3\n";
18 cede; # and again
19
20 # use locking
21 use Coro::Semaphore;
22 my $lock = new Coro::Semaphore;
23 my $locked;
24
25 $lock->down;
26 $locked = 1;
27 $lock->up;
28
29 =head1 DESCRIPTION
30
31 For a tutorial-style introduction, please read the L<Coro::Intro>
32 manpage. This manpage mainly contains reference information.
33
34 This module collection manages continuations in general, most often in
35 the form of cooperative threads (also called coros, or simply "coro"
36 in the documentation). They are similar to kernel threads but don't (in
37 general) run in parallel at the same time even on SMP machines. The
38 specific flavor of thread offered by this module also guarantees you that
39 it will not switch between threads unless necessary, at easily-identified
40 points in your program, so locking and parallel access are rarely an
41 issue, making thread programming much safer and easier than using other
42 thread models.
43
44 Unlike the so-called "Perl threads" (which are not actually real threads
45 but only the windows process emulation (see the section of the same name for more
46 details) ported to unix, and as such act as processes), Coro provides
47 a full shared address space, which makes communication between threads
48 very easy. And Coro's threads are fast, too: disabling the Windows
49 process emulation code in your perl and using Coro can easily result in
50 a two to four times speed increase for your programs. A parallel matrix
51 multiplication benchmark runs over 300 times faster on a single core than
52 perl's pseudo-threads on a quad core using all four cores.
53
54 Coro achieves that by supporting multiple running interpreters that share
55 data, which is especially useful to code pseudo-parallel processes and
56 for event-based programming, such as multiple HTTP-GET requests running
57 concurrently. See L<Coro::AnyEvent> to learn more on how to integrate Coro
58 into an event-based environment.
59
60 In this module, a thread is defined as "callchain + lexical variables +
61 some package variables + C stack", that is, a thread has its own callchain,
62 its own set of lexicals and its own set of perl's most important global
63 variables (see L<Coro::State> for more configuration and background info).
64
65 See also the C<SEE ALSO> section at the end of this document - the Coro
66 module family is quite large.
67
68 =cut
69
70 package Coro;
71
72 use common::sense;
73
74 use Carp ();
75
76 use Guard ();
77
78 use Coro::State;
79
80 use base qw(Coro::State Exporter);
81
82 our $idle; # idle handler
83 our $main; # main coro
84 our $current; # current coro
85
86 our $VERSION = 5.26;
87
88 our @EXPORT = qw(async async_pool cede schedule terminate current unblock_sub rouse_cb rouse_wait);
89 our %EXPORT_TAGS = (
90 prio => [qw(PRIO_MAX PRIO_HIGH PRIO_NORMAL PRIO_LOW PRIO_IDLE PRIO_MIN)],
91 );
92 our @EXPORT_OK = (@{$EXPORT_TAGS{prio}}, qw(nready));
93
94 =head1 GLOBAL VARIABLES
95
96 =over 4
97
98 =item $Coro::main
99
100 This variable stores the Coro object that represents the main
101 program. While you can C<ready> it and do most other things you can do to
102 a coro, it is mainly useful to compare against C<$Coro::current>, to see
103 whether you are running in the main program or not.
104
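For example, a minimal check (just a sketch) might look like this:

   print "running in the main program\n"
      if $Coro::current == $Coro::main;
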
105 =cut
106
107 # $main is now being initialised by Coro::State
108
109 =item $Coro::current
110
111 The Coro object representing the current coro (the last
112 coro that the Coro scheduler switched to). The initial value is
113 C<$Coro::main> (of course).
114
115 This variable is B<strictly> I<read-only>. You can take copies of the
116 value stored in it and use it as any other Coro object, but you must
117 not otherwise modify the variable itself.
118
119 =cut
120
121 sub current() { $current } # [DEPRECATED]
122
123 =item $Coro::idle
124
125 This variable is mainly useful to integrate Coro into event loops. It is
126 usually better to rely on L<Coro::AnyEvent> or L<Coro::EV>, as this is
127 pretty low-level functionality.
128
129 This variable stores a Coro object that is put into the ready queue when
130 there are no other ready threads (without invoking any ready hooks).
131
132 The default implementation dies with "FATAL: deadlock detected.", followed
133 by a thread listing, because the program has no other way to continue.
134
135 This hook is overwritten by modules such as C<Coro::EV> and
136 C<Coro::AnyEvent> to wait on an external event that will hopefully wake up a
137 coro so the scheduler can run it.
138
139 See L<Coro::EV> or L<Coro::AnyEvent> for examples of using this technique.
140
141 =cut
142
143 # ||= because other modules could have provided their own by now
144 $idle ||= new Coro sub {
145 require Coro::Debug;
146 die "FATAL: deadlock detected.\n"
147 . Coro::Debug::ps_listing ();
148 };
149
150 # this coro is necessary because a coro
151 # cannot destroy itself.
152 our @destroy;
153 our $manager;
154
155 $manager = new Coro sub {
156 while () {
157 Coro::State::cancel shift @destroy
158 while @destroy;
159
160 &schedule;
161 }
162 };
163 $manager->{desc} = "[coro manager]";
164 $manager->prio (PRIO_MAX);
165
166 =back
167
168 =head1 SIMPLE CORO CREATION
169
170 =over 4
171
172 =item async { ... } [@args...]
173
174 Create a new coro and return its Coro object (usually
175 unused). The coro will be put into the ready queue, so
176 it will start running automatically on the next scheduler run.
177
178 The first argument is a codeblock/closure that should be executed in the
179 coro. When it returns, the coro is automatically
180 terminated.
181
182 The remaining arguments are passed as arguments to the closure.
183
184 See the C<Coro::State::new> constructor for info about the coro
185 environment in which coros are executed.
186
187 Calling C<exit> in a coro will do the same as calling exit outside
188 the coro. Likewise, when the coro dies, the program will exit,
189 just as it would in the main program.
190
191 If you do not want that, you can provide a default C<die> handler, or
192 simply avoid dying (by use of C<eval>).
193
194 Example: Create a new coro that just prints its arguments.
195
196 async {
197 print "@_\n";
198 } 1,2,3,4;
199
200 =item async_pool { ... } [@args...]
201
202 Similar to C<async>, but uses a coro pool, so you should not call
203 terminate or join on it (although you are allowed to), and you get a
204 coro that might have executed other code already (which can be good
205 or bad :).
206
207 On the plus side, this function is about twice as fast as creating (and
208 destroying) a completely new coro, so if you need a lot of generic
209 coros in quick succession, use C<async_pool>, not C<async>.
210
211 The code block is executed in an C<eval> context and a warning will be
212 issued in case of an exception instead of terminating the program, as
213 C<async> does. As the coro is being reused, stuff like C<on_destroy>
214 will not work in the expected way, unless you call terminate or cancel,
215 which somehow defeats the purpose of pooling (but is fine in the
216 exceptional case).
217
218 The priority will be reset to C<0> after each run, tracing will be
219 disabled, the description will be reset and the default output filehandle
220 gets restored, so you can change all these. Otherwise the coro will
221 be re-used "as-is": most notably if you change other per-coro global
222 stuff such as C<$/> you I<must needs> revert that change, which is most
223 simply done by using local as in: C<< local $/ >>.
224
225 The idle pool size is limited to C<8> idle coros (this can be
226 adjusted by changing $Coro::POOL_SIZE), but there can be as many non-idle
227 coros as required.
228
229 If you are concerned about pooled coros growing a lot because a
230 single C<async_pool> used a lot of stackspace you can e.g. call C<async_pool
231 { terminate }> once per second or so to slowly replenish the pool. In
232 addition to that, when the stack used by a handler grows larger than 32kb
233 (adjustable via $Coro::POOL_RSS) it will also be destroyed.
234
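Example: a sketch of running many small jobs through the pool (the job
numbers are purely illustrative).

   for my $job (1 .. 100) {
      async_pool {
         my ($job) = @_;
         # some small, generic piece of work
         print "job $job done\n";
      } $job;
   }
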
235 =cut
236
237 our $POOL_SIZE = 8;
238 our $POOL_RSS = 32 * 1024;
239 our @async_pool;
240
241 sub pool_handler {
242 while () {
243 eval {
244 &{&_pool_handler} while 1;
245 };
246
247 warn $@ if $@;
248 }
249 }
250
251 =back
252
253 =head1 STATIC METHODS
254
255 Static methods are actually functions that implicitly operate on the
256 current coro.
257
258 =over 4
259
260 =item schedule
261
262 Calls the scheduler. The scheduler will find the next coro that is
263 to be run from the ready queue and switch to it. The next coro
264 to be run is simply the one with the highest priority that is longest
265 in its ready queue. If there is no coro ready, it will call the
266 C<$Coro::idle> hook.
267
268 Please note that the current coro will I<not> be put into the ready
269 queue, so calling this function usually means you will never be called
270 again unless something else (e.g. an event handler) calls C<< ->ready >>,
271 thus waking you up.
272
273 This makes C<schedule> I<the> generic method to use to block the current
274 coro and wait for events: first you remember the current coro in
275 a variable, then arrange for some callback of yours to call C<< ->ready
276 >> on that once some event happens, and last you call C<schedule> to put
277 yourself to sleep. Note that a lot of things can wake your coro up,
278 so you need to check whether the event indeed happened, e.g. by storing the
279 status in a variable.
280
281 See B<HOW TO WAIT FOR A CALLBACK>, below, for some ways to wait for callbacks.
282
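A stripped-down sketch of that pattern follows; C<register_some_callback>
is only a stand-in for whatever callback-registration API you actually use.

   my $current = $Coro::current;
   my ($done, $result);

   register_some_callback (sub {
      $result = $_[0];
      $done   = 1;
      $current->ready; # wake the waiting coro up
   });

   # wake-ups can be spurious, so re-check the condition
   schedule until $done;

   # $result is valid here
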
283 =item cede
284
285 "Cede" to other coros. This function puts the current coro into
286 the ready queue and calls C<schedule>, which has the effect of giving
287 up the current "timeslice" to other coros of the same or higher
288 priority. Once your coro gets its turn again it will automatically be
289 resumed.
290
291 This function is often called C<yield> in other languages.
292
293 =item Coro::cede_notself
294
295 Works like cede, but is not exported by default and will cede to I<any>
296 coro, regardless of priority. This is useful sometimes to ensure
297 progress is made.
298
299 =item terminate [arg...]
300
301 Terminates the current coro with the given status values (see L<cancel>).
302
303 =item Coro::on_enter BLOCK, Coro::on_leave BLOCK
304
305 These functions install enter and leave winders in the current scope. The
306 enter block will be executed when on_enter is called and whenever the
307 current coro is re-entered by the scheduler, while the leave block is
308 executed whenever the current coro is blocked by the scheduler, and
309 also when the containing scope is exited (by whatever means, be it exit,
310 die, last etc.).
311
312 I<Neither invoking the scheduler, nor exceptions, are allowed within those
313 BLOCKs>. That means: do not even think about calling C<die> without an
314 eval, and do not even think of entering the scheduler in any way.
315
316 Since both BLOCKs are tied to the current scope, they will automatically
317 be removed when the current scope exits.
318
319 These functions implement the same concept as C<dynamic-wind> in scheme
320 does, and are useful when you want to localise some resource to a specific
321 coro.
322
323 They slow down thread switching considerably for coros that use them
324 (about 40% for a BLOCK with a single assignment, so thread switching is
325 still reasonably fast if the handlers are fast).
326
327 These functions are best understood by an example: The following function
328 will change the current timezone to "Antarctica/South_Pole", which
329 requires a call to C<tzset>, but by using C<on_enter> and C<on_leave>,
330 which remember/change the current timezone and restore the previous
331 value, respectively, the timezone is only changed for the coro that
332 installed those handlers.
333
334 use POSIX qw(tzset);
335
336 async {
337 my $old_tz; # store outside TZ value here
338
339 Coro::on_enter {
340 $old_tz = $ENV{TZ}; # remember the old value
341
342 $ENV{TZ} = "Antarctica/South_Pole";
343 tzset; # enable new value
344 };
345
346 Coro::on_leave {
347 $ENV{TZ} = $old_tz;
348 tzset; # restore old value
349 };
350
351 # at this place, the timezone is Antarctica/South_Pole,
352 # without disturbing the TZ of any other coro.
353 };
354
355 This can be used to localise just about any resource (locale, uid, current
356 working directory etc.) to a block, despite the existence of other
357 coros.
358
359 Another interesting example implements time-sliced multitasking using
360 interval timers (this could obviously be optimised, but does the job):
361
362 # "timeslice" the given block
363 sub timeslice(&) {
364 use Time::HiRes ();
365
366 Coro::on_enter {
367 # on entering the thread, we set a VTALRM handler to cede
368 $SIG{VTALRM} = sub { cede };
369 # and then start the interval timer
370 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0.01, 0.01;
371 };
372 Coro::on_leave {
373 # on leaving the thread, we stop the interval timer again
374 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0, 0;
375 };
376
377 &{+shift};
378 }
379
380 # use like this:
381 timeslice {
382 # The following is an endless loop that would normally
383 # monopolise the process. Since it runs in a timesliced
384 # environment, it will regularly cede to other threads.
385 while () { }
386 };
387
388
389 =item killall
390
391 Kills/terminates/cancels all coros except the currently running one.
392
393 Note that while this will try to free some of the main interpreter
394 resources if the calling coro isn't the main coro, not all of them
395 can be freed, so if a coro that is not the main coro
396 calls this function, there will be some one-time resource leak.
397
398 =cut
399
400 sub killall {
401 for (Coro::State::list) {
402 $_->cancel
403 if $_ != $current && UNIVERSAL::isa $_, "Coro";
404 }
405 }
406
407 =back
408
409 =head1 CORO OBJECT METHODS
410
411 These are the methods you can call on coro objects (or to create
412 them).
413
414 =over 4
415
416 =item new Coro \&sub [, @args...]
417
418 Create a new coro and return it. When the sub returns, the coro
419 automatically terminates as if C<terminate> with the returned values were
420 called. To make the coro run you must first put it into the ready
421 queue by calling the ready method.
422
423 See C<async> and C<Coro::State::new> for additional info about the
424 coro environment.
425
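Example: a minimal sketch of creating and starting a coro this way.

   my $coro = Coro->new (sub {
      print "got args: @_\n";
   }, 1, 2, 3);

   $coro->ready; # make it eligible for scheduling
   cede;         # give it a chance to run
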
426 =cut
427
428 sub _coro_run {
429 terminate &{+shift};
430 }
431
432 =item $success = $coro->ready
433
434 Put the given coro into the end of its ready queue (there is one
435 queue for each priority) and return true. If the coro is already in
436 the ready queue, do nothing and return false.
437
438 This ensures that the scheduler will resume this coro automatically
439 once all the coros of higher priority and all coros of the same
440 priority that were put into the ready queue earlier have been resumed.
441
442 =item $coro->suspend
443
444 Suspends the specified coro. A suspended coro works just like any other
445 coro, except that the scheduler will not select a suspended coro for
446 execution.
447
448 Suspending a coro can be useful when you want to keep the coro from
449 running, but you don't want to destroy it, or when you want to temporarily
450 freeze a coro (e.g. for debugging) to resume it later.
451
452 A scenario for the former would be to suspend all (other) coros after a
453 fork and keep them alive, so their destructors aren't called, but new
454 coros can be created.
455
456 =item $coro->resume
457
458 If the specified coro was suspended, it will be resumed. Note that if
459 the coro was in the ready queue when it was suspended, it might have been
460 unreadied by the scheduler, so an activation might have been lost.
461
462 To avoid this, it is best to put a suspended coro into the ready queue
463 unconditionally, as every synchronisation mechanism must protect itself
464 against spurious wakeups, and the ones in the Coro family certainly do
465 that.
466
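A sketch of that approach (freezing a thread temporarily, then waking it
up unconditionally):

   $coro->suspend; # e.g. freeze it for debugging
   # ... later ...
   $coro->resume;
   $coro->ready;   # re-queue unconditionally - spurious wakeups are harmless
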
467 =item $is_ready = $coro->is_ready
468
469 Returns true iff the Coro object is in the ready queue. Unless the Coro
470 object gets destroyed, it will eventually be scheduled by the scheduler.
471
472 =item $is_running = $coro->is_running
473
474 Returns true iff the Coro object is currently running. Only one Coro object
475 can ever be in the running state (but it currently is possible to have
476 multiple running Coro::States).
477
478 =item $is_suspended = $coro->is_suspended
479
480 Returns true iff this Coro object has been suspended. Suspended Coros will
481 not ever be scheduled.
482
483 =item $coro->cancel (arg...)
484
485 Terminates the given Coro and makes it return the given arguments as
486 status (default: the empty list). Never returns if the Coro is the
487 current Coro.
488
489 =cut
490
491 sub cancel {
492 my $self = shift;
493
494 if ($current == $self) {
495 terminate @_;
496 } else {
497 $self->{_status} = [@_];
498 Coro::State::cancel $self;
499 }
500 }
501
502 =item $coro->schedule_to
503
504 Puts the current coro to sleep (like C<Coro::schedule>), but instead
505 of continuing with the next coro from the ready queue, always switch to
506 the given coro object (regardless of priority etc.). The readiness
507 state of that coro isn't changed.
508
509 This is an advanced method for special cases - I'd love to hear about any
510 uses for this one.
511
512 =item $coro->cede_to
513
514 Like C<schedule_to>, but puts the current coro into the ready
515 queue. This has the effect of temporarily switching to the given
516 coro, and continuing some time later.
517
518 This is an advanced method for special cases - I'd love to hear about any
519 uses for this one.
520
521 =item $coro->throw ([$scalar])
522
523 If C<$scalar> is specified and defined, it will be thrown as an exception
524 inside the coro at the next convenient point in time. Otherwise
525 clears the exception object.
526
527 Coro will check for the exception each time a schedule-like-function
528 returns, i.e. after each C<schedule>, C<cede>, C<< Coro::Semaphore->down
529 >>, C<< Coro::Handle->readable >> and so on. Most of these functions
530 detect this case and return early in case an exception is pending.
531
532 The exception object will be thrown "as is" with the specified scalar in
533 C<$@>, i.e. if it is a string, no line number or newline will be appended
534 (unlike with C<die>).
535
536 This can be used as a softer means than C<cancel> to ask a coro to
537 end itself, although there is no guarantee that the exception will lead to
538 termination, and if the exception isn't caught it might well end the whole
539 program.
540
541 You might also think of C<throw> as being the moral equivalent of
542 C<kill>ing a coro with a signal (in this case, a scalar).
543
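Example: a sketch of asking a worker thread to stop via C<throw>.

   my $worker = async {
      eval {
         while () {
            # ... do a chunk of work ...
            cede;
         }
      };
      warn "worker stopped: $@" if $@;
   };

   cede;                                 # let the worker run a bit
   $worker->throw ("shutdown please\n");
   cede;                                 # exception is thrown when it next runs
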
544 =item $coro->join
545
546 Wait until the coro terminates and return any values given to the
547 C<terminate> or C<cancel> functions. C<join> can be called concurrently
548 from multiple coros, and all will be resumed and given the status
549 return values once the C<$coro> terminates.
550
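Example (a sketch):

   my $worker = async {
      # ... compute something ...
      1 + 1
   };

   my $result = $worker->join; # blocks until $worker has terminated
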
551 =cut
552
553 sub join {
554 my $self = shift;
555
556 unless ($self->{_status}) {
557 my $current = $current;
558
559 push @{$self->{_on_destroy}}, sub {
560 $current->ready;
561 undef $current;
562 };
563
564 &schedule while $current;
565 }
566
567 wantarray ? @{$self->{_status}} : $self->{_status}[0];
568 }
569
570 =item $coro->on_destroy (\&cb)
571
572 Registers a callback that is called when this coro thread gets destroyed,
573 but before it is joined. The callback gets passed the terminate arguments,
574 if any, and I<must not> die, under any circumstances.
575
576 There can be any number of C<on_destroy> callbacks per coro.
577
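Example (a sketch):

   my $worker = async { 42 };

   $worker->on_destroy (sub {
      print "worker finished with status: @_\n";
   });
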
578 =cut
579
580 sub on_destroy {
581 my ($self, $cb) = @_;
582
583 push @{ $self->{_on_destroy} }, $cb;
584 }
585
586 =item $oldprio = $coro->prio ($newprio)
587
588 Sets (or gets, if the argument is missing) the priority of the
589 coro thread. Higher priority coros get run before lower priority
590 coros. Priorities are small signed integers (currently -4 .. +3),
591 that you can refer to using PRIO_xxx constants (use the import tag :prio
592 to get them):
593
594 PRIO_MAX > PRIO_HIGH > PRIO_NORMAL > PRIO_LOW > PRIO_IDLE > PRIO_MIN
595 3 > 1 > 0 > -1 > -3 > -4
596
597 # set priority to HIGH
598 current->prio (PRIO_HIGH);
599
600 The idle coro thread ($Coro::idle) always has a lower priority than any
601 existing coro.
602
603 Changing the priority of the current coro will take effect immediately,
604 but changing the priority of a coro in the ready queue (but not running)
605 will only take effect after the next schedule (of that coro). This is a
606 bug that will be fixed in some future version.
607
608 =item $newprio = $coro->nice ($change)
609
610 Similar to C<prio>, but subtract the given value from the priority (i.e.
611 higher values mean lower priority, just as in UNIX's nice command).
612
613 =item $olddesc = $coro->desc ($newdesc)
614
615 Sets (or gets in case the argument is missing) the description for this
616 coro thread. This is just a free-form string you can associate with a
617 coro.
618
619 This method simply sets the C<< $coro->{desc} >> member to the given
620 string. You can modify this member directly if you wish, and in fact, this
621 is often preferred to indicate major processing states that can then be
622 seen for example in a L<Coro::Debug> session:
623
624 sub my_long_function {
625 local $Coro::current->{desc} = "now in my_long_function";
626 ...
627 $Coro::current->{desc} = "my_long_function: phase 1";
628 ...
629 $Coro::current->{desc} = "my_long_function: phase 2";
630 ...
631 }
632
633 =cut
634
635 sub desc {
636 my $old = $_[0]{desc};
637 $_[0]{desc} = $_[1] if @_ > 1;
638 $old;
639 }
640
641 sub transfer {
642 require Carp;
643 Carp::croak ("You must not call ->transfer on Coro objects. Use Coro::State objects or the ->schedule_to method. Caught");
644 }
645
646 =back
647
648 =head1 GLOBAL FUNCTIONS
649
650 =over 4
651
652 =item Coro::nready
653
654 Returns the number of coro that are currently in the ready state,
655 i.e. that can be switched to by calling C<schedule> directly or
656 indirectly. The value C<0> means that the only runnable coro is the
657 currently running one, so C<cede> would have no effect, and C<schedule>
658 would cause a deadlock unless there is an idle handler that wakes up some
659 coro.
660
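For example, to give all currently-ready threads a chance to run before
continuing (a sketch only):

   cede while Coro::nready;
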
661 =item my $guard = Coro::guard { ... }
662
663 This function still exists, but is deprecated. Please use the
664 C<Guard::guard> function instead.
665
666 =cut
667
668 BEGIN { *guard = \&Guard::guard }
669
670 =item unblock_sub { ... }
671
672 This utility function takes a BLOCK or code reference and "unblocks" it,
673 returning a new coderef. Unblocking means that calling the new coderef
674 will return immediately without blocking, returning nothing, while the
675 original code ref will be called (with parameters) from within another
676 coro.
677
678 The reason this function exists is that many event libraries (such as
679 the venerable L<Event|Event> module) are not thread-safe (a weaker form
680 of reentrancy). This means you must not block within event callbacks,
681 otherwise you might suffer from crashes or worse. The only event library
682 currently known that is safe to use without C<unblock_sub> is L<EV> (but
683 you might still run into deadlocks if all event loops are blocked).
684
685 Coro will try to catch you when you block in the event loop
686 ("FATAL:$Coro::IDLE blocked itself"), but this is just best effort and
687 only works when you do not run your own event loop.
688
689 This function allows your callbacks to block by executing them in another
690 coro where it is safe to block. One example where blocking is handy
691 is when you use the L<Coro::AIO|Coro::AIO> functions to save results to
692 disk.
693
694 In short: simply use C<unblock_sub { ... }> instead of C<sub { ... }> when
695 creating event callbacks that want to block.
696
697 If your handler does not plan to block (e.g. simply sends a message to
698 another coro, or puts some other coro into the ready queue), there is
699 no reason to use C<unblock_sub>.
700
701 Note that you also need to use C<unblock_sub> for any other callbacks that
702 are indirectly executed by any C-based event loop. For example, when you
703 use a module that uses L<AnyEvent> (and you use L<Coro::AnyEvent>) and it
704 provides callbacks that are the result of some event callback, then you
705 must not block either, or use C<unblock_sub>.
706
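Example: a sketch only, using L<AnyEvent> and L<Coro::AIO> purely for
illustration.

   use AnyEvent;
   use Coro::AIO;

   my $watcher = AnyEvent->timer (after => 1, cb => unblock_sub {
      # blocking calls are fine here - this runs in its own coro
      my $status = aio_stat "/etc/passwd";
      print "stat returned $status\n";
   });
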
707 =cut
708
709 our @unblock_queue;
710
711 # we create a special coro because we want to cede,
712 # to reduce pressure on the coro pool (because most callbacks
713 # return immediately and can be reused) and because we cannot cede
714 # inside an event callback.
715 our $unblock_scheduler = new Coro sub {
716 while () {
717 while (my $cb = pop @unblock_queue) {
718 &async_pool (@$cb);
719
720 # for short-lived callbacks, this reduces pressure on the coro pool
721 # as the chance is very high that the async_pool coro will be back
722 # in the idle state when cede returns
723 cede;
724 }
725 schedule; # sleep well
726 }
727 };
728 $unblock_scheduler->{desc} = "[unblock_sub scheduler]";
729
730 sub unblock_sub(&) {
731 my $cb = shift;
732
733 sub {
734 unshift @unblock_queue, [$cb, @_];
735 $unblock_scheduler->ready;
736 }
737 }
738
739 =item $cb = rouse_cb
740
741 Create and return a "rouse callback". That's a code reference that,
742 when called, will remember a copy of its arguments and notify the owner
743 coro of the callback.
744
745 See the next function.
746
747 =item @args = rouse_wait [$cb]
748
749 Wait for the specified rouse callback (or the last one that was created in
750 this coro).
751
752 As soon as the callback is invoked (or when the callback was invoked
753 before C<rouse_wait>), it will return the arguments originally passed to
754 the rouse callback. In scalar context, that means you get the I<last>
755 argument, just as if C<rouse_wait> had a C<return ($a1, $a2, $a3...)>
756 statement at the end.
757
758 See the section B<HOW TO WAIT FOR A CALLBACK> for an actual usage example.
759
760 =back
761
762 =cut
763
764 for my $module (qw(Channel RWLock Semaphore SemaphoreSet Signal Specific)) {
765 my $old = defined &{"Coro::$module\::new"} && \&{"Coro::$module\::new"};
766
767 *{"Coro::$module\::new"} = sub {
768 require "Coro/$module.pm";
769
770 # some modules have their new predefined in State.xs, some don't
771 *{"Coro::$module\::new"} = $old
772 if $old;
773
774 goto &{"Coro::$module\::new"};
775 };
776 }
777
778 1;
779
780 =head1 HOW TO WAIT FOR A CALLBACK
781
782 It is very common for a coro to wait for some callback to be
783 called. This occurs naturally when you use coro in an otherwise
784 event-based program, or when you use event-based libraries.
785
786 These typically register a callback for some event, and call that callback
787 when the event occurred. In a coro, however, you typically want to
788 just wait for the event, simplifying things.
789
790 For example C<< AnyEvent->child >> registers a callback to be called when
791 a specific child has exited:
792
793 my $child_watcher = AnyEvent->child (pid => $pid, cb => sub { ... });
794
795 But from within a coro, you often just want to write this:
796
797 my $status = wait_for_child $pid;
798
799 Coro offers two functions specifically designed to make this easy,
800 C<Coro::rouse_cb> and C<Coro::rouse_wait>.
801
802 The first function, C<rouse_cb>, generates and returns a callback that,
803 when invoked, will save its arguments and notify the coro that
804 created the callback.
805
806 The second function, C<rouse_wait>, waits for the callback to be called
807 (by calling C<schedule> to go to sleep) and returns the arguments
808 originally passed to the callback.
809
810 Using these functions, it becomes easy to write the C<wait_for_child>
811 function mentioned above:
812
813 sub wait_for_child($) {
814 my ($pid) = @_;
815
816 my $watcher = AnyEvent->child (pid => $pid, cb => Coro::rouse_cb);
817
818 my ($rpid, $rstatus) = Coro::rouse_wait;
819 $rstatus
820 }
821
822 In the case where C<rouse_cb> and C<rouse_wait> are not flexible enough,
823 you can roll your own, using C<schedule>:
824
825 sub wait_for_child($) {
826 my ($pid) = @_;
827
828 # store the current coro in $current,
829 # and provide result variables for the closure passed to ->child
830 my $current = $Coro::current;
831 my ($done, $rstatus);
832
833 # pass a closure to ->child
834 my $watcher = AnyEvent->child (pid => $pid, cb => sub {
835 $rstatus = $_[1]; # remember rstatus
836 $done = 1; # mark $rstatus as valid
837 });
838
839 # wait until the closure has been called
840 schedule while !$done;
841
842 $rstatus
843 }
844
845
846 =head1 BUGS/LIMITATIONS
847
848 =over 4
849
850 =item fork with pthread backend
851
852 When Coro is compiled using the pthread backend (which isn't recommended
853 but required on many BSDs as their libcs are completely broken), then
854 coro will not survive a fork. There is no known workaround except to
855 fix your libc and use a saner backend.
856
857 =item perl process emulation ("threads")
858
859 This module is not perl-pseudo-thread-safe. You should only ever use this
860 module from the first thread (this requirement might be removed in the
861 future to allow per-thread schedulers, but Coro::State does not yet allow
862 this). I recommend disabling thread support and using processes, as having
863 the windows process emulation enabled under unix roughly halves perl
864 performance, even when not used.
865
866 =item coro switching is not signal safe
867
868 You must not switch to another coro from within a signal handler (only
869 relevant with %SIG - most event libraries provide safe signals), I<unless>
870 you are sure you are not interrupting a Coro function.
871
872 That means you I<MUST NOT> call any function that might "block" the
873 current coro - C<cede>, C<schedule>, C<< Coro::Semaphore->down >> or
874 anything that calls those. Everything else, including calling C<ready>,
875 works.
876
877 =back
878
879
880 =head1 WINDOWS PROCESS EMULATION
881
882 A great many people seem to be confused about ithreads (for example, Chip
883 Salzenberg called me unintelligent, incapable, stupid and gullible,
884 while in the same mail making rather confused statements about perl
885 ithreads (for example, that memory or files would be shared), showing his
886 lack of understanding of this area - if it is hard to understand for Chip,
887 it is probably not obvious to everybody).
888
889 What follows is an ultra-condensed version of my talk about threads in
890 scripting languages given at the perl workshop 2009:
891
892 The so-called "ithreads" were originally implemented for two reasons:
893 first, to (badly) emulate unix processes on native win32 perls, and
894 secondly, to replace the older, real thread model ("5.005-threads").
895
896 It does that by using threads instead of OS processes. The difference
897 between processes and threads is that threads share memory (and other
898 state, such as files) between threads within a single process, while
899 processes do not share anything (at least not semantically). That
900 means that modifications done by one thread are seen by others, while
901 modifications by one process are not seen by other processes.
902
903 The "ithreads" work exactly like that: when creating a new ithreads
904 process, all state is copied (memory is copied physically, files and code
905 are copied logically). Afterwards, it isolates all modifications. On UNIX,
906 the same behaviour can be achieved by using operating system processes,
907 except that UNIX typically uses hardware built into the system to do this
908 efficiently, while the windows process emulation emulates this hardware in
909 software (rather efficiently, but of course it is still much slower than
910 dedicated hardware).
911
912 As mentioned before, loading code, modifying code, modifying data
913 structures and so on is only visible in the ithreads process doing the
914 modification, not in other ithread processes within the same OS process.
915
916 This is why "ithreads" do not implement threads for perl at all, only
917 processes. What makes it so bad is that on non-windows platforms, you can
918 actually take advantage of custom hardware for this purpose (as evidenced
919 by the forks module, which gives you the (i-) threads API, just much
920 faster).
921
922 Sharing data in the i-threads model is done by transferring data
923 structures between threads using copying semantics, which is very slow -
924 shared data simply does not exist. Benchmarks using i-threads which are
925 communication-intensive show extremely bad behaviour with i-threads (in
926 fact, so bad that Coro, which cannot take direct advantage of multiple
927 CPUs, is often orders of magnitude faster because it shares data using
928 real threads, refer to my talk for details).
929
930 In summary, i-threads *use* threads to implement processes, while
931 the compatible forks module *uses* processes to emulate, uhm,
932 processes. I-threads slow down every perl program when enabled, and
933 outside of windows, serve no (or little) practical purpose, but
934 disadvantage every single-threaded Perl program.
935
936 This is the reason that I try to avoid the name "ithreads", as it
937 misleadingly implies that it implements some kind of thread model for
938 perl, and prefer the name "windows process emulation", which describes the
939 actual use and behaviour of it much better.
940
941 =head1 SEE ALSO
942
943 Event-Loop integration: L<Coro::AnyEvent>, L<Coro::EV>, L<Coro::Event>.
944
945 Debugging: L<Coro::Debug>.
946
947 Support/Utility: L<Coro::Specific>, L<Coro::Util>.
948
949 Locking and IPC: L<Coro::Signal>, L<Coro::Channel>, L<Coro::Semaphore>,
950 L<Coro::SemaphoreSet>, L<Coro::RWLock>.
951
952 I/O and Timers: L<Coro::Timer>, L<Coro::Handle>, L<Coro::Socket>, L<Coro::AIO>.
953
954 Compatibility with other modules: L<Coro::LWP> (but see also L<AnyEvent::HTTP> for
955 a better-working alternative), L<Coro::BDB>, L<Coro::Storable>,
956 L<Coro::Select>.
957
958 XS API: L<Coro::MakeMaker>.
959
960 Low level Configuration, Thread Environment, Continuations: L<Coro::State>.
961
962 =head1 AUTHOR
963
964 Marc Lehmann <schmorp@schmorp.de>
965 http://home.schmorp.de/
966
967 =cut
968