1 NAME
2 Coro - the only real threads in perl
3
4 SYNOPSIS
5 use Coro;
6
7 async {
8     # some asynchronous thread of execution
9     print "2\n";
10     cede; # yield back to main
11     print "4\n";
12 };
13 print "1\n";
14 cede; # yield to coro
15 print "3\n";
16 cede; # and again
17
18 # use locking
19 use Coro::Semaphore;
20 my $lock = new Coro::Semaphore;
21 my $locked;
22
23 $lock->down;
24 $locked = 1;
25 $lock->up;
26
27 DESCRIPTION
28 For a tutorial-style introduction, please read the Coro::Intro manpage.
29 This manpage mainly contains reference information.
30
31 This module collection manages continuations in general, most often in
32 the form of cooperative threads (also called coros, or simply "coro" in
33 the documentation). They are similar to kernel threads but don't (in
34 general) run in parallel at the same time even on SMP machines. The
35 specific flavor of thread offered by this module also guarantees you
36 that it will not switch between threads unless necessary, at
37 easily-identified points in your program, so locking and parallel access
38 are rarely an issue, making thread programming much safer and easier
39 than using other thread models.
40
41 Unlike the so-called "Perl threads" (which are not actually real threads
42 but only the windows process emulation (see section of same name for
43 more details) ported to unix, and as such act as processes), Coro
44 provides a full shared address space, which makes communication between
45 threads very easy. And Coro's threads are fast, too: disabling the
46 Windows process emulation code in your perl and using Coro can easily
47 result in a two to four times speed increase for your programs. A
48 parallel matrix multiplication benchmark runs over 300 times faster on a
49 single core than perl's pseudo-threads on a quad core using all four
50 cores.
51
52 Coro achieves that by supporting multiple running interpreters that
53 share data, which is especially useful to code pseudo-parallel processes
54 and for event-based programming, such as multiple HTTP-GET requests
55 running concurrently. See Coro::AnyEvent to learn more on how to
56 integrate Coro into an event-based environment.
57
58 In this module, a thread is defined as "callchain + lexical variables +
59 some package variables + C stack", that is, a thread has its own
60 callchain, its own set of lexicals and its own set of perl's most
61 important global variables (see Coro::State for more configuration and
62 background info).
63
64 See also the "SEE ALSO" section at the end of this document - the Coro
65 module family is quite large.
66
67 GLOBAL VARIABLES
68 $Coro::main
69 This variable stores the Coro object that represents the main
70 program. While you can "ready" it and do most other things you can
71 do to a coro, it is mainly useful to compare against $Coro::current,
72 to see whether you are running in the main program or not.
73
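A minimal sketch of the comparison described above (nothing beyond Coro itself is assumed):

    use Coro;

    async {
        if ($Coro::current == $Coro::main) {
            print "running in the main program\n";
        } else {
            print "running in some other coro\n";
        }
    };

    cede; # let the coro above run and report
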
74 $Coro::current
75 The Coro object representing the current coro (the last coro that
76 the Coro scheduler switched to). The initial value is $Coro::main
77 (of course).
78
79 This variable is strictly *read-only*. You can take copies of the
80 value stored in it and use it as any other Coro object, but you must
81 not otherwise modify the variable itself.
82
83 $Coro::idle
84 This variable is mainly useful to integrate Coro into event loops.
85 It is usually better to rely on Coro::AnyEvent or Coro::EV, as this
86 is pretty low-level functionality.
87
88 This variable stores a Coro object that is put into the ready queue
89 when there are no other ready threads (without invoking any ready
90 hooks).
91
92 The default implementation dies with "FATAL: deadlock detected.",
93 followed by a thread listing, because the program has no other way
94 to continue.
95
96 This hook is overwritten by modules such as "Coro::EV" and
97 "Coro::AnyEvent" to wait on an external event that hopefully wake up
98 a coro so the scheduler can run it.
99
100 See Coro::EV or Coro::AnyEvent for examples of using this technique.
101
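As a rough sketch only (Coro::EV and Coro::AnyEvent already do this properly for you), an idle thread that runs one event-loop iteration whenever nothing else is ready might look like this, assuming the EV module is available:

    use Coro;
    use EV;

    $Coro::idle = new Coro sub {
        while () {
            EV::run EV::RUN_ONCE; # block until at least one event fires
            Coro::schedule;       # hand control to whatever became ready
        }
    };
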
102 SIMPLE CORO CREATION
103 async { ... } [@args...]
104 Create a new coro and return its Coro object (usually unused). The
105 coro will be put into the ready queue, so it will start running
106 automatically on the next scheduler run.
107
108 The first argument is a codeblock/closure that should be executed in
109 the coro. When it returns, the coro is automatically
110 terminated.
111
112 The remaining arguments are passed as arguments to the closure.
113
114 See the "Coro::State::new" constructor for info about the coro
115 environment in which coro are executed.
116
117 Calling "exit" in a coro will do the same as calling exit outside
118 the coro. Likewise, when the coro dies, the program will exit, just
119 as it would in the main program.
120
121 If you do not want that, you can provide a default "die" handler, or
122 simply avoid dying (by use of "eval").
123
124 Example: Create a new coro that just prints its arguments.
125
126 async {
127 print "@_\n";
128 } 1,2,3,4;
129
130 async_pool { ... } [@args...]
131 Similar to "async", but uses a coro pool, so you should not call
132 terminate or join on it (although you are allowed to), and you get a
133 coro that might have executed other code already (which can be good
134 or bad :).
135
136 On the plus side, this function is about twice as fast as creating
137 (and destroying) a completely new coro, so if you need a lot of
138 generic coros in quick succession, use "async_pool", not "async".
139
140 The code block is executed in an "eval" context and a warning will
141 be issued in case of an exception instead of terminating the
142 program, as "async" does. As the coro is being reused, stuff like
143 "on_destroy" will not work in the expected way, unless you call
144 terminate or cancel, which somehow defeats the purpose of pooling
145 (but is fine in the exceptional case).
146
147 The priority will be reset to 0 after each run, tracing will be
148 disabled, the description will be reset and the default output
149 filehandle gets restored, so you can change all these. Otherwise the
150 coro will be re-used "as-is": most notably if you change other
151 per-coro global stuff such as $/ you *must needs* revert that
152 change, which is most simply done by using local as in: "local $/".
153
154 The idle pool size is limited to 8 idle coros (this can be adjusted
155 by changing $Coro::POOL_SIZE), but there can be as many non-idle
156 coros as required.
157
158 If you are concerned about pooled coros growing a lot because a
159 single "async_pool" used a lot of stackspace you can e.g. call
160 "async_pool { terminate }" once per second or so to slowly replenish
161 the pool. In addition to that, when the stack used by a handler
162 grows larger than 32kb (adjustable via $Coro::POOL_RSS) it will also
163 be destroyed.
164
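A minimal usage sketch (the job loop is made up purely for illustration):

    use Coro;

    for my $job (1 .. 100) {
        async_pool {
            my ($id) = @_;
            local $/ = "\n"; # localise per-coro globals, as noted above
            print "job $id done\n";
        } $job;
    }

    cede while Coro::nready; # give the pooled coros a chance to run
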
165 STATIC METHODS
166 Static methods are actually functions that implicitly operate on the
167 current coro.
168
169 schedule
170 Calls the scheduler. The scheduler will find the next coro that is
171 to be run from the ready queue and switch to it. The next coro to
172 be run is simply the one with the highest priority that has waited
173 longest in its ready queue. If there is no coro ready, it will call
174 the $Coro::idle hook.
175
176 Please note that the current coro will *not* be put into the ready
177 queue, so calling this function usually means you will never be
178 called again unless something else (e.g. an event handler) calls
179 "->ready", thus waking you up.
180
181 This makes "schedule" *the* generic method to use to block the
182 current coro and wait for events: first you remember the current
183 coro in a variable, then arrange for some callback of yours to call
184 "->ready" on that once some event happens, and last you call
185 "schedule" to put yourself to sleep. Note that a lot of things can
186 wake your coro up, so you need to check whether the event indeed
187 happened, e.g. by storing the status in a variable.
188
189 See HOW TO WAIT FOR A CALLBACK, below, for some ways to wait for
190 callbacks.
191
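A minimal sketch of that pattern, assuming AnyEvent just to provide a timer callback (plus Coro::AnyEvent, so the event loop runs while the coro sleeps):

    use Coro;
    use AnyEvent;
    use Coro::AnyEvent;

    my $current = $Coro::current;
    my $done;

    my $w = AnyEvent->timer (after => 1, cb => sub {
        $done = 1;       # record that the event really happened
        $current->ready; # wake the waiting coro up
    });

    Coro::schedule until $done; # guard against spurious wakeups
    print "one second has passed\n";
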
192 cede
193 "Cede" to other coros. This function puts the current coro into the
194 ready queue and calls "schedule", which has the effect of giving up
195 the current "timeslice" to other coros of the same or higher
196 priority. Once your coro gets its turn again it will automatically
197 be resumed.
198
199 This function is often called "yield" in other languages.
200
201 Coro::cede_notself
202 Works like cede, but is not exported by default and will cede to
203 *any* coro, regardless of priority. This is useful sometimes to
204 ensure progress is made.
205
206 terminate [arg...]
207 Terminates the current coro with the given status values (see
208 cancel).
209
210 Coro::on_enter BLOCK, Coro::on_leave BLOCK
211 These functions install enter and leave winders in the current scope.
212 The enter block will be executed when on_enter is called and
213 whenever the current coro is re-entered by the scheduler, while the
214 leave block is executed whenever the current coro is blocked by the
215 scheduler, and also when the containing scope is exited (by whatever
216 means, be it exit, die, last etc.).
217
218 *Neither invoking the scheduler, nor exceptions, are allowed within
219 those BLOCKs*. That means: do not even think about calling "die"
220 without an eval, and do not even think of entering the scheduler in
221 any way.
222
223 Since both BLOCKs are tied to the current scope, they will
224 automatically be removed when the current scope exits.
225
226 These functions implement the same concept as "dynamic-wind" in
227 scheme does, and are useful when you want to localise some resource
228 to a specific coro.
229
230 They slow down thread switching considerably for coros that use them
231 (about 40% for a BLOCK with a single assignment, so thread switching
232 is still reasonably fast if the handlers are fast).
233
234 These functions are best understood by an example: The following
235 function will change the current timezone to
236 "Antarctica/South_Pole", which requires a call to "tzset", but by
237 using "on_enter" and "on_leave", which remember/change the current
238 timezone and restore the previous value, respectively, the timezone
239 is only changed for the coro that installed those handlers.
240
241 use POSIX qw(tzset);
242
243 async {
244     my $old_tz; # store outside TZ value here
245
246     Coro::on_enter {
247         $old_tz = $ENV{TZ}; # remember the old value
248
249         $ENV{TZ} = "Antarctica/South_Pole";
250         tzset; # enable new value
251     };
252
253     Coro::on_leave {
254         $ENV{TZ} = $old_tz;
255         tzset; # restore old value
256     };
257
258     # at this place, the timezone is Antarctica/South_Pole,
259     # without disturbing the TZ of any other coro.
260 };
261
262 This can be used to localise just about any resource (locale, uid,
263 current working directory etc.) to a block, despite the existence of
264 other coros.
265
266 Another interesting example implements time-sliced multitasking
267 using interval timers (this could obviously be optimised, but does
268 the job):
269
270 # "timeslice" the given block
271 sub timeslice(&) {
272 use Time::HiRes ();
273
274 Coro::on_enter {
275 # on entering the thread, we set an VTALRM handler to cede
276 $SIG{VTALRM} = sub { cede };
277 # and then start the interval timer
278 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0.01, 0.01;
279 };
280 Coro::on_leave {
281 # on leaving the thread, we stop the interval timer again
282 Time::HiRes::setitimer &Time::HiRes::ITIMER_VIRTUAL, 0, 0;
283 };
284
285 &{+shift};
286 }
287
288 # use like this:
289 timeslice {
290 # The following is an endless loop that would normally
291 # monopolise the process. Since it runs in a timesliced
292 # environment, it will regularly cede to other threads.
293 while () { }
294 };
295
296 killall
297 Kills/terminates/cancels all coros except the currently running one.
298
299 Note that this will try to free some of the main interpreter
300 resources if the calling coro isn't the main coro, but not all of
301 them can be freed, so if a coro that is not the main coro calls this
302 function, there will be some one-time resource leak.
303
304 CORO OBJECT METHODS
305 These are the methods you can call on coro objects (or to create them).
306
307 new Coro \&sub [, @args...]
308 Create a new coro and return it. When the sub returns, the coro
309 automatically terminates as if "terminate" with the returned values
310 were called. To make the coro run you must first put it into the
311 ready queue by calling the ready method.
312
313 See "async" and "Coro::State::new" for additional info about the
314 coro environment.
315
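A minimal sketch:

    use Coro;

    my $coro = new Coro sub {
        print "got arguments: @_\n";
    }, 1, 2, 3;

    $coro->ready; # nothing runs until it enters the ready queue
    cede;         # give it a chance to run
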
316 $success = $coro->ready
317 Put the given coro into the end of its ready queue (there is one
318 queue for each priority) and return true. If the coro is already in
319 the ready queue, do nothing and return false.
320
321 This ensures that the scheduler will resume this coro automatically
322 once all the coro of higher priority and all coro of the same
323 priority that were put into the ready queue earlier have been
324 resumed.
325
326 $coro->suspend
327 Suspends the specified coro. A suspended coro works just like any
328 other coro, except that the scheduler will not select a suspended
329 coro for execution.
330
331 Suspending a coro can be useful when you want to keep the coro from
332 running, but you don't want to destroy it, or when you want to
333 temporarily freeze a coro (e.g. for debugging) to resume it later.
334
335 A scenario for the former would be to suspend all (other) coros
336 after a fork and keep them alive, so their destructors aren't
337 called, but new coros can be created.
338
339 $coro->resume
340 If the specified coro was suspended, it will be resumed. Note that
341 when the coro was in the ready queue when it was suspended, it might
342 have been unreadied by the scheduler, so an activation might have
343 been lost.
344
345 To avoid this, it is best to put a suspended coro into the ready
346 queue unconditionally, as every synchronisation mechanism must
347 protect itself against spurious wakeups, and the ones in the Coro
348 family certainly do that.
349
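A minimal sketch of temporarily freezing a coro:

    use Coro;

    my $worker = async {
        while () {
            print "tick\n";
            cede;
        }
    };

    cede;             # "tick"
    $worker->suspend; # the scheduler now skips $worker
    cede;             # no output
    $worker->resume;
    $worker->ready;   # re-ready unconditionally, as advised above
    cede;             # "tick" again
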
350 $is_ready = $coro->is_ready
351 Returns true iff the Coro object is in the ready queue. Unless the
352 Coro object gets destroyed, it will eventually be scheduled by the
353 scheduler.
354
355 $is_running = $coro->is_running
356 Returns true iff the Coro object is currently running. Only one Coro
357 object can ever be in the running state (but it currently is
358 possible to have multiple running Coro::States).
359
360 $is_suspended = $coro->is_suspended
361 Returns true iff this Coro object has been suspended. Suspended
362 Coros will not ever be scheduled.
363
364 $coro->cancel (arg...)
365 Terminates the given Coro and makes it return the given arguments as
366 status (default: the empty list). Never returns if the Coro is the
367 current Coro.
368
369 $coro->schedule_to
370 Puts the current coro to sleep (like "Coro::schedule"), but instead
371 of continuing with the next coro from the ready queue, always switch
372 to the given coro object (regardless of priority etc.). The
373 readiness state of that coro isn't changed.
374
375 This is an advanced method for special cases - I'd love to hear
376 about any uses for this one.
377
378 $coro->cede_to
379 Like "schedule_to", but puts the current coro into the ready queue.
380 This has the effect of temporarily switching to the given coro, and
381 continuing some time later.
382
383 This is an advanced method for special cases - I'd love to hear
384 about any uses for this one.
385
386 $coro->throw ([$scalar])
387 If $scalar is specified and defined, it will be thrown as an
388 exception inside the coro at the next convenient point in time.
389 Otherwise clears the exception object.
390
391 Coro will check for the exception each time a schedule-like-function
392 returns, i.e. after each "schedule", "cede",
393 "Coro::Semaphore->down", "Coro::Handle->readable" and so on. Most of
394 these functions detect this case and return early in case an
395 exception is pending.
396
397 The exception object will be thrown "as is" with the specified
398 scalar in $@, i.e. if it is a string, no line number or newline will
399 be appended (unlike with "die").
400
401 This can be used as a softer means than "cancel" to ask a coro to
402 end itself, although there is no guarantee that the exception will
403 lead to termination, and if the exception isn't caught it might well
404 end the whole program.
405
406 You might also think of "throw" as being the moral equivalent of
407 "kill"ing a coro with a signal (in this case, a scalar).
408
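A minimal sketch of asking a thread to end itself this way:

    use Coro;

    my $worker = async {
        eval {
            while () { cede } # a busy worker that yields regularly
        };
        print "worker stopped: $@";
    };

    cede;                                 # let the worker start
    $worker->throw ("shutdown please\n");
    cede;                                 # delivered when its "cede" returns
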
409 $coro->join
410 Wait until the coro terminates and return any values given to the
411 "terminate" or "cancel" functions. "join" can be called concurrently
412 from multiple coro, and all will be resumed and given the status
413 return once the $coro terminates.
414
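A minimal sketch:

    use Coro;

    my $coro = async {
        21 * 2 # the return value becomes the termination status
    };

    my ($result) = $coro->join; # blocks until $coro has terminated
    print "result: $result\n";  # 42
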
415 $coro->on_destroy (\&cb)
416 Registers a callback that is called when this coro thread gets
417 destroyed, but before it is joined. The callback gets passed the
418 terminate arguments, if any, and *must not* die, under any
419 circumstances.
420
421 There can be any number of "on_destroy" callbacks per coro.
422
423 $oldprio = $coro->prio ($newprio)
424 Sets (or gets, if the argument is missing) the priority of the coro
425 thread. Higher priority coro get run before lower priority coros.
426 Priorities are small signed integers (currently -4 .. +3), that you
427 can refer to using PRIO_xxx constants (use the import tag :prio to
428 get them):
429
430 PRIO_MAX > PRIO_HIGH > PRIO_NORMAL > PRIO_LOW > PRIO_IDLE > PRIO_MIN
431 3 > 1 > 0 > -1 > -3 > -4
432
433 # set priority to HIGH
434 current->prio (PRIO_HIGH);
435
436 The idle coro thread ($Coro::idle) always has a lower priority than
437 any existing coro.
438
439 Changing the priority of the current coro will take effect
440 immediately, but changing the priority of a coro in the ready queue
441 (but not running) will only take effect after the next schedule (of
442 that coro). This is a bug that will be fixed in some future version.
443
444 $newprio = $coro->nice ($change)
445 Similar to "prio", but subtract the given value from the priority
446 (i.e. higher values mean lower priority, just as in UNIX's nice
447 command).
448
449 $olddesc = $coro->desc ($newdesc)
450 Sets (or gets in case the argument is missing) the description for
451 this coro thread. This is just a free-form string you can associate
452 with a coro.
453
454 This method simply sets the "$coro->{desc}" member to the given
455 string. You can modify this member directly if you wish, and in
456 fact, this is often preferred to indicate major processing states
457 that can then be seen for example in a Coro::Debug session:
458
459 sub my_long_function {
460     local $Coro::current->{desc} = "now in my_long_function";
461     ...
462     $Coro::current->{desc} = "my_long_function: phase 1";
463     ...
464     $Coro::current->{desc} = "my_long_function: phase 2";
465     ...
466 }
467
468 GLOBAL FUNCTIONS
469 Coro::nready
470 Returns the number of coro that are currently in the ready state,
471 i.e. that can be switched to by calling "schedule" directly or
472 indirectly. The value 0 means that the only runnable coro is the
473 currently running one, so "cede" would have no effect, and
474 "schedule" would cause a deadlock unless there is an idle handler
475 that wakes up some coro.
476
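A minimal sketch using the value as a loop condition:

    use Coro;

    # let every currently runnable coro have its turn before continuing
    cede while Coro::nready;
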
477 my $guard = Coro::guard { ... }
478 This function still exists, but is deprecated. Please use the
479 "Guard::guard" function instead.
480
481 unblock_sub { ... }
482 This utility function takes a BLOCK or code reference and "unblocks"
483 it, returning a new coderef. Unblocking means that calling the new
484 coderef will return immediately without blocking, returning nothing,
485 while the original code ref will be called (with parameters) from
486 within another coro.
487
488 The reason this function exists is that many event libraries (such
489 as the venerable Event module) are not thread-safe (a weaker form of
490 reentrancy). This means you must not block within event callbacks,
491 otherwise you might suffer from crashes or worse. The only event
492 library currently known that is safe to use without "unblock_sub" is
493 EV (but you might still run into deadlocks if all event loops are
494 blocked).
495
496 Coro will try to catch you when you block in the event loop
497 ("FATAL:$Coro::IDLE blocked itself"), but this is just best effort
498 and only works when you do not run your own event loop.
499
500 This function allows your callbacks to block by executing them in
501 another coro where it is safe to block. One example where blocking
502 is handy is when you use the Coro::AIO functions to save results to
503 disk, for example.
504
505 In short: simply use "unblock_sub { ... }" instead of "sub { ... }"
506 when creating event callbacks that want to block.
507
508 If your handler does not plan to block (e.g. simply sends a message
509 to another coro, or puts some other coro into the ready queue),
510 there is no reason to use "unblock_sub".
511
512 Note that you also need to use "unblock_sub" for any other callbacks
513 that are indirectly executed by any C-based event loop. For example,
514 when you use a module that uses AnyEvent (and you use
515 Coro::AnyEvent) and it provides callbacks that are the result of
516 some event callback, then you must not block either, or use
517 "unblock_sub".
518
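A minimal sketch, assuming AnyEvent for the event callback and a Coro::Semaphore as the blocking operation:

    use Coro;
    use Coro::Semaphore;
    use AnyEvent;

    my $sem = new Coro::Semaphore 0;

    my $w = AnyEvent->timer (after => 1, cb => unblock_sub {
        $sem->down; # blocking here is safe - we run in our own coro
        print "got the semaphore\n";
    });

    async { $sem->up }; # something else releases the semaphore eventually
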
519 $cb = rouse_cb
520 Create and return a "rouse callback". That's a code reference that,
521 when called, will remember a copy of its arguments and notify the
522 owner coro of the callback.
523
524 See the next function.
525
526 @args = rouse_wait [$cb]
527 Wait for the specified rouse callback (or the last one that was
528 created in this coro).
529
530 As soon as the callback is invoked (or when the callback was invoked
531 before "rouse_wait"), it will return the arguments originally passed
532 to the rouse callback. In scalar context, that means you get the
533 *last* argument, just as if "rouse_wait" had a "return ($a1, $a2,
534 $a3...)" statement at the end.
535
536 See the section HOW TO WAIT FOR A CALLBACK for an actual usage
537 example.
538
539 HOW TO WAIT FOR A CALLBACK
540 It is very common for a coro to wait for some callback to be called.
541 This occurs naturally when you use coro in an otherwise event-based
542 program, or when you use event-based libraries.
543
544 These typically register a callback for some event, and call that
545 callback when the event occurred. In a coro, however, you typically want
546 to just wait for the event, simplifying things.
547
548 For example "AnyEvent->child" registers a callback to be called when a
549 specific child has exited:
550
551 my $child_watcher = AnyEvent->child (pid => $pid, cb => sub { ... });
552
553 But from within a coro, you often just want to write this:
554
555 my $status = wait_for_child $pid;
556
557 Coro offers two functions specifically designed to make this easy,
558 "Coro::rouse_cb" and "Coro::rouse_wait".
559
560 The first function, "rouse_cb", generates and returns a callback that,
561 when invoked, will save its arguments and notify the coro that created
562 the callback.
563
564 The second function, "rouse_wait", waits for the callback to be called
565 (by calling "schedule" to go to sleep) and returns the arguments
566 originally passed to the callback.
567
568 Using these functions, it becomes easy to write the "wait_for_child"
569 function mentioned above:
570
571 sub wait_for_child($) {
572     my ($pid) = @_;
573
574     my $watcher = AnyEvent->child (pid => $pid, cb => Coro::rouse_cb);
575
576     my ($rpid, $rstatus) = Coro::rouse_wait;
577     $rstatus
578 }
579
580 In the case where "rouse_cb" and "rouse_wait" are not flexible enough,
581 you can roll your own, using "schedule":
582
583 sub wait_for_child($) {
584     my ($pid) = @_;
585
586     # store the current coro in $current,
587     # and provide result variables for the closure passed to ->child
588     my $current = $Coro::current;
589     my ($done, $rstatus);
590
591     # pass a closure to ->child
592     my $watcher = AnyEvent->child (pid => $pid, cb => sub {
593         $rstatus = $_[1]; # remember rstatus
594         $done = 1; $current->ready; # mark $rstatus as valid, wake us up
595     });
596
597     # wait until the closure has been called
598     schedule while !$done;
599
600     $rstatus
601 }
602
603 BUGS/LIMITATIONS
604 fork with pthread backend
605 When Coro is compiled using the pthread backend (which isn't
606 recommended but required on many BSDs as their libcs are completely
607 broken), then Coro will not survive a fork. There is no known
608 workaround except to fix your libc and use a saner backend.
609
610 perl process emulation ("threads")
611 This module is not perl-pseudo-thread-safe. You should only ever use
612 this module from the first thread (this requirement might be removed
613 in the future to allow per-thread schedulers, but Coro::State does
614 not yet allow this). I recommend disabling thread support and using
615 processes, as having the windows process emulation enabled under
616 unix roughly halves perl performance, even when not used.
617
618 coro switching is not signal safe
619 You must not switch to another coro from within a signal handler
620 (only relevant with %SIG - most event libraries provide safe
621 signals), *unless* you are sure you are not interrupting a Coro
622 function.
623
624 That means you *MUST NOT* call any function that might "block" the
625 current coro - "cede", "schedule", "Coro::Semaphore->down" or
626 anything that calls those. Everything else, including calling
627 "ready", works.
628
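A minimal sketch of waiting for a signal safely, assuming AnyEvent for its "safe" signal watchers and Coro::AnyEvent so the event loop runs while we block:

    use Coro;
    use AnyEvent;
    use Coro::AnyEvent;

    my $current = $Coro::current;
    my $got_int;

    my $w = AnyEvent->signal (signal => "INT", cb => sub {
        $got_int = 1;
        $current->ready; # calling ->ready is fine, blocking calls are not
    });

    Coro::schedule until $got_int;
    print "got SIGINT\n";
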
629 WINDOWS PROCESS EMULATION
630 A great many people seem to be confused about ithreads (for example,
631 Chip Salzenberg called me unintelligent, incapable, stupid and gullible,
632 while in the same mail making rather confused statements about perl
633 ithreads (for example, that memory or files would be shared), showing
634 his lack of understanding of this area - if it is hard to understand for
635 Chip, it is probably not obvious to everybody).
636
637 What follows is an ultra-condensed version of my talk about threads in
638 scripting languages given at the perl workshop 2009:
639
640 The so-called "ithreads" were originally implemented for two reasons:
641 first, to (badly) emulate unix processes on native win32 perls, and
642 secondly, to replace the older, real thread model ("5.005-threads").
643
644 It does that by using threads instead of OS processes. The difference
645 between processes and threads is that threads share memory (and other
646 state, such as files) between threads within a single process, while
647 processes do not share anything (at least not semantically). That means
648 that modifications done by one thread are seen by others, while
649 modifications by one process are not seen by other processes.
650
651 The "ithreads" work exactly like that: when creating a new ithreads
652 process, all state is copied (memory is copied physically, files and
653 code are copied logically). Afterwards, it isolates all modifications. On
654 UNIX, the same behaviour can be achieved by using operating system
655 processes, except that UNIX typically uses hardware built into the
656 system to do this efficiently, while the windows process emulation
657 emulates this hardware in software (rather efficiently, but of course it
658 is still much slower than dedicated hardware).
659
660 As mentioned before, loading code, modifying code, modifying data
661 structures and so on is only visible in the ithreads process doing the
662 modification, not in other ithread processes within the same OS process.
663
664 This is why "ithreads" do not implement threads for perl at all, only
665 processes. What makes it so bad is that on non-windows platforms, you
666 can actually take advantage of custom hardware for this purpose (as
667 evidenced by the forks module, which gives you the (i-) threads API,
668 just much faster).
669
670 Sharing data in the i-threads model is done by transferring data
671 structures between threads using copying semantics, which is very slow -
672 shared data simply does not exist. Benchmarks using i-threads which are
673 communication-intensive show extremely bad behaviour with i-threads (in
674 fact, so bad that Coro, which cannot take direct advantage of multiple
675 CPUs, is often orders of magnitude faster because it shares data using
676 real threads, refer to my talk for details).
677
678 In summary, i-threads *use* threads to implement processes, while the
679 compatible forks module *uses* processes to emulate, uhm, processes.
680 I-threads slow down every perl program when enabled, and outside of
681 windows, serve no (or little) practical purpose, but disadvantage every
682 single-threaded Perl program.
683
684 This is the reason that I try to avoid the name "ithreads", as it
685 misleadingly implies that it implements some kind of thread model
686 for perl, and prefer the name "windows process emulation", which
687 describes the actual use and behaviour of it much better.
688
689 SEE ALSO
690 Event-Loop integration: Coro::AnyEvent, Coro::EV, Coro::Event.
691
692 Debugging: Coro::Debug.
693
694 Support/Utility: Coro::Specific, Coro::Util.
695
696 Locking and IPC: Coro::Signal, Coro::Channel, Coro::Semaphore,
697 Coro::SemaphoreSet, Coro::RWLock.
698
699 I/O and Timers: Coro::Timer, Coro::Handle, Coro::Socket, Coro::AIO.
700
701 Compatibility with other modules: Coro::LWP (but see also AnyEvent::HTTP
702 for a better-working alternative), Coro::BDB, Coro::Storable,
703 Coro::Select.
704
705 XS API: Coro::MakeMaker.
706
707 Low level Configuration, Thread Environment, Continuations: Coro::State.
708
709 AUTHOR
710 Marc Lehmann <schmorp@schmorp.de>
711 http://home.schmorp.de/
712