…

our $idle;    # idle handler
our $main;    # main coro
our $current; # current coro

our $VERSION = 6.52;

our @EXPORT = qw(async async_pool cede schedule terminate current unblock_sub rouse_cb rouse_wait);
our %EXPORT_TAGS = (
   prio => [qw(PRIO_MAX PRIO_HIGH PRIO_NORMAL PRIO_LOW PRIO_IDLE PRIO_MIN)],
);
…

C<async> does. As the coro is being reused, stuff like C<on_destroy>
will not work in the expected way, unless you call terminate or cancel,
which somehow defeats the purpose of pooling (but is fine in the
exceptional case).

The priority will be reset to C<0> after each run, all C<swap_sv> calls
will be undone, tracing will be disabled, the description will be reset
and the default output filehandle gets restored, so you can change all
these. Otherwise the coro will be re-used "as-is": most notably if you
change other per-coro global stuff such as C<$/> you I<must needs> revert
that change, which is most simply done by using local as in: C<< local $/ >>.

The idle pool size is limited to C<8> idle coros (this can be
adjusted by changing $Coro::POOL_SIZE), but there can be as many non-idle
coros as required.

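
The C<local $/> idiom looks like this in practice (a sketch only - the
file path and the final C<cede> loop are merely illustration):

   use Coro;

   # a pooled thread that temporarily slurps a whole file - "local"
   # restores $/ when the thread body exits, so the reused pool coro
   # does not leak the changed record separator
   async_pool {
      my ($path) = @_;
      local $/;                          # slurp mode, undone on exit
      open my $fh, "<", $path or return;
      print scalar <$fh>;
   } "/etc/hostname";

   cede while Coro::nready; # give the pooled coro a chance to run
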
…

      # at this place, the timezone is Antarctica/South_Pole,
      # without disturbing the TZ of any other coro.
   };

This can be used to localise about any resource (locale, uid, current
working directory etc.) to a block, despite the existence of other
coros.
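
For instance, a sketch that localises the current working directory
using Coro's C<on_enter>/C<on_leave> hooks (the directory names are made
up for illustration):

   use Coro;
   use Cwd ();

   async {
      my $old_cwd;

      Coro::on_enter {
         # switch the process-wide cwd whenever this coro runs
         $old_cwd = Cwd::getcwd;
         chdir "/tmp" or die "chdir: $!";
      };
      Coro::on_leave {
         # and restore it whenever this coro is preempted
         chdir $old_cwd;
      };

      # in this block, the cwd is /tmp, without disturbing
      # the cwd seen by any other coro
   };
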

Another interesting example implements time-sliced multitasking using
interval timers (this could obviously be optimised, but does the job):

…
=item $state->is_new

Returns true iff this Coro object is "new", i.e. has never been run
yet. Those states basically consist of only the code reference to call
and the arguments, but consume very little other resources. New states
will automatically get assigned a perl interpreter when they are
transferred to.

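
A minimal illustration (assuming no other coros are ready):

   use Coro;

   my $coro = new Coro sub { print "hello\n" };
   print $coro->is_new ? "new\n" : "started\n"; # never run yet

   $coro->ready; # put it into the ready queue
   cede;         # run it - prints "hello"
   print $coro->is_new ? "new\n" : "started\n"; # no longer new
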
=item $state->is_zombie

Returns true iff the Coro object has been cancelled, i.e.
its resources freed because they were C<cancel>'ed, C<terminate>'d,
…

=item $is_suspended = $coro->is_suspended

Returns true iff this Coro object has been suspended. Suspended Coros will
not ever be scheduled.
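
For example (a sketch - whether the coro had a chance to run before
being suspended does not matter here):

   use Coro;

   my $coro = async { print "running\n" };

   $coro->suspend;  # from now on it will not be scheduled
   cede;            # nothing happens, the coro is suspended
   print "still suspended\n" if $coro->is_suspended;

   $coro->resume;   # make it schedulable again
   cede;            # now "running" is printed
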

=item $coro->cancel ($arg...)

Terminate the given Coro thread and make it return the given arguments as
status (default: an empty list). Never returns if the Coro is the
current Coro.

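
For example (a sketch - the thread simply blocks in C<schedule> until it
is cancelled):

   use Coro;

   my $coro = async {
      schedule; # block until cancelled
      # never reached
   };

   cede;                      # let the thread start and block
   $coro->cancel ("stopped"); # force-terminate it
   print +($coro->join)[0];   # join returns the status - "stopped"
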
This is a rather brutal way to free a coro, with some limitations - if
the thread is inside a C callback that doesn't expect to be canceled,
…
context and can block if they wish. The downside is that there is no
guarantee that the thread can be cancelled when you call this method, and
therefore, it might fail. It is also considerably slower than C<cancel> or
C<terminate>.

A thread is in a safe-cancellable state if it either has never been run
yet, has already been canceled/terminated or otherwise destroyed, or has
no C context attached and is inside an SLF function.

The first two states are trivial - a thread that has not started or has
already finished is safe to cancel.

The last state basically means that the thread isn't currently inside a
perl callback called from some C function (usually via some XS modules)
and isn't currently executing inside some C function itself (via Coro's XS
API).

This call returns true when it could cancel the thread, or croaks with an
…
It is very common for a coro to wait for some callback to be
called. This occurs naturally when you use coro in an otherwise
event-based program, or when you use event-based libraries.

These typically register a callback for some event, and call that callback
when the event occurred. In a coro, however, you typically want to
just wait for the event, simplifying things.

For example C<< AnyEvent->child >> registers a callback to be called when
a specific child has exited:

…
processes. What makes it so bad is that on non-windows platforms, you can
actually take advantage of custom hardware for this purpose (as evidenced
by the forks module, which gives you the (i-) threads API, just much
faster).

Sharing data in the i-threads model is done by transferring data
structures between threads using copying semantics, which is very slow -
shared data simply does not exist. Benchmarks using i-threads which are
communication-intensive show extremely bad behaviour with i-threads (in
fact, so bad that Coro, which cannot take direct advantage of multiple
CPUs, is often orders of magnitude faster because it shares data using