=head1 Introduction to Coro

This tutorial will introduce you to the main features of the Coro module
family. It first introduces some basic concepts, and later gives a short
overview of the module family.

=head1 What is Coro?

Coro started as a simple module that implemented a specific form of
first class continuations called Coroutines. These basically allow you
to capture the current point of execution and jump to another point,
while allowing you to return at any time, as a kind of non-local jump,
not unlike C's C<setjmp>/C<longjmp>. This is nowadays known as a
L<Coro::State>.

One natural application for these is to include a scheduler, resulting
in cooperative threads, which is the main use case for Coro today.
Still, much of the documentation and custom refers to these threads as
"coroutines" or often just "coros".

A thread is very much like a stripped-down perl interpreter, or a
process: Unlike a full interpreter process, a thread doesn't have its
own variable or code namespaces - everything is shared. That means that
when one thread modifies a variable (or any value, e.g. through a
reference), then other threads immediately see this change when they
look at the same variable or location.

Cooperative means that these threads must cooperate with each other,
when it comes to CPU usage - only one thread ever has the CPU, and if
another thread wants the CPU, the running thread has to give it up. The
latter is either done explicitly, by calling a function to do so, or
implicitly, when waiting on a resource (such as a Semaphore, or the
completion of some I/O request). This threading model is popular in
scripting languages (such as python or ruby), and this implementation is
typically far more efficient than threads implemented in other
languages.
Perl itself uses rather confusing terminology - what perl calls a
"thread" (or "ithread") is actually called a "process" everywhere else:
The so-called "perl threads" are actually artifacts of the unix process
emulation code used on Windows, which is consequently why they are
actually processes and not threads. The biggest difference is that
neither variables (nor code) are shared between processes or ithreads.

=head1 Cooperative Threads

Cooperative threads is what the Coro module gives you. Obviously, you
have to C<use> it first:

   use Coro;

To create a thread, you can use the C<async> function that automatically
gets exported from that module:

   async {
      print "hello\n";
   };

C<async> expects a code block as first argument (in indirect object
notation). You can actually pass it extra arguments, and these will end
up in C<@_> when executing the codeblock, but since it is a closure, you
can also just refer to any lexical variables that are currently visible.

The above lines create a thread, but if you save them in a file and
execute it as a perl program, you will not get any output.

The reason is that, although you created a thread, and the thread is
ready to execute (because C<async> puts it into the so-called I<ready
queue>), it never gets any CPU time to actually execute, as the main
program - which also is a thread almost like any other - never gives up
the CPU but instead exits the whole program, by running off the end of
the file. Since Coro threads are cooperative, the main thread has to
cooperate, and give up the CPU.

To explicitly give up the CPU, use the C<cede> function (which is often
called C<yield> in other thread implementations):

   use Coro;

   async {
      print "hello\n";
   };

   cede;

Running the above prints C<hello> and exits.
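As a small illustration of the extra-arguments feature mentioned above
(this example is not from the original text), arguments given after the
block show up in C<@_> inside the thread:

   use Coro;

   # extra arguments after the block end up in @_
   async {
      my ($greeting, $name) = @_;
      print "$greeting, $name\n";
   } "Hello", "World";

   cede; # give up the CPU so the thread can run and print

Whether you pass arguments or capture lexicals is mostly a matter of
taste; passing arguments makes the dependency explicit.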
Now, this is not very interesting, so let's try a slightly more
interesting program:

   use Coro;

   async {
      print "async 1\n";
      cede;
      print "async 2\n";
   };

   print "main 1\n";
   cede;
   print "main 2\n";
   cede;

Running this program prints:

   main 1
   async 1
   main 2
   async 2

This nicely illustrates the non-local jump ability: the main program
prints the first line, and then yields the CPU to whatever other threads
there are. And there is one other, which runs and prints "async 1", and
itself yields the CPU. Since the only other thread available is the main
program, it continues running and so on.

Let's look at the example in more detail: C<async> first creates a new
thread. All new threads start in a suspended state. To make them run,
they need to be put into the ready queue, which is the second thing that
C<async> does. Each time a thread gives up the CPU, Coro runs a
so-called I<scheduler>. The scheduler selects the next thread from the
ready queue, removes it from the queue, and runs it.

C<cede> also does two things: first it puts the running thread into the
ready queue, and then it jumps into the scheduler. This has the effect
of giving up the CPU, but also ensures that, eventually, the thread gets
run again.

In fact, C<cede> could be implemented like this:

   sub my_cede {
      $Coro::current->ready;
      schedule;
   }

This works because C<$Coro::current> always contains the currently
running thread, and the scheduler itself can be called directly via
C<Coro::schedule>.

What is the effect of just calling C<schedule> without putting the
current thread into the ready queue first? Simple: the scheduler selects
the next ready thread and runs it. And the current thread, as it hasn't
been put into the ready queue, will go to sleep until something wakes it
up. If. Ever.

The following example remembers the current thread in a variable,
creates a thread and then puts the main program to sleep.

The newly created thread uses rand to wake up the main thread by calling
its C<ready> method - or not.
   use Coro;

   my $wakeme = $Coro::current;

   async {
      $wakeme->ready if 0.5 > rand;
   };

   schedule;

Now, when you run it, one of two things happen: Either the C<async>
thread wakes up the main thread again, in which case the program
silently exits, or it doesn't, in which case you get something like
this:

   FATAL: deadlock detected.
         PID SC  RSS USES Description              Where
    31976480 -C  19k    0 [main::]                 [program:9]
    32223768 UC  12k    1                          [Coro.pm:691]
    32225088 --  2068   1 [coro manager]           [Coro.pm:691]
    32225184 N-  216    0 [unblock_sub scheduler]  -

Why is that? Well, when the C<async> thread runs into the end of its
block, it will be terminated (via a call to C<Coro::terminate>) and the
scheduler is called again. Since the C<async> thread hasn't woken up the
main thread, and there aren't any other threads, there is nothing to
wake up, and the program cannot continue. Since there I<are> threads
that I<could> be running (main) but none are I<ready> to do so, Coro
signals a I<deadlock> - no progress is possible. Usually you also get a
listing of all threads, which might help you track down the problem.

However, there is an important case where progress I<is>, in fact,
possible, despite no threads being ready - namely in an event-based
program. In such a program, some threads could wait for I<external>
events, such as a timeout, or some data to arrive on a socket.

Since a deadlock in such a case would not be very useful, there is a
module named L<Coro::AnyEvent> that integrates threads into an event
loop. It configures Coro in a way that, instead of C<die>ing with an
error message, it instead runs the event loop in the hope of receiving
an event that will wake up some thread.

=head2 Semaphores and other locks

Using only C<ready>, C<cede> and C<schedule> to synchronise threads is
difficult, especially if many threads are ready at the same time. Coro
supports a number of primitives to help synchronising threads in easier
ways. The first such primitive is L<Coro::Semaphore>, which implements
counting semaphores (binary semaphores are available as L<Coro::Signal>,
and there are L<Coro::SemaphoreSet> and L<Coro::RWLock> primitives as
well).
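To make the event-loop case above concrete, here is a sketch of the
deadlock example rewritten with an event source, assuming the L<AnyEvent>
module is installed alongside L<Coro::AnyEvent>:

   use Coro;
   use Coro::AnyEvent;
   use AnyEvent;

   my $wakeme = $Coro::current;

   # wake the main thread after one second via an event,
   # instead of from another thread
   my $timer = AnyEvent->timer (after => 1, cb => sub { $wakeme->ready });

   # no thread is ready here, but with Coro::AnyEvent loaded,
   # schedule runs the event loop instead of dying with a deadlock
   schedule;

   print "woken up by the timer\n";

Without the C<use Coro::AnyEvent> line, the same program would abort
with the C<FATAL: deadlock detected> message shown above.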
Counting semaphores, in a sense, store a count of resources. You can
remove/allocate/reserve a resource by calling the C<< ->down >> method,
which decrements the counter, and you can add or free a resource by
calling the C<< ->up >> method, which increments the counter. If the
counter is C<0>, then C<< ->down >> cannot decrement the semaphore - it
is locked - and the thread will wait until a count becomes available
again.

Here is an example:

   use Coro;

   my $sem = new Coro::Semaphore 0; # a locked semaphore

   async {
      print "unlocking semaphore\n";
      $sem->up;
   };

   print "trying to lock semaphore\n";
   $sem->down;
   print "we got it!\n";

This program creates a I<locked> semaphore (a semaphore with count C<0>)
and tries to lock it (by trying to decrement its counter in the C<down>
method). Since the semaphore count is already exhausted, this will block
the main thread until the semaphore becomes available.

This yields the CPU to the only other ready thread in the process, the
one created with C<async>, which unlocks the semaphore (and instantly
terminates itself by returning).

Since the semaphore is now available, the main program locks it and
continues: "we got it!".

Counting semaphores are most often used to lock resources, or to exclude
other threads from accessing or using a resource. For example, consider
a very costly function (that temporarily allocates a lot of ram, for
example). You wouldn't want to have many threads calling this function
at the same time, so you use a semaphore:

   my $lock = new Coro::Semaphore; # unlocked initially - default is 1

   sub costly_function {
      $lock->down; # acquire semaphore

      # do costly operation that blocks

      $lock->up; # unlock it
   }

No matter how many threads call C<costly_function>, only one will run
the body of it, all others will wait in the C<down> call. If you want to
limit the number of concurrent executions to five, you could create the
semaphore with an initial count of C<5>.

Why does the comment mention an "operation that blocks"?
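The five-at-a-time variant mentioned above only changes the initial
count. A small sketch (the function body is a placeholder):

   use Coro;

   my $slots = new Coro::Semaphore 5; # at most five concurrent callers

   sub costly_function {
      $slots->down; # take one of five slots, blocks when all are taken

      # ... costly operation that blocks ...

      $slots->up;   # free the slot again
   }

This pattern is often used to throttle things like parallel downloads
or database connections to a fixed concurrency limit.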
Again, that's because coro's threads are cooperative: unless
C<costly_function> willingly gives up the CPU, other threads of control
will simply not run. This makes locking superfluous in cases where the
function itself never gives up the CPU, but when dealing with the
outside world, this is rare.

Now consider what happens when the code C<die>s after executing C<down>,
but before C<up>. This will leave the semaphore in a locked state, which
often isn't what you want - imagine the caller expecting a failure and
wrapping the call into an C<eval>.

So normally you would want to free the lock again if execution somehow
leaves the function, whether "normally" or via an exception. Here the
C<guard> method proves useful:

   my $lock = new Coro::Semaphore; # unlocked initially

   sub costly_function {
      my $guard = $lock->guard; # acquire guard

      # do costly operation that blocks
   }

The C<guard> method C<down>s the semaphore and returns a so-called guard
object. Nothing happens as long as there are references to it (i.e. it
is in scope somehow), but when all references are gone, for example,
when C<costly_function> returns or throws an exception, it will
automatically call C<up> on the semaphore, no way to forget it. Even
when the thread gets C<cancel>ed by another thread will the guard object
ensure that the lock is freed.

This concludes this introduction to semaphores and locks. Apart from
L<Coro::Semaphore> and L<Coro::Signal>, there is also a reader-writer
lock (L<Coro::RWLock>) and a semaphore set (L<Coro::SemaphoreSet>). All
of these come with their own manpage.

=head2 Channels

Semaphores are fine, but usually you want to communicate by exchanging
data as well. Of course, you can just use some locks, an array of sorts
and use that to communicate, but there is a useful abstraction for
communication between threads: L<Coro::Channel>. Channels are the Coro
equivalent of a unix pipe (and very similar to AmigaOS message ports :)
- you can put stuff into it on one side, and read data from it on the
other.

Here is a simple example that creates a thread and sends numbers to it.
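For a taste of the reader-writer lock, here is a rough sketch, assuming
the C<rdlock>/C<wrlock>/C<unlock> interface of L<Coro::RWLock> (consult
its manpage for the details; the two functions here are made up):

   use Coro;
   use Coro::RWLock;

   my $rwlock = new Coro::RWLock;

   sub read_shared_data {
      $rwlock->rdlock;   # many readers may hold the lock at once
      # ... read the shared data ...
      $rwlock->unlock;
   }

   sub write_shared_data {
      $rwlock->wrlock;   # waits until no readers or writers remain
      # ... update the shared data ...
      $rwlock->unlock;
   }

The point of a reader-writer lock over a plain semaphore is that reads,
which are usually the common case, don't serialise against each other.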
The thread calculates the square of each number and puts that into
another channel, which the main thread reads the result from:

   use Coro;

   my $calculate = new Coro::Channel;
   my $result    = new Coro::Channel;

   async {
      # endless loop
      while () {
         my $num = $calculate->get; # read a number
         $num **= 2; # square it
         $result->put ($num); # put the result into the result queue
      }
   };

   for (1, 2, 5, 10, 77) {
      $calculate->put ($_);
      print "$_ ** 2 = ", $result->get, "\n";
   }

Gives:

   1 ** 2 = 1
   2 ** 2 = 4
   5 ** 2 = 25
   10 ** 2 = 100
   77 ** 2 = 5929

Both C<get> and C<put> methods can block the current thread: C<get>
first checks whether there I<is> some data available, and if not, it
blocks the current thread until some data arrives. C<put> can also
block, as each Channel has a "maximum item capacity", i.e. you cannot
store more than a specific number of items, which can be configured when
the Channel gets created.

In the above example, C<put> never blocks, as the default capacity of a
Channel is very high. So the for loop first puts data into the channel,
then tries to C<get> the result. Since the async thread hasn't put
anything in there yet (on the first iteration it hasn't even run yet),
the result Channel is still empty, so the main thread blocks.

Since the only other runnable/ready thread at this point is the squaring
thread, it will be woken up, will C<get> the number, square it and put
it into the result channel, waking up the main thread again. It will
still continue to run, as waking up other threads just puts them into
the ready queue, nothing less, nothing more.

Only when the async thread tries to C<get> the next number from the
calculate channel will it block (because nothing is there yet) and the
main thread will continue running. And so on.

This illustrates a general principle used by Coro: a thread will
I<block> when it has to. Neither the Coro module itself nor any of its
submodules will ever give up the CPU unless they have to, because they
wait for some event to happen.
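To actually see C<put> block, you can pass a small maximum capacity to
the Channel constructor - a sketch (the capacity of two is arbitrary):

   use Coro;

   my $ch = new Coro::Channel 2; # at most two items may be queued

   async {
      # the third put blocks until the consumer below makes room
      $ch->put ($_) for 1 .. 3;
      print "producer done\n";
   };

   cede; # producer queues 1 and 2, then blocks on the third put

   print $ch->get, "\n" for 1 .. 3; # draining the channel wakes it again

A bounded channel like this provides backpressure: a fast producer is
forced to wait for a slow consumer instead of queueing unbounded data.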
Be careful, however: when multiple threads put numbers into
C<$calculate> and read from C<$result>, they won't know which result is
theirs. The solution for this is to either use a semaphore, or send not
just the number, but also your own private result channel.

=head2 What is mine, what is ours?

What, exactly, constitutes a thread? Obviously it contains the current
point of execution. Not so obviously, it also has to include all lexical
variables, which means that every thread has its own set of lexical
variables.

To see why this is necessary, consider this program:

   use Coro;

   sub printit {
      my ($string) = @_;

      cede;

      print $string;
   }

   async { printit "Hello, " };
   async { printit "World!\n" };

   cede; cede; # do it

The above prints C<Hello, World!>. If C<printit> didn't have its own
per-thread C<$string> variable, it would probably print
C<World!\nWorld!\n>, which is rather unexpected, and would make it very
difficult to make good use of threads.

To make things run smoothly, there are quite a number of other things
that are per-thread:

=over 4

=item $_, @_, $@ and the regex result vars, $&, %+, $1, $2, ...

C<$_> is used much like a local variable, so it gets localised
per-thread. The same is true for regex results (C<$1>, C<$2> and so on).

C<@_> contains the arguments, so like lexicals, it also must be
per-thread.

C<$@> is not obviously required to be per-thread, but it is quite
useful.

=item $/ and the default output file handle

Threads most often block when doing I/O. Since C<$/> is used when
reading lines, it would be very inconvenient if it were a shared
variable, so it is per-thread. The default output handle (see C