…

flags. If that is troubling you, check C<ev_backend ()> afterwards).

If you don't know what event loop to use, use the one returned from this
function.

The default loop is the only loop that can handle C<ev_signal> and
C<ev_child> watchers, and to do this, it always registers a handler
for C<SIGCHLD>. If this is a problem for your app you can either
create a dynamic loop with C<ev_loop_new> that doesn't do that, or you
can simply overwrite the C<SIGCHLD> signal handler I<after> calling
C<ev_default_init>.

The flags argument can be used to specify special behaviour or specific
backends to use, and is usually specified as C<0> (or C<EVFLAG_AUTO>).

The following flags are supported:

…

While this backend scales well, it requires one system call per active
file descriptor per loop iteration. For small and medium numbers of file
descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
might perform better.

On the positive side, ignoring the spurious readiness notifications, this
backend actually performed to specification in all tests and is fully
embeddable, which is a rare feat among the OS-specific backends.

=item C<EVBACKEND_ALL>

Try all backends (even potentially broken ones that wouldn't be tried
with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.
…

It is definitely not recommended to use this flag.

=back

If one or more of these are or'ed into the flags value, then only these
backends will be tried (in the reverse order as listed here). If none are
specified, all backends in C<ev_recommended_backends ()> will be tried.

The most typical usage is like this:

   if (!ev_default_loop (0))
     fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
…

Like C<ev_default_destroy>, but destroys an event loop created by an
earlier call to C<ev_loop_new>.

=item ev_default_fork ()

This function sets a flag that causes subsequent C<ev_loop> iterations
to reinitialise the kernel state for backends that have one. Despite the
name, you can call it anytime, but it makes most sense after forking, in
the child process (or both child and parent, but that again makes little
sense). You I<must> call it in the child before using any of the libev
functions, and it will only take effect at the next C<ev_loop> iteration.

On the other hand, you only need to call this function in the child
process if and only if you want to use the event library in the child. If
you just fork+exec, you don't have to call it at all.

The function itself is quite fast and it's usually not a problem to call
it just in case after a fork. To make this easy, the function will fit
quite nicely into a call to C<pthread_atfork>:

   pthread_atfork (0, 0, ev_default_fork);

=item ev_loop_fork (loop)

Like C<ev_default_fork>, but acts on an event loop created by
C<ev_loop_new>. Yes, you have to call this on every allocated event loop
…

usually a better approach for this kind of thing.

Here are the gory details of what C<ev_loop> does:

   - Before the first iteration, call any pending watchers.
   * If EVFLAG_FORKCHECK was used, check for a fork.
   - If a fork was detected, queue and call all fork watchers.
   - Queue and call all prepare watchers.
   - If we have been forked, recreate the kernel state.
   - Update the kernel state with all outstanding changes.
   - Update the "event loop time".
   - Calculate for how long to sleep or block, if at all
     (active idle watchers, EVLOOP_NONBLOCK or not having
     any active watchers at all will result in not sleeping).
   - Sleep if the I/O and timer collect interval say so.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" and do time jump handling.
   - Queue all outstanding timers.
   - Queue all outstanding periodics.
   - If no events are pending now, queue all idle watchers.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
   - If ev_unloop has been called, or EVLOOP_ONESHOT or EVLOOP_NONBLOCK
     were used, or there are no active watchers, return, otherwise
     continue with step *.

Example: Queue some jobs and then loop until no events are outstanding
anymore.

   ... queue jobs here, make sure they register event watchers as long
   ... as they still have work to do (even an idle watcher will do..)
   ev_loop (my_loop, 0);
…

Can be used to make a call to C<ev_loop> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVUNLOOP_ONE>, which will make the innermost C<ev_loop> call return, or
C<EVUNLOOP_ALL>, which will make all nested C<ev_loop> calls return.

This "unloop state" will be cleared when entering C<ev_loop> again.

=item ev_ref (loop)

=item ev_unref (loop)

…
returning, ev_unref() after starting, and ev_ref() before stopping it. For
example, libev itself uses this for its internal signal pipe: It is not
visible to the libev user and should not keep C<ev_loop> from exiting if
no event watchers registered by it are active. It is also an excellent
way to do this for generic recurring timers or from within third-party
libraries. Just remember to I<unref after start> and I<ref before stop>
(but only if the watcher wasn't active before, or was active before,
respectively).

Example: Create a signal watcher, but keep it from keeping C<ev_loop>
running when nothing else is active.

   struct ev_signal exitsig;
…

=item C<EV_FORK>

The event loop has been resumed in the child process after fork (see
C<ev_fork>).

=item C<EV_ASYNC>

The given async watcher has been asynchronously notified (see C<ev_async>).

=item C<EV_ERROR>

An unspecified error has occurred, the watcher has been stopped. This might
happen because the watcher could not be properly started because libev
…

=item int events [read-only]

The events being watched.

=back

=head3 Examples

Example: Call C<stdin_readable_cb> when STDIN_FILENO has become, well,
readable, but only once. Since it is likely line-buffered, you could
attempt to read a whole line in the callback.
…

or C<ev_timer_again> is called and determines the next timeout (if any),
which is also when any modifications are taken into account.

=back

=head3 Examples

Example: Create a timer that fires after 60 seconds.

   static void
   one_minute_cb (struct ev_loop *loop, struct ev_timer *w, int revents)
   {
…

When active, contains the absolute time that the watcher is supposed to
trigger next.

=back

=head3 Examples

Example: Call a callback every hour, or, more precisely, whenever the
system clock is divisible by 3600. The callback invocation times have
potentially a lot of jitter, but good long-term stability.

   static void
…

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_child_init (ev_child *, callback, int pid, int trace)

=item ev_child_set (ev_child *, int pid, int trace)

Configures the watcher to wait for status changes of process C<pid> (or
I<any> process if C<pid> is specified as C<0>). The callback can look
at the C<rstatus> member of the C<ev_child> watcher structure to see
the status word (use the macros from C<sys/wait.h> and see your system's
C<waitpid> documentation). The C<rpid> member contains the pid of the
process causing the status change. C<trace> must be either C<0> (only
activate the watcher when the process terminates) or C<1> (additionally
activate the watcher when the process is stopped or continued).

=item int pid [read-only]

The process id this watcher watches out for, or C<0>, meaning any process id.

…

The process exit/trace status caused by C<rpid> (see your system's
C<waitpid> and C<sys/wait.h> documentation for details).

=back

=head3 Examples

Example: Try to exit cleanly on SIGINT and SIGTERM.

   static void
   sigint_cb (struct ev_loop *loop, struct ev_signal *w, int revents)
…

kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
believe me.

=back

=head3 Examples

Example: Dynamically allocate an C<ev_idle> watcher, start it, and in the
callback, free it. Also, use no error checking, as usual.

   static void
   idle_cb (struct ev_loop *loop, struct ev_idle *w, int revents)
   {
     free (w);
     // now do something you wanted to do when the program has
     // nothing immediate to do anymore.
   }

   struct ev_idle *idle_watcher = malloc (sizeof (struct ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);
…

parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set>
macros, but using them is utterly, utterly and completely pointless.

=back

=head3 Examples

There are a number of principal ways to embed other event loops or modules
into libev. Here are some ideas on how to include libadns into libev
(there is a Perl module named C<EV::ADNS> that does this, which you could
use for an actually working example). Another Perl module named C<EV::Glib>
embeds a Glib main context into libev, and finally, C<Glib::EV> embeds EV
…

portable one.

So when you want to use this feature you will always have to be prepared
that you cannot get an embeddable loop. The recommended way to get around
this is to have a separate variable for your embeddable loop, try to
create it, and if that fails, use the normal loop for everything.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)

=item ev_embed_set (ev_embed *, callback, struct ev_loop *embedded_loop)

Configures the watcher to embed the given loop, which must be
embeddable. If the callback is C<0>, then C<ev_embed_sweep> will be
invoked automatically, otherwise it is the responsibility of the callback
to invoke it (it will continue to be called until the sweep has been done;
if you do not want that, you need to temporarily stop the embed watcher).

=item ev_embed_sweep (loop, ev_embed *)

Make a single, non-blocking sweep over the embedded loop. This works
similarly to C<ev_loop (embedded_loop, EVLOOP_NONBLOCK)>, but in the most
appropriate way for embedded loops.

=item struct ev_loop *other [read-only]

The embedded event loop.

=back

=head3 Examples

Example: Try to get an embeddable event loop and embed it into the default
event loop. If that is not possible, use the default loop. The default
loop is stored in C<loop_hi>, while the embeddable loop is stored in
C<loop_lo> (which is C<loop_hi> in the case no embeddable loop can be
used).

   struct ev_loop *loop_hi = ev_default_init (0);
   struct ev_loop *loop_lo = 0;
   struct ev_embed embed;

…
       ev_embed_start (loop_hi, &embed);
     }
   else
     loop_lo = loop_hi;

Example: Check if kqueue is available but not recommended and create
a kqueue backend for use with sockets (which usually work with any
kqueue implementation). Store the kqueue/socket-only event loop in
C<loop_socket>. (One might optionally use C<EVFLAG_NOENV>, too).

   struct ev_loop *loop = ev_default_init (0);
   struct ev_loop *loop_socket = 0;
   struct ev_embed embed;

   if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE)
     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
       {
         ev_embed_init (&embed, 0, loop_socket);
         ev_embed_start (loop, &embed);
       }

   if (!loop_socket)
     loop_socket = loop;

   // now use loop_socket for all sockets, and loop for everything else

=head2 C<ev_fork> - the audacity to resume the event loop after a fork

Fork watchers are called when a C<fork ()> was detected (usually because

…

believe me.

=back

=head2 C<ev_async> - how to wake up another event loop

In general, you cannot use an C<ev_loop> from multiple threads or other
asynchronous sources such as signal handlers (as opposed to multiple event
loops - those are of course safe to use in different threads).

Sometimes, however, you need to wake up another event loop you do not
control, for example because it belongs to another thread. This is what
C<ev_async> watchers do: as long as the C<ev_async> watcher is active, you
can signal it by calling C<ev_async_send>, which is thread- and signal
safe.

This functionality is very similar to C<ev_signal> watchers, as signals,
too, are asynchronous in nature, and signals, too, will be compressed
(i.e. the number of callback invocations may be less than the number of
C<ev_async_send> calls).

Unlike C<ev_signal> watchers, C<ev_async> works with any event loop, not
just the default loop.

=head3 Queueing

C<ev_async> does not support queueing of data in any way. The reason
is that the author does not know of a simple (or any) algorithm for a
multiple-writer-single-reader queue that works in all cases and doesn't
need elaborate support such as pthreads.

That means that if you want to queue data, you have to provide your own
queue. And here is how you would implement locking:

=over 4

=item queueing from a signal handler context

To implement race-free queueing, you simply add to the queue in the
signal handler but you block the signal handler in the watcher callback.
Here is an example that does that for some fictitious SIGUSR1 handler:

   static ev_async mysig;

   static void
   sigusr1_handler (void)
   {
     sometype data;

     // no locking etc.
     queue_put (data);
     ev_async_send (DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     sometype data;
     sigset_t block, prev;

     sigemptyset (&block);
     sigaddset (&block, SIGUSR1);
     sigprocmask (SIG_BLOCK, &block, &prev);

     while (queue_get (&data))
       process (data);

     if (!sigismember (&prev, SIGUSR1))
       sigprocmask (SIG_UNBLOCK, &block, 0);
   }

(Note: pthreads in theory requires you to use C<pthread_sigmask>
instead of C<sigprocmask> when you use threads, but libev doesn't do it
either...).

=item queueing from a thread context

The strategy for threads is different, as you cannot (easily) block
threads but you can easily preempt them, so to queue safely you need to
employ a traditional mutex lock, such as in this pthread example:

   static ev_async mysig;
   static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

   static void
   otherthread (void)
   {
     // only need to lock the actual queueing operation
     pthread_mutex_lock (&mymutex);
     queue_put (data);
     pthread_mutex_unlock (&mymutex);

     ev_async_send (DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     pthread_mutex_lock (&mymutex);

     while (queue_get (&data))
       process (data);

     pthread_mutex_unlock (&mymutex);
   }

=back

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_async_init (ev_async *, callback)

Initialises and configures the async watcher - it has no parameters of any
kind. There is a C<ev_async_set> macro, but using it is utterly pointless,
believe me.

=item ev_async_send (loop, ev_async *)

Sends/signals/activates the given C<ev_async> watcher, that is, feeds
an C<EV_ASYNC> event on the watcher into the event loop. Unlike
C<ev_feed_event>, this call is safe to do in other threads, signal or
similar contexts (see the discussion of C<EV_ATOMIC_T> in the embedding
section below on what exactly this means).

This call incurs the overhead of a syscall only once per loop iteration,
so while the overhead might be noticeable, it doesn't apply to repeated
calls to C<ev_async_send>.

=back

=head1 OTHER FUNCTIONS

There are some other functions of possible interest. Described. Here. Now.

=over 4
… | |
… | |
Example: Define a class with an IO and idle watcher, start one of them in
the constructor.

   class myclass
   {
     ev::io io; void io_cb (ev::io &w, int revents);
     ev::idle idle; void idle_cb (ev::idle &w, int revents);

     myclass (int fd)
     {
       io  .set <myclass, &myclass::io_cb  > (this);
       idle.set <myclass, &myclass::idle_cb> (this);

       io.start (fd, ev::READ);
     }
   };


=head1 MACRO MAGIC

Libev can be compiled with a variety of options, the most fundamental
… | |
… | |
wants OS handles on win32 (this is the case when the select to
be used is the winsock select). This means that it will call
C<_get_osfhandle> on the fd to convert it to an OS handle. Otherwise,
it is assumed that all these functions actually work on fds, even
on win32. Should not be defined on non-win32 platforms.

=item EV_FD_TO_WIN32_HANDLE

If C<EV_SELECT_IS_WINSOCKET> is enabled, then libev needs a way to map
file descriptors to socket handles. If this symbol is not defined (the
default), libev will call C<_get_osfhandle>, which is usually correct. In
some cases, programs use their own file descriptor management, in which
case they can provide this function to map fds to socket handles.

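For example, a program with its own file descriptor management might map
fds like this (a sketch; C<my_fd_table> and its layout are hypothetical,
not part of libev):

```c
/* Hypothetical: my_fd_table is this program's own fd bookkeeping,
   shown only to illustrate overriding the default _get_osfhandle call. */
#define EV_FD_TO_WIN32_HANDLE(fd) (my_fd_table [fd].socket_handle)
#include "ev.h"
```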
=item EV_USE_POLL

If defined to be C<1>, libev will compile in support for the C<poll>(2)
backend. Otherwise it will be enabled on non-win32 platforms. It
… | |
… | |

If defined to be C<1>, libev will compile in support for the Linux inotify
interface to speed up C<ev_stat> watchers. Its actual availability will
be detected at runtime.

|
|
=item EV_ATOMIC_T

Libev requires an integer type (suitable for storing C<0> or C<1>) whose
access is atomic with respect to other threads or signal contexts. No such
type is easily found in the C language, so you can provide your own type
that you know is safe.

In the absence of this define, libev will use C<sig_atomic_t volatile>
from F<signal.h>, which is usually good enough on most platforms.

=item EV_H

The name of the F<ev.h> header file used to include it. The default if
undefined is C<"ev.h"> in F<event.h>, F<ev.c> and F<ev++.h>. This can be
used to virtually rename the F<ev.h> header file in case of conflicts.

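For example, if a conflicting F<ev.h> exists elsewhere, the embedded copy
could be renamed before compiling F<ev.c> (the F<my-libev/ev.h> path is
purely hypothetical):

```c
/* make ev.c (and event.h/ev++.h) pick up the renamed header */
#define EV_H "my-libev/ev.h"
#include "ev.c"
```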
=item EV_CONFIG_H

If C<EV_STANDALONE> isn't C<1>, this variable can be used to override
F<ev.c>'s idea of where to find the F<config.h> file, similarly to
C<EV_H>, above.

=item EV_EVENT_H

Similarly to C<EV_H>, this macro can be used to override F<event.c>'s idea
of how the F<event.h> header can be found, the default is C<"event.h">.

=item EV_PROTOTYPES

If defined to be C<0>, then F<ev.h> will not define any function
prototypes, but still define all the structs and other symbols. This is
… | |
… | |
defined to be C<0>, then they are not.

=item EV_FORK_ENABLE

If undefined or defined to be C<1>, then fork watchers are supported. If
defined to be C<0>, then they are not.

=item EV_ASYNC_ENABLE

If undefined or defined to be C<1>, then async watchers are supported. If
defined to be C<0>, then they are not.

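Taken together, the C<*_ENABLE> symbols let an embedding application
compile out watcher types it doesn't use, for example (an illustrative
combination, not a recommendation):

```c
/* keep async watchers, drop fork watchers in this embedded build */
#define EV_FORK_ENABLE  0
#define EV_ASYNC_ENABLE 1
#include "ev.c"
```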
=item EV_MINIMAL

If you need to shave off some kilobytes of code at the expense of some
… | |
… | |
watchers becomes O(1) w.r.t. priority handling.

=back

|
|
=head1 Win32 platform limitations and workarounds

Win32 doesn't support any of the standards (e.g. POSIX) that libev
requires, and its I/O model is fundamentally incompatible with the POSIX
model. Libev still offers limited functionality on this platform in
the form of the C<EVBACKEND_SELECT> backend, and only supports socket
descriptors. This only applies when using Win32 natively, not when using
e.g. cygwin.

There is no supported compilation method available on windows except
embedding it into other applications.

Due to the many, low, and arbitrary limits on the win32 platform and the
abysmal performance of winsockets, using a large number of sockets is not
recommended (and not reasonable). If your program needs to use more than
a hundred or so sockets, then likely it needs to use a totally different
implementation for windows, as libev offers the POSIX model, which cannot
be implemented efficiently on windows (microsoft monopoly games).

=over 4

=item The winsocket select function

The winsocket C<select> function doesn't follow POSIX in that it requires
socket I<handles> and not socket I<file descriptors>. This makes select
very inefficient, and also requires a mapping from file descriptors
to socket handles. See the discussion of the C<EV_SELECT_USE_FD_SET>,
C<EV_SELECT_IS_WINSOCKET> and C<EV_FD_TO_WIN32_HANDLE> preprocessor
symbols for more info.

The configuration for a "naked" win32 using the microsoft runtime
libraries and raw winsocket select is:

   #define EV_USE_SELECT 1
   #define EV_SELECT_IS_WINSOCKET 1   /* forces EV_SELECT_USE_FD_SET, too */

Note that winsocket's handling of fd sets is O(n), so you can easily get a
complexity in the O(n²) range when using win32.

=item Limited number of file descriptors

Windows has numerous arbitrary (and low) limits on things. Early versions
of winsocket's select only supported waiting for a max. of C<64> handles
(probably owing to the fact that all windows kernels can only wait for
C<64> things at the same time internally; microsoft recommends spawning a
chain of threads, each waiting for 63 handles and the previous thread).

Newer versions support more handles, but you need to define C<FD_SETSIZE>
to some high number (e.g. C<2048>) before compiling the winsocket select
call (which might be in libev or elsewhere, for example, perl does its own
select emulation on windows).

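As a sketch, the define has to come before the winsock headers are pulled
into whatever translation unit makes the C<select> call (C<2048> is just
an example value):

```c
/* must precede any winsock header in the file that calls select () */
#define FD_SETSIZE 2048
#include <winsock2.h>
```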
|
|
Another limit is the number of file descriptors in the microsoft runtime
libraries, which by default is C<64> (there must be a hidden I<64> fetish
or something like this inside microsoft). You can increase this by calling
C<_setmaxstdio>, which can increase this limit to C<2048> (another
arbitrary limit), but is broken in many versions of the microsoft runtime
libraries.

This might get you to about C<512> or C<2048> sockets (depending on
windows version and/or the phase of the moon). To get more, you need to
wrap all I/O functions and provide your own fd management, but the cost of
calling select (O(n²)) will likely make this unworkable.

=back

=head1 AUTHOR

Marc Lehmann <libev@schmorp.de>.
