…
similar functions, as well as less rarely used ones such as C<mknod>, C<futime>
or C<readlink>.

It also offers wrappers around C<sendfile> (Solaris, Linux, HP-UX and
FreeBSD, with emulation on other platforms) and C<readahead> (Linux, with
emulation elsewhere).

The goal is to enable you to write fully non-blocking programs. For
example, in a game server, you would not want to freeze for a few seconds
just because the server is running a backup and you happen to call
C<readdir>.
…
   {
     loop = EV_DEFAULT;

     ev_idle_init (&repeat_watcher, repeat);
     ev_async_init (&ready_watcher, ready);
     ev_async_start (loop, &ready_watcher);

     eio_init (want_poll, 0);
   }

For most other event loops, you would typically use a pipe - the event
…

Cancel the request (and all its subrequests). If the request is currently
executing it might still continue to execute, and in other cases it might
still take a while till the request is cancelled.

When cancelled, the finish callback will not be invoked.

C<EIO_CANCELLED> is still true for requests that have successfully
executed, as long as C<eio_cancel> was called on them at some point.

=back
…

=over 4

=item eio_mtouch (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Reads (C<flags == 0>) or modifies (C<flags == EIO_MT_MODIFY>) the given
memory area, page-wise, that is, it reads (or reads and writes back) the
first octet of every page that spans the memory area.

This can be used to page in some mmapped file, or dirty some pages. Note
that dirtying is an unlocked read-write access, so races can ensue when
…
request finish on its own.

=item 3) open callback adds more requests

In the open callback, if the open was not successful, copy C<<
req->errorno >> to C<< grp->errorno >> and set C<< grp->result >> to
C<-1> to signal an error.

Otherwise, malloc some memory or so and issue a read request, adding the
read request to the group.

=item 4) continue issuing requests till finished

In the read callback, check for errors and possibly continue with
C<eio_close> or any other eio request in the same way.

As soon as no new requests are added, the group request will finish. Make
sure you I<always> set C<< grp->result >> to some sensible value.

=back

=head4 REQUEST LIMITING
…

#TODO

   void eio_grp_limit (eio_req *grp, int limit);


=head1 LOW LEVEL REQUEST API

#TODO

…
This symbol governs the stack size for each eio thread. Libeio itself
was written to use very little stackspace, but when using C<EIO_CUSTOM>
requests, you might want to increase this.

If this symbol is undefined (the default) then libeio will use its default
stack size (C<sizeof (void *) * 4096> currently). If it is defined, but
C<0>, then the default operating system stack size will be used. In all
other cases, the value must be an expression that evaluates to the desired
stack size.
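For example, to give each eio thread a larger stack (the value below is purely illustrative - pick one that fits your C<EIO_CUSTOM> handlers), you could define the symbol before building F<eio.c>, e.g. via C<CFLAGS> or a config header:

```c
/* define before compiling eio.c, e.g. -DEIO_STACKSIZE='(512 * 1024)' */
#define EIO_STACKSIZE (512 * 1024)   /* 512kb per eio thread */
```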
|
|
=back


=head1 PORTABILITY REQUIREMENTS