…
similar functions, as well as less commonly used ones such as C<mknod>,
C<futime> or C<readlink>.

It also offers wrappers around C<sendfile> (Solaris, Linux, HP-UX and
FreeBSD, with emulation on other platforms) and C<readahead> (Linux, with
emulation elsewhere).

The goal is to enable you to write fully non-blocking programs. For
example, in a game server, you would not want to freeze for a few seconds
just because the server is running a backup and you happen to call
C<readdir>.
…
This callback is invoked when libeio detects that all pending requests
have been handled. It is "edge-triggered", that is, it will only be
called once after C<want_poll>. To put it differently, C<want_poll> and
C<done_poll> are invoked in pairs: after C<want_poll> you have to call
C<eio_poll ()> until either C<eio_poll> indicates that everything has been
handled or C<done_poll> has been called, which signals the same - only one
method is needed.

Note that C<eio_poll> might return after C<done_poll> and C<want_poll>
have been called again, so watch out for races in your code.

It is quite common to have an empty C<done_poll> callback and only use
the return value from C<eio_poll>, or, when C<eio_poll> is configured to
handle all outstanding replies, it's enough to call C<eio_poll> once.

As with C<want_poll>, this callback is called while locks are being held,
so you I<must not call any libeio functions from within this callback>.

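The drive loop implied by this contract can be sketched with a toy
stand-in. Nothing below is real libeio - C<fake_poll>, C<pending> and
C<done_poll_called> are invented for illustration - it only mimics the
return convention of C<eio_poll>, which returns C<-1> while more calls
are needed and C<0> once everything has been handled:

```c
#include <assert.h>

/* Toy stand-in, NOT real libeio: fake_poll () mimics the return
   convention of eio_poll () - it returns -1 while more requests are
   pending and 0 once everything has been handled, at which point real
   libeio would invoke the done_poll callback. */
static int pending = 5;          /* pretend five results are queued */
static int done_poll_called;

static int
fake_poll (void)
{
  pending -= 2;                  /* "handle" up to two results per call */

  if (pending <= 0)
    {
      pending = 0;
      done_poll_called = 1;      /* real libeio would call done_poll here */
      return 0;                  /* everything has been handled */
    }

  return -1;                     /* more calls needed */
}

/* the drive loop from the text: poll until everything has been handled */
static void
drain (void)
{
  while (fake_poll () == -1)
    ;
}
```

In real code the loop body would call C<eio_poll ()>, and C<done_poll>
would be the place where your event loop learns that it can stop polling.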
=item int eio_poll ()

This function has to be called whenever there are pending requests that
need finishing. You usually call this after C<want_poll> has indicated
…
  {
    loop = EV_DEFAULT;

    ev_idle_init (&repeat_watcher, repeat);
    ev_async_init (&ready_watcher, ready);
    ev_async_start (loop, &ready_watcher);

    eio_init (want_poll, 0);
  }

For most other event loops, you would typically use a pipe - the event
…

Cancel the request (and all its subrequests). If the request is currently
executing it might still continue to execute, and in other cases it might
still take a while till the request is cancelled.

Even if cancelled, the finish callback will still be invoked - the
callbacks of all cancellable requests need to check whether the request
has been cancelled by calling C<EIO_CANCELLED (req)>:

  static int
  my_eio_cb (eio_req *req)
  {
    if (EIO_CANCELLED (req))
      return 0;

    /* ... handle the result ... */
    return 0;
  }

In addition, cancelled requests will I<either> have C<< req->result >>
set to C<-1> and C<errno> to C<ECANCELED>, or I<otherwise> they were
successfully executed, despite being cancelled (e.g. when they have
already been executed at the time they were cancelled).

C<EIO_CANCELLED> is still true for requests that have successfully
executed, as long as C<eio_cancel> was called on them at some point.

=back
…

=over 4

=item eio_mtouch (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Reads (C<flags == 0>) or modifies (C<flags == EIO_MT_MODIFY>) the given
memory area, page-wise, that is, it reads (or reads and writes back) the
first octet of every page that spans the memory area.

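What "page-wise" means here can be sketched in plain C. This is an
illustrative simplification, not libeio's actual implementation; in real
code the page size would come from C<sysconf (_SC_PAGESIZE)>:

```c
#include <stddef.h>

/* Illustrative sketch only, not libeio's implementation: touch the
   first octet of every page spanning [addr, addr + length).  A modify
   touch reads the octet and writes it back, dirtying the page; a plain
   touch just reads it, paging it in. */
static void
mtouch_sketch (volatile unsigned char *addr, size_t length, int modify,
               size_t pagesize)
{
  volatile unsigned char *end = addr + length;

  for (; addr < end; addr += pagesize)
    if (modify)
      *addr = *addr;   /* read and write back the same value */
    else
      (void)*addr;     /* read only */
}
```

Note that the written value equals the value read, so the memory contents
are unchanged - only the page's dirty state is affected.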
This can be used to page in some mmapped file, or dirty some pages. Note
that dirtying is an unlocked read-write access, so races can ensue when
…
request finish on its own.

=item 3) open callback adds more requests

In the open callback, if the open was not successful, copy C<<
req->errorno >> to C<< grp->errorno >> and set C<< grp->result >> to
C<-1> to signal an error.

Otherwise, malloc some memory or so and issue a read request, adding the
read request to the group.

=item 4) continue issuing requests till finished

In the read callback, check for errors and possibly continue with
C<eio_close> or any other eio request in the same way.

As soon as no new requests are added, the group request will finish. Make
sure you I<always> set C<< grp->result >> to some sensible value.

=back
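The error-propagation logic of step 3 can be sketched as follows. To keep
the example self-contained, the libeio types and calls are replaced by
minimal stand-ins (C<req_sketch>, C<issue_read> - invented names); real
code would use C<eio_req>, check C<< req->result >> in the C<eio_open>
callback, and add an C<eio_read> request to the group with C<eio_grp_add>:

```c
/* Minimal stand-ins so the control flow compiles on its own; these are
   NOT the real libeio types or functions. */
typedef struct { int result; int errorno; } req_sketch;

static int reads_issued;

static void
issue_read (req_sketch *grp)
{
  (void)grp;
  ++reads_issued;   /* real code: eio_grp_add (grp, eio_read (...)) */
}

/* step 3: the open callback either propagates the error to the group
   or chains the next (read) request into it */
static void
on_open (req_sketch *req, req_sketch *grp)
{
  if (req->result < 0)
    {
      grp->errorno = req->errorno;   /* copy the error to the group */
      grp->result  = -1;             /* signal failure */
      return;
    }

  issue_read (grp);                  /* step 4 continues in the read cb */
}
```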

=head4 REQUEST LIMITING
…

#TODO

  void eio_grp_limit (eio_req *grp, int limit);

=back

=head1 LOW LEVEL REQUEST API

#TODO
…
This symbol governs the stack size for each eio thread. Libeio itself
was written to use very little stackspace, but when using C<EIO_CUSTOM>
requests, you might want to increase this.

If this symbol is undefined (the default) then libeio will use its default
stack size (C<sizeof (void *) * 4096> currently). If it is defined, but
C<0>, then the default operating system stack size will be used. In all
other cases, the value must be an expression that evaluates to the desired
stack size.

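For example (the value below is purely illustrative, not a
recommendation):

```c
/* request a fixed 1 MiB stack for each eio thread (illustrative value) */
#define EIO_STACKSIZE (1024 * 1024)
```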
=back

=head1 PORTABILITY REQUIREMENTS