…
similar functions, as well as less common ones such as C<mknod>, C<futime>
or C<readlink>.

It also offers wrappers around C<sendfile> (Solaris, Linux, HP-UX and
FreeBSD, with emulation on other platforms) and C<readahead> (Linux, with
emulation elsewhere).

The goal is to enable you to write fully non-blocking programs. For
example, in a game server, you would not want to freeze for a few seconds
just because the server is running a backup and you happen to call
C<readdir>.
…
This callback is invoked when libeio detects that all pending requests
have been handled. It is "edge-triggered", that is, it will only be
called once after C<want_poll>. To put it differently, C<want_poll> and
C<done_poll> are invoked in pairs: after C<want_poll> you have to call
C<eio_poll ()> until either C<eio_poll> indicates that everything has been
handled or C<done_poll> has been called, which signals the same - only one
method is needed.

Note that C<eio_poll> might return after C<done_poll> and C<want_poll>
have been called again, so watch out for races in your code.

It is quite common to have an empty C<done_poll> callback and only use
the return value from C<eio_poll>, or, when C<eio_poll> is configured to
handle all outstanding replies, it's enough to call C<eio_poll> once.

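For example, an event loop might drain completed requests with a helper
like this (a hypothetical sketch; the name C<drain_eio> and how the event
loop learns that C<want_poll> fired are assumptions, not part of libeio):

  /* hypothetical helper, run from the event loop once want_poll fired */
  static void
  drain_eio (void)
  {
    /* eio_poll returns -1 as long as not all pending responses
       could be handled, so simply call it until it says otherwise */
    while (eio_poll () == -1)
      ;
  }
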
As with C<want_poll>, this callback is called while locks are being held,
so you I<must not call any libeio functions from within this callback>.

=item int eio_poll ()

This function has to be called whenever there are pending requests that
need finishing. You usually call this after C<want_poll> has indicated
…
  {
    loop = EV_DEFAULT;

    ev_idle_init (&repeat_watcher, repeat);
    ev_async_init (&ready_watcher, ready);
    ev_async_start (loop, &ready_watcher);

    eio_init (want_poll, 0);
  }

For most other event loops, you would typically use a pipe - the event
…

The C<void *data> member simply stores the value of the C<data> argument.

=back

Members not explicitly described as accessible must not be
accessed. Specifically, there is no guarantee that any members will still
have the value they had when the request was submitted.

The return value of the callback is normally C<0>, which tells libeio to
continue normally. If a callback returns a nonzero value, libeio will
stop processing results (in C<eio_poll>) and will return the value to its
caller.

Memory areas passed to libeio wrappers must stay valid as long as a
request executes, with the exception of paths, which are being copied
internally. Any memory libeio itself allocates will be freed after the
finish callback has been called. If you want to manage all memory passed
to libeio yourself you can use the low-level API.

For example, to open a file, you could do this:
…

Cancel the request (and all its subrequests). If the request is currently
executing it might still continue to execute, and in other cases it might
still take a while till the request is cancelled.

When cancelled, the finish callback will not be invoked.

C<EIO_CANCELLED> is still true for requests that have successfully
executed, as long as C<eio_cancel> was called on them at some point.

=back
…
=item eio_readahead (int fd, off_t offset, size_t length, int pri, eio_cb cb, void *data)

Calls C<readahead(2)>. If the syscall is missing, then the call is
emulated by simply reading the data (currently in 64kiB chunks).
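
Such an emulation could look like the following self-contained sketch
(hypothetical illustration code operating on a C<FILE *>; libeio's actual
implementation differs):

  /* prefetch a byte range by reading it in 64 KiB chunks, discarding
     the data - the point is only to pull it into the page cache */
  #include <stdio.h>

  static long
  readahead_emul (FILE *f, long offset, long length)
  {
    char buf [65536]; /* 64 KiB chunk, as described above */
    long done = 0;

    if (fseek (f, offset, SEEK_SET))
      return -1;

    while (done < length)
      {
        long want = length - done;
        size_t got = fread (buf, 1, want > 65536 ? 65536 : (size_t)want, f);

        if (!got)
          break; /* EOF or error - readahead is only a hint anyway */

        done += got;
      }

    return done;
  }

  int
  main (void)
  {
    FILE *f = tmpfile ();
    int i;

    for (i = 0; i < 100000; i++)
      fputc (i & 0xff, f);

    printf ("prefetched %ld bytes\n", readahead_emul (f, 0, 100000));
    fclose (f);
    return 0;
  }
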

=item eio_syncfs (int fd, int pri, eio_cb cb, void *data)

Calls Linux' C<syncfs> syscall, if available. Returns C<-1> and sets
C<errno> to C<ENOSYS> if the call is missing, I<but still calls sync()>
if the C<fd> is C<< >= 0 >>, so you can probe for the availability of the
syscall with a negative C<fd> argument and check for C<-1/ENOSYS>.

=item eio_sync_file_range (int fd, off_t offset, size_t nbytes, unsigned int flags, int pri, eio_cb cb, void *data)

Calls C<sync_file_range>. If the syscall is missing, then this is the same
as calling C<fdatasync>.

…

=over 4

=item eio_mtouch (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Reads (C<flags == 0>) or modifies (C<flags == EIO_MT_MODIFY>) the given
memory area, page-wise, that is, it reads (or reads and writes back) the
first octet of every page that spans the memory area.

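The page-wise touching can be illustrated with this self-contained sketch
on a malloc'ed buffer (hypothetical illustration code, not part of
libeio):

  /* touch the first octet of every page spanning an 8-page buffer */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int
  main (void)
  {
    long page = sysconf (_SC_PAGESIZE);
    size_t len = 8 * (size_t)page;
    volatile unsigned char *addr = malloc (len);
    size_t pages = 0;
    size_t off;

    for (off = 0; off < len; off += page)
      {
        /* the EIO_MT_MODIFY case: read one octet and write it back */
        addr [off] = addr [off];
        pages++;
      }

    printf ("touched %zu pages\n", pages);
    free ((void *)addr);
    return 0;
  }
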
This can be used to page in some mmapped file, or dirty some pages. Note
that dirtying is an unlocked read-write access, so races can ensue when
…
request finish on its own.

=item 3) open callback adds more requests

In the open callback, if the open was not successful, copy C<<
req->errorno >> to C<< grp->errorno >> and set C<< grp->result >> to
C<-1> to signal an error.

Otherwise, malloc some memory or so and issue a read request, adding the
read request to the group.

=item 4) continue issuing requests till finished

In the read callback, check for errors and possibly continue with
C<eio_close> or any other eio request in the same way.

As soon as no new requests are added, the group request will finish. Make
sure you I<always> set C<< grp->result >> to some sensible value.

=back
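
Step 3 might be sketched as follows, assuming a global C<grp> created in
step 1 and a C<read_cb> defined alongside (hypothetical code; the buffer
size is arbitrary and error handling is simplified):

  static int
  open_cb (eio_req *req)
  {
    if (req->result < 0)
      {
        /* open failed: propagate the error to the group request */
        grp->errorno = req->errorno;
        grp->result  = -1;
      }
    else
      {
        /* open succeeded: malloc a buffer and add a read request
           for the new fd (req->result) to the group */
        char *buf = malloc (4096);

        eio_grp_add (grp, eio_read (req->result, buf, 4096, 0,
                                    EIO_PRI_DEFAULT, read_cb, buf));
      }

    return 0;
  }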

=head4 REQUEST LIMITING
…

#TODO

  void eio_grp_limit (eio_req *grp, int limit);
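
A sketch of how the limit might be used (hypothetical fragment; C<grp_cb>,
C<open_cb> and the path are placeholders):

  /* let at most 4 of the group's subrequests execute concurrently */
  eio_req *grp = eio_grp (grp_cb, 0);

  eio_grp_limit (grp, 4);
  eio_grp_add (grp, eio_open ("/some/path", O_RDONLY, 0,
                              EIO_PRI_DEFAULT, open_cb, 0));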

=back

=head1 LOW LEVEL REQUEST API

#TODO
…
This symbol governs the stack size for each eio thread. Libeio itself
was written to use very little stackspace, but when using C<EIO_CUSTOM>
requests, you might want to increase this.

If this symbol is undefined (the default) then libeio will use its default
stack size (C<sizeof (void *) * 4096> currently). In all other cases, the
value must be an expression that evaluates to the desired stack size.
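
For example, one might override it at compile time (512kiB is just a
hypothetical choice, not a recommendation):

  /* before compiling eio.c */
  #define EIO_STACKSIZE (512 * 1024)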

=back

=head1 PORTABILITY REQUIREMENTS