…
similar functions, as well as less common ones such as C<mknod>, C<futime>
or C<readlink>.

It also offers wrappers around C<sendfile> (Solaris, Linux, HP-UX and
FreeBSD, with emulation on other platforms) and C<readahead> (Linux, with
emulation elsewhere).

The goal is to enable you to write fully non-blocking programs. For
example, in a game server, you would not want to freeze for a few seconds
just because the server is running a backup and you happen to call
C<readdir>.
…
Unlike what the name component C<stamp> might indicate, it is also used
for time differences throughout libeio.

=head2 FORK SUPPORT

Usage of pthreads in a program changes the semantics of fork
considerably. Specifically, only async-signal-safe functions can be
called after fork. Libeio uses pthreads, so this applies, and makes using
fork hard for anything but relatively simple fork + exec uses.

This library only works in the process that initialised it: Forking is
fully supported, but using libeio in any other process than the one that
called C<eio_init> is not.

You might get around this by not I<using> libeio before (or after)
forking in the parent, and using it in the child afterwards. You could
also try to call the C<eio_init> function again in the child, which will
brutally reinitialise all data structures, which isn't POSIX conformant,
but typically works.

Otherwise, the only recommendation you should follow is: treat fork code
the same way you treat signal handlers, and only ever call C<eio_init> in
the process that uses it, and only once ever.
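The "fork + exec only" pattern recommended above can be sketched in plain
POSIX C. This is an illustration of the general rule, not libeio code; the
helper name C<spawn> is made up here:

```c
#include <unistd.h>
#include <sys/wait.h>

/* In a pthreads program (and thus in any program using libeio), keep the
   child side of fork () restricted to async-signal-safe calls: exec or
   _exit, and nothing else. */
static pid_t
spawn (const char *path, char *const argv[])
{
  pid_t pid = fork ();

  if (pid == 0)
    {
      /* child - only async-signal-safe functions from here on */
      execv (path, argv);
      _exit (127); /* exec failed */
    }

  return pid; /* parent: child pid, or -1 if fork failed */
}
```

The parent can keep using libeio as usual; the child never touches it.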

=head1 INITIALISATION/INTEGRATION

Before you can call any eio functions you first have to initialise the
library. The library integrates into any event loop, but can also be used
…
This function initialises the library. On success it returns C<0>, on
failure it returns C<-1> and sets C<errno> appropriately.

It accepts two function pointers specifying callbacks as arguments, both
of which can be C<0>, in which case the callback isn't called.

There is currently no way to change these callbacks later, or to
"uninitialise" the library again.

=item want_poll callback

The C<want_poll> callback is invoked whenever libeio wants attention (i.e.
it wants to be polled by calling C<eio_poll>). It is "edge-triggered",
…
This callback is invoked when libeio detects that all pending requests
have been handled. It is "edge-triggered", that is, it will only be
called once after C<want_poll>. To put it differently, C<want_poll> and
C<done_poll> are invoked in pairs: after C<want_poll> you have to call
C<eio_poll ()> until either C<eio_poll> indicates that everything has been
handled or C<done_poll> has been called, which signals the same - only one
method is needed.

Note that C<eio_poll> might return after C<done_poll> and C<want_poll>
have been called again, so watch out for races in your code.

It is quite common to have an empty C<done_poll> callback and only use
the return value from C<eio_poll>, or, when C<eio_poll> is configured to
handle all outstanding replies, it's enough to call C<eio_poll> once.

As with C<want_poll>, this callback is called while locks are being held,
so you I<must not call any libeio functions from within this callback>.

=item int eio_poll ()

This function has to be called whenever there are pending requests that
need finishing. You usually call this after C<want_poll> has indicated
…

If C<eio_poll ()> is configured to not handle all results in one go
(i.e. it returns C<-1>) then you should start an idle watcher that calls
C<eio_poll> until it returns something C<!= -1>.

A full-featured connector between libeio and libev would look as follows
(if C<eio_poll> is handling all requests, it can of course be simplified a
lot by removing the idle watcher logic):

  static struct ev_loop *loop;
  static ev_idle repeat_watcher;
…
  {
    loop = EV_DEFAULT;

    ev_idle_init (&repeat_watcher, repeat);
    ev_async_init (&ready_watcher, ready);
    ev_async_start (loop, &ready_watcher);

    eio_init (want_poll, 0);
  }

For most other event loops, you would typically use a pipe - the event
…
to read that byte, and in the callback for the read end, you would call
C<eio_poll>.

You don't have to take special care in the case C<eio_poll> doesn't handle
all requests, as the done callback will not be invoked, so the event loop
will still signal readiness for the pipe until I<all> results have been
processed.
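
The pipe technique can be sketched as follows in plain POSIX C, with the
actual C<eio_poll> loop left as a comment. The names C<wakeup_init> and
C<on_wakeup> are made up for this illustration:

```c
#include <unistd.h>
#include <fcntl.h>

static int wakeup_pipe[2]; /* [0] = read end, [1] = write end */

static int
wakeup_init (void)
{
  if (pipe (wakeup_pipe))
    return -1;

  /* make the read end nonblocking, so draining can never block the loop */
  if (fcntl (wakeup_pipe[0], F_SETFL,
             fcntl (wakeup_pipe[0], F_GETFL) | O_NONBLOCK) == -1)
    return -1;

  return 0;
}

/* use this as the want_poll callback - write is safe from other threads */
static void
want_poll (void)
{
  char c = 0;
  ssize_t r = write (wakeup_pipe[1], &c, 1);
  (void)r; /* a full pipe still means a wakeup is already pending */
}

/* call this when the event loop reports the read end as readable */
static int
on_wakeup (void)
{
  char buf[64];
  int total = 0;
  ssize_t n;

  while ((n = read (wakeup_pipe[0], buf, sizeof buf)) > 0)
    total += n; /* drain all pending wakeup bytes */

  /* here you would call eio_poll () until it returns != -1 */
  return total;
}
```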


=head1 HIGH LEVEL REQUEST API

…

The C<void *data> member simply stores the value of the C<data> argument.

=back

Members not explicitly described as accessible must not be
accessed. Specifically, there is no guarantee that any members will still
have the value they had when the request was submitted.

The return value of the callback is normally C<0>, which tells libeio to
continue normally. If a callback returns a nonzero value, libeio will
stop processing results (in C<eio_poll>) and will return the value to its
caller.

Memory areas passed to libeio wrappers must stay valid as long as a
request executes, with the exception of paths, which are being copied
internally. Any memory libeio itself allocates will be freed after the
finish callback has been called. If you want to manage all memory passed
to libeio yourself, you can use the low-level API.

For example, to open a file, you could do this:
…
  }

  /* the first three arguments are passed to open(2) */
  /* the remaining are priority, callback and data */
  if (!eio_open ("/etc/passwd", O_RDONLY, 0, 0, file_open_done, 0))
    abort (); /* something went wrong, we will all die!!! */

Note that you additionally need to call C<eio_poll> when the C<want_poll>
callback indicates that requests are ready to be processed.

=head2 CANCELLING REQUESTS

Sometimes the need for a request goes away before the request is
finished. In that case, one can cancel the request by a call to
C<eio_cancel>:

=over 4

=item eio_cancel (eio_req *req)

Cancel the request (and all its subrequests). If the request is currently
executing it might still continue to execute, and in other cases it might
still take a while till the request is cancelled.

When cancelled, the finish callback will not be invoked.

C<EIO_CANCELLED> is still true for requests that have successfully
executed, as long as C<eio_cancel> was called on them at some point.

=back

=head2 AVAILABLE REQUESTS

…
    free (target);
  }

=item eio_realpath (const char *path, int pri, eio_cb cb, void *data)

Similar to the realpath libc function, but unlike that one, C<<
req->result >> is C<-1> on failure. On success, the result is the length
of the returned path in C<ptr2> (which is I<NOT> 0-terminated) - this is
similar to readlink.

=item eio_stat (const char *path, int pri, eio_cb cb, void *data)

=item eio_lstat (const char *path, int pri, eio_cb cb, void *data)

…
=back

=head3 READING DIRECTORIES

Reading directories sounds simple, but can be rather demanding, especially
if you want to do stuff such as traversing a directory hierarchy or
processing all files in a directory. Libeio can assist these complex tasks
with its C<eio_readdir> call.

=over 4

=item eio_readdir (const char *path, int flags, int pri, eio_cb cb, void *data)
…
When this flag is specified, then the names will be returned in an order
suitable for stat()'ing each one. That is, when you plan to stat()
all files in the given directory, then the returned order will likely
be fastest.

If both this flag and C<EIO_READDIR_DIRS_FIRST> are specified, then the
likely directories come first, resulting in a less optimal stat order.

=item EIO_READDIR_FOUND_UNKNOWN

This flag should not be specified when calling C<eio_readdir>. Instead,
it is being set by C<eio_readdir> (you can access the C<flags> via C<<
req->int1 >>) when any of the C<type>'s found were C<EIO_DT_UNKNOWN>. The
absence of this flag therefore indicates that all C<type>'s are known,
which can be used to speed up some algorithms.

A typical use case would be to identify all subdirectories within a
directory - you would ask C<eio_readdir> for C<EIO_READDIR_DIRS_FIRST>. If
this flag is then I<NOT> set, then all the entries at the beginning of the
…
=item eio_readahead (int fd, off_t offset, size_t length, int pri, eio_cb cb, void *data)

Calls C<readahead(2)>. If the syscall is missing, then the call is
emulated by simply reading the data (currently in 64kiB chunks).

=item eio_syncfs (int fd, int pri, eio_cb cb, void *data)

Calls Linux' C<syncfs> syscall, if available. Returns C<-1> and sets
C<errno> to C<ENOSYS> if the call is missing I<but still calls sync()>,
if the C<fd> is C<< >= 0 >>, so you can probe for the availability of the
syscall with a negative C<fd> argument, checking for C<-1/ENOSYS>.

=item eio_sync_file_range (int fd, off_t offset, size_t nbytes, unsigned int flags, int pri, eio_cb cb, void *data)

Calls C<sync_file_range>. If the syscall is missing, then this is the same
as calling C<fdatasync>.

Flags can be any combination of C<EIO_SYNC_FILE_RANGE_WAIT_BEFORE>,
C<EIO_SYNC_FILE_RANGE_WRITE> and C<EIO_SYNC_FILE_RANGE_WAIT_AFTER>.

=item eio_fallocate (int fd, int mode, off_t offset, off_t len, int pri, eio_cb cb, void *data)

Calls C<fallocate> (note: I<NOT> C<posix_fallocate>!). If the syscall is
missing, then it returns failure and sets C<errno> to C<ENOSYS>.

The C<mode> argument can be C<0> (for behaviour similar to
C<posix_fallocate>), or C<EIO_FALLOC_FL_KEEP_SIZE>, which keeps the size
of the file unchanged (but still preallocates space beyond end of file).

=back

=head3 LIBEIO-SPECIFIC REQUESTS

These requests are specific to libeio and do not correspond to any OS call.

=over 4

=item eio_mtouch (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Reads (C<flags == 0>) or modifies (C<flags == EIO_MT_MODIFY>) the given
memory area, page-wise, that is, it reads (or reads and writes back) the
first octet of every page that spans the memory area.

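In effect it behaves like this standalone sketch (not libeio's actual
code; C<EIO_MT_MODIFY> is represented by a plain C<modify> flag here):

```c
#include <stdint.h>
#include <unistd.h>

/* touch the first octet of every page overlapping [addr, addr + length) */
static void
mtouch_emulated (void *addr, size_t length, int modify)
{
  size_t page = (size_t)sysconf (_SC_PAGESIZE);
  volatile unsigned char *p   = (volatile unsigned char *)addr;
  volatile unsigned char *end = p + length;

  if (!length)
    return;

  /* round down to the start of the page containing the first byte */
  p = (volatile unsigned char *)((uintptr_t)p & ~(uintptr_t)(page - 1));

  for (; p < end; p += page)
    {
      unsigned char v = *p; /* fault the page in */

      if (modify)
        *p = v; /* dirty it - an unlocked read-write access, racy by design */
    }
}
```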
This can be used to page in some mmapped file, or dirty some pages. Note
that dirtying is an unlocked read-write access, so races can ensue when
…

  eio_custom (my_open, 0, my_open_done, "/etc/passwd");

=item eio_busy (eio_tstamp delay, int pri, eio_cb cb, void *data)

This is a request that takes C<delay> seconds to execute, but otherwise
does nothing - it simply puts one of the worker threads to sleep for this
long.

This request can be used to artificially increase load, e.g. for debugging
or benchmarking reasons.
…
There are two primary use cases for this: a) bundle many requests into a
single, composite, request with a definite callback and the ability to
cancel the whole request with its subrequests and b) limiting the number
of "active" requests.

Further below you will find more discussion of these topics - first
follows the reference section detailing the request generator and other
methods.

=over 4

=item eio_req *grp = eio_grp (eio_cb cb, void *data)

Creates, submits and returns a group request. Note that it doesn't have a
priority, unlike all other requests.

=item eio_grp_add (eio_req *grp, eio_req *req)

Adds a request to the request group.

=item eio_grp_cancel (eio_req *grp)

Cancels all requests I<in> the group, but I<not> the group request
itself. You can cancel the group request I<and> all subrequests via a
normal C<eio_cancel> call.

=back

=head4 GROUP REQUEST LIFETIME

Left alone, a group request will instantly move to the pending state and
will be finished at the next call of C<eio_poll>.

The usefulness stems from the fact that, if a subrequest is added to a
group I<before> a call to C<eio_poll>, via C<eio_grp_add>, then the group
will not finish until all the subrequests have finished.

So the usage cycle of a group request is like this: after it is created,
you normally instantly add a subrequest. If none is added, the group
request will finish on its own. As long as subrequests are added before
the group request is finished it will be kept from finishing, that is,
the callbacks of any subrequests can, in turn, add more requests to the
group, and as long as any requests are active, the group request itself
will not finish.

=head4 CREATING COMPOSITE REQUESTS

Imagine you wanted to create an C<eio_load> request that opens a file,
reads it and closes it. This means it has to execute at least three eio
requests, but for various reasons it might be nice if that request looked
like any other eio request.

This can be done with groups:

=over 4

=item 1) create the request object

Create a group that contains all further requests. This is the request you
can return as "the load request".

=item 2) open the file, maybe

Next, open the file with C<eio_open> and add the request to the group
request and you are finished setting up the request.

If, for some reason, you cannot C<eio_open> (path is a null ptr?) you
can set C<< grp->result >> to C<-1> to signal an error and let the group
request finish on its own.

=item 3) open callback adds more requests

In the open callback, if the open was not successful, copy C<<
req->errorno >> to C<< grp->errorno >> and set C<< grp->result >> to
C<-1> to signal an error.

Otherwise, malloc some memory or so and issue a read request, adding the
read request to the group.

=item 4) continue issuing requests till finished

In the read callback, check for errors and possibly continue with
C<eio_close> or any other eio request in the same way.

As soon as no new requests are added, the group request will finish. Make
sure you I<always> set C<< grp->result >> to some sensible value.

=back
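
Put together, the four steps might look like the following sketch. This is
illustrative only: the exact C<eio_read> signature, the fixed 4096-byte
read length and all C<my_*> names are assumptions made here, not code from
libeio itself.

```
static int my_load_read_done (eio_req *req); /* step 4: eio_close etc., not shown */

static int
my_load_open_done (eio_req *req)
{
  eio_req *grp = (eio_req *)req->data; /* the group, stashed when submitting */

  if (req->result < 0)
    {
      /* step 3, error case: copy the error and let the group finish */
      grp->errorno = req->errorno;
      grp->result  = -1;
      return 0;
    }

  /* step 3, success: req->result is the fd - issue a read into the group;
     its callback continues with eio_close in the same way (step 4) */
  eio_grp_add (grp, eio_read (req->result, malloc (4096), 4096, 0,
                              0, my_load_read_done, grp));
  return 0;
}

/* step 1: create the group - this is the request handed to the caller */
eio_req *
my_load (const char *path, eio_cb done_cb, void *data)
{
  eio_req *grp = eio_grp (done_cb, data);

  if (path)
    /* step 2: open the file, adding the request to the group */
    eio_grp_add (grp, eio_open (path, O_RDONLY, 0, 0, my_load_open_done, grp));
  else
    grp->result = -1; /* cannot even submit an open - finish on our own */

  return grp;
}
```

Remember to set C<< grp->result >> to a sensible value in the last
callback, and to free the read buffer in the finish callback.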
|
|

=head4 REQUEST LIMITING

#TODO

  void eio_grp_limit (eio_req *grp, int limit);


=head1 LOW LEVEL REQUEST API

#TODO
…
for example, in interactive programs, you might want to limit this time to
C<0.01> seconds or so.

Note that:

=over 4

=item a) libeio doesn't know how long your request callbacks take, so the
time spent in C<eio_poll> is up to one callback invocation longer than
this interval.

=item b) this is implemented by calling C<gettimeofday> after each
request, which can be costly.

=item c) at least one request will be handled.

=back

=item eio_set_max_poll_reqs (unsigned int nreqs)

When C<nreqs> is non-zero, then C<eio_poll> will not handle more than
C<nreqs> requests per invocation. This is a less costly way to limit the
…
This symbol governs the stack size for each eio thread. Libeio itself
was written to use very little stack space, but when using C<EIO_CUSTOM>
requests, you might want to increase this.

If this symbol is undefined (the default) then libeio will use its
default stack size (C<sizeof (void *) * 4096> currently). In all other
cases, the value must be an expression that evaluates to the desired
stack size.

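For example, to give each worker thread a fixed 1 MiB stack, you might
define (a sketch; the right value depends on what your C<EIO_CUSTOM>
requests actually need):

```c
/* before compiling eio.c, e.g. in a config header or your build flags */
#define EIO_STACKSIZE (1024 * 1024) /* 1 MiB per worker thread */
```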
=back


=head1 PORTABILITY REQUIREMENTS