…
Unlike the name component C<stamp> might indicate, it is also used for
time differences throughout libeio.

=head2 FORK SUPPORT

Usage of pthreads in a program changes the semantics of fork
considerably. Specifically, only async-safe functions can be called after
fork. Libeio uses pthreads, so this applies, and makes using fork hard for
anything but relatively simple fork + exec uses.

This library only works in the process that initialised it: Forking is
fully supported, but using libeio in any other process than the one that
called C<eio_init> is not.

You might get around this by not I<using> libeio before (or after)
forking in the parent, and using it in the child afterwards. You could
also try to call the L<eio_init> function again in the child, which will
brutally reinitialise all data structures, which isn't POSIX conformant,
but typically works.

Otherwise, the only recommendation you should follow is: treat fork code
the same way you treat signal handlers, and only ever call C<eio_init> in
the process that uses it, and only once ever.

=head1 INITIALISATION/INTEGRATION

Before you can call any eio functions you first have to initialise the
library. The library integrates into any event loop, but can also be used
…
This function initialises the library. On success it returns C<0>, on
failure it returns C<-1> and sets C<errno> appropriately.

It accepts two function pointers specifying callbacks as argument, both of
which can be C<0>, in which case the callback isn't called.

There is currently no way to change these callbacks later, or to
"uninitialise" the library again.

=item want_poll callback

The C<want_poll> callback is invoked whenever libeio wants attention (i.e.
it wants to be polled by calling C<eio_poll>). It is "edge-triggered",
…
=back

For libev, you would typically use an C<ev_async> watcher: the
C<want_poll> callback would invoke C<ev_async_send> to wake up the event
loop. Inside the callback set for the watcher, one would call C<eio_poll
()>.

If C<eio_poll ()> is configured to not handle all results in one go
(i.e. it returns C<-1>) then you should start an idle watcher that calls
C<eio_poll> until it returns something C<!= -1>.

A full-featured connector between libeio and libev would look as follows
(if C<eio_poll> is handling all requests, it can of course be simplified a
lot by removing the idle watcher logic):

   static struct ev_loop *loop;
   static ev_idle repeat_watcher;
   static ev_async ready_watcher;

   /* idle watcher callback, only used when eio_poll */
   /* didn't handle all results in one call */
   static void
   repeat (EV_P_ ev_idle *w, int revents)
   {
     if (eio_poll () != -1)
       ev_idle_stop (EV_A_ w);
   }

   /* eio has some results, process them */
   static void
   ready (EV_P_ ev_async *w, int revents)
   {
     if (eio_poll () == -1)
       ev_idle_start (EV_A_ &repeat_watcher);
   }

   /* wake up the event loop */
   static void
   want_poll (void)
   {
     ev_async_send (loop, &ready_watcher);
   }

   void
   my_init_eio (void)
   {
     loop = EV_DEFAULT;

     ev_idle_init (&repeat_watcher, repeat);
     ev_async_init (&ready_watcher, ready);
     ev_async_start (loop, &ready_watcher);

     eio_init (want_poll, 0);
   }

For most other event loops, you would typically use a pipe - the event
loop should be told to wait for read readiness on the read end. In
C<want_poll> you would write a single byte, in C<done_poll> you would try
to read that byte, and in the callback for the read end, you would call
C<eio_poll>. The race is avoided here because the event loop should invoke
your callback again and again until the byte has been read (as the pipe
read callback does not read it, only C<done_poll>).
|
|

You don't have to take special care in the case C<eio_poll> doesn't handle
all requests, as the done callback will not be invoked, so the event loop
will still signal readiness for the pipe until I<all> results have been
processed.

=head2 CONFIGURATION

|
|
The functions in this section can sometimes be useful, but the default
configuration will do in most cases, so you should skip this section on
first reading.

=over 4
|
|

=item eio_set_max_poll_time (eio_tstamp nseconds)

This causes C<eio_poll ()> to return after it has detected that it was
running for C<nseconds> seconds or longer (this number can be fractional).

This can be used to limit the amount of time spent handling eio requests,
for example, in interactive programs, you might want to limit this time to
C<0.01> seconds or so.

Note that:

a) libeio doesn't know how long your request callbacks take, so the time
spent in C<eio_poll> is up to one callback invocation longer than this
interval.

b) this is implemented by calling C<gettimeofday> after each request,
which can be costly.

c) at least one request will be handled.
|
|

=item eio_set_max_poll_reqs (unsigned int nreqs)

When C<nreqs> is non-zero, then C<eio_poll> will not handle more than
C<nreqs> requests per invocation. This is a less costly way to limit the
amount of work done by C<eio_poll> than setting a time limit.

If you know your callbacks are generally fast, you could use this to
encourage interactiveness in your programs by setting it to C<10>, C<100>
or even C<1000>.
|
|

=item eio_set_min_parallel (unsigned int nthreads)

Make sure libeio can handle at least this many requests in parallel. It
might be able to handle more.

=item eio_set_max_parallel (unsigned int nthreads)

Set the maximum number of threads that libeio will spawn.

=item eio_set_max_idle (unsigned int nthreads)

Libeio uses threads internally to handle most requests, and will start
and stop threads on demand.

This call can be used to limit the number of idle threads (threads without
work to do): libeio will keep some threads idle in preparation for more
requests, but never more than C<nthreads> threads.

In addition to this, libeio will also stop threads when they are idle for
a few seconds, regardless of this setting.
|
|

=item unsigned int eio_nthreads ()

Return the number of worker threads currently running.

=item unsigned int eio_nreqs ()

Return the number of requests currently handled by libeio. This is the
total number of requests that have been submitted to libeio, but not yet
destroyed.

=item unsigned int eio_nready ()

Returns the number of ready requests, i.e. requests that have been
submitted but have not yet entered the execution phase.

=item unsigned int eio_npending ()

Returns the number of pending requests, i.e. requests that have been
executed and have results, but have not been finished yet by a call to
|
|
C<eio_poll>.

=back

=head1 HIGH LEVEL REQUEST API

Libeio has both a high-level API, which consists of calling a request
…

You submit a request by calling the relevant C<eio_TYPE> function with the
required parameters, a callback of type C<int (*eio_cb)(eio_req *req)>
(called C<eio_cb> below) and a freely usable C<void *data> argument.

The return value will either be 0, in case something went really wrong
(which can basically only happen on very fatal errors, such as C<malloc>
returning 0, which is rather unlikely), or a pointer to the newly-created
and submitted C<eio_req *>.

The callback will be called with an C<eio_req *> which contains the
results of the request. The members you can access inside that structure
vary from request to request, except for:

…
=item C<void *data>

The C<void *data> member simply stores the value of the C<data> argument.

=back

Members not explicitly described as accessible must not be
accessed. Specifically, there is no guarantee that any members will still
have the value they had when the request was submitted.

The return value of the callback is normally C<0>, which tells libeio to
continue normally. If a callback returns a nonzero value, libeio will
stop processing results (in C<eio_poll>) and will return the value to its
caller.

Memory areas passed to libeio wrappers must stay valid as long as a
request executes, with the exception of paths, which are being copied
internally. Any memory libeio itself allocates will be freed after the
finish callback has been called. If you want to manage all memory passed
to libeio yourself you can use the low-level API.

For example, to open a file, you could do this:
   ...
   }

   /* the first three arguments are passed to open(2) */
   /* the remaining are priority, callback and data */
   if (!eio_open ("/etc/passwd", O_RDONLY, 0, 0, file_open_done, 0))
     abort (); /* something went wrong, we will all die!!! */

Note that you additionally need to call C<eio_poll> when the C<want_poll>
callback indicates that requests are ready to be processed.
|
|

=head2 CANCELLING REQUESTS

Sometimes the need for a request goes away before the request is
finished. In that case, one can cancel the request by a call to
C<eio_cancel>:

=over 4

=item eio_cancel (eio_req *req)

Cancel the request (and all its subrequests). If the request is currently
executing it might still continue to execute, and in other cases it might
still take a while till the request is cancelled.

Even if cancelled, the finish callback will still be invoked - the
callbacks of all cancellable requests need to check whether the request
has been cancelled by calling C<EIO_CANCELLED (req)>:

   static int
   my_eio_cb (eio_req *req)
   {
     if (EIO_CANCELLED (req))
       return 0;

     /* ... handle the results normally ... */
     return 0;
   }

In addition, cancelled requests will I<either> have C<< req->result >>
set to C<-1> and C<errno> to C<ECANCELED>, or I<otherwise> they were
successfully executed, despite being cancelled (e.g. when they have
already been executed at the time they were cancelled).

C<EIO_CANCELLED> is still true for requests that have successfully
executed, as long as C<eio_cancel> was called on them at some point.

=back

=head2 AVAILABLE REQUESTS

The following request functions are available. I<All> of them return the
C<eio_req *> on success and C<0> on failure, and I<all> of them have the
…
custom data value as C<data>.

=head3 POSIX API WRAPPERS

These requests simply wrap the POSIX call of the same name, with the
same arguments. If a function is not implemented by the OS and cannot be
emulated in some way, then all of these return C<-1> and set C<errno> to
C<ENOSYS>.

=over 4

=item eio_open (const char *path, int flags, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_truncate (const char *path, off_t offset, int pri, eio_cb cb, void *data)

=item eio_chown (const char *path, uid_t uid, gid_t gid, int pri, eio_cb cb, void *data)

=item eio_chmod (const char *path, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_mkdir (const char *path, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_rmdir (const char *path, int pri, eio_cb cb, void *data)

=item eio_unlink (const char *path, int pri, eio_cb cb, void *data)

=item eio_utime (const char *path, eio_tstamp atime, eio_tstamp mtime, int pri, eio_cb cb, void *data)

=item eio_mknod (const char *path, mode_t mode, dev_t dev, int pri, eio_cb cb, void *data)

=item eio_link (const char *path, const char *new_path, int pri, eio_cb cb, void *data)

=item eio_symlink (const char *path, const char *new_path, int pri, eio_cb cb, void *data)

=item eio_rename (const char *path, const char *new_path, int pri, eio_cb cb, void *data)

=item eio_mlock (void *addr, size_t length, int pri, eio_cb cb, void *data)

=item eio_close (int fd, int pri, eio_cb cb, void *data)

=item eio_sync (int pri, eio_cb cb, void *data)

…

Not surprisingly, pread and pwrite are not thread-safe on Darwin (OS/X),
so it is advised not to submit multiple requests on the same fd on this
horrible pile of garbage.

|
|
=item eio_mlockall (int flags, int pri, eio_cb cb, void *data)

Like C<mlockall>, but the flag value constants are called
C<EIO_MCL_CURRENT> and C<EIO_MCL_FUTURE>.

=item eio_msync (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Just like msync, except that the flag values are called C<EIO_MS_ASYNC>,
C<EIO_MS_INVALIDATE> and C<EIO_MS_SYNC>.

=item eio_readlink (const char *path, int pri, eio_cb cb, void *data)

If successful, the path read by C<readlink(2)> can be accessed via C<<
req->ptr2 >> and is I<NOT> null-terminated, with the length specified as
C<< req->result >>.

   if (req->result >= 0)
     {
       char *target = strndup ((char *)req->ptr2, req->result);

       free (target);
     }

=item eio_realpath (const char *path, int pri, eio_cb cb, void *data)

Similar to the realpath libc function, but unlike that one, C<<
req->result >> is C<-1> on failure. On success, the result is the length
of the returned path in C<ptr2> (which is I<NOT> 0-terminated) - this is
similar to readlink.

=item eio_stat (const char *path, int pri, eio_cb cb, void *data)

=item eio_lstat (const char *path, int pri, eio_cb cb, void *data)

=item eio_fstat (int fd, int pri, eio_cb cb, void *data)

Stats a file - if C<< req->result >> indicates success, then you can
access the C<struct stat>-like structure via C<< req->ptr2 >>:

   EIO_STRUCT_STAT *statdata = (EIO_STRUCT_STAT *)req->ptr2;

=item eio_statvfs (const char *path, int pri, eio_cb cb, void *data)

=item eio_fstatvfs (int fd, int pri, eio_cb cb, void *data)

Stats a filesystem - if C<< req->result >> indicates success, then you can
access the C<struct statvfs>-like structure via C<< req->ptr2 >>:

   EIO_STRUCT_STATVFS *statdata = (EIO_STRUCT_STATVFS *)req->ptr2;

=back

=head3 READING DIRECTORIES

Reading directories sounds simple, but can be rather demanding, especially
if you want to do stuff such as traversing a directory hierarchy or
processing all files in a directory. Libeio can assist these complex tasks
with its C<eio_readdir> call.

=over 4

=item eio_readdir (const char *path, int flags, int pri, eio_cb cb, void *data)
…
(via the C<opendir>, C<readdir> and C<closedir> calls) and returns either
the names or an array of C<struct eio_dirent>, depending on the C<flags>
argument.

The C<< req->result >> indicates either the number of files found, or
C<-1> on error. On success, null-terminated names can be found as C<< req->ptr2 >>,
and C<struct eio_dirents>, if requested by C<flags>, can be found via C<<
req->ptr1 >>.

Here is an example that prints all the names:

…

If this flag is specified, then, in addition to the names in C<ptr2>,
also an array of C<struct eio_dirent> is returned, in C<ptr1>. A C<struct
eio_dirent> looks like this:

   struct eio_dirent
   {
     int nameofs;            /* offset of null-terminated name string in (char *)req->ptr2 */
     unsigned short namelen; /* size of filename without trailing 0 */
     unsigned char type;     /* one of EIO_DT_* */
     signed char score;      /* internal use */
     ino_t inode;            /* the inode number, if available, otherwise unspecified */
   };

The only members you normally would access are C<nameofs>, which is the
byte-offset from C<ptr2> to the start of the name, C<namelen> and C<type>.

C<type> can be one of:
…
When this flag is specified, then the names will be returned in an order
suitable for stat()'ing each one. That is, when you plan to stat()
all files in the given directory, then the returned order will likely
be fastest.

If both this flag and C<EIO_READDIR_DIRS_FIRST> are specified, then the
likely directories come first, resulting in a less optimal stat order.

=item EIO_READDIR_FOUND_UNKNOWN

This flag should not be specified when calling C<eio_readdir>. Instead,
it is being set by C<eio_readdir> (you can access the C<flags> via C<<
req->int1 >>) when any of the C<type>'s found were C<EIO_DT_UNKNOWN>. The
absence of this flag therefore indicates that all C<type>'s are known,
which can be used to speed up some algorithms.

A typical use case would be to identify all subdirectories within a
directory - you would ask C<eio_readdir> for C<EIO_READDIR_DIRS_FIRST>. If
then this flag is I<NOT> set, then all the entries at the beginning of the
…
=item eio_readahead (int fd, off_t offset, size_t length, int pri, eio_cb cb, void *data)

Calls C<readahead(2)>. If the syscall is missing, then the call is
emulated by simply reading the data (currently in 64kiB chunks).

|
|
=item eio_syncfs (int fd, int pri, eio_cb cb, void *data)

Calls Linux' C<syncfs> syscall, if available. Returns C<-1> and sets
C<errno> to C<ENOSYS> if the call is missing I<but still calls sync()>,
if the C<fd> is C<< >= 0 >>, so you can probe for the availability of the
syscall with a negative C<fd> argument and checking for C<-1/ENOSYS>.

=item eio_sync_file_range (int fd, off_t offset, size_t nbytes, unsigned int flags, int pri, eio_cb cb, void *data)

Calls C<sync_file_range>. If the syscall is missing, then this is the same
as calling C<fdatasync>.

|
|
613 | Flags can be any combination of C<EIO_SYNC_FILE_RANGE_WAIT_BEFORE>, |
|
|
614 | C<EIO_SYNC_FILE_RANGE_WRITE> and C<EIO_SYNC_FILE_RANGE_WAIT_AFTER>. |
|
|
615 | |
|
|
616 | =item eio_fallocate (int fd, int mode, off_t offset, off_t len, int pri, eio_cb cb, void *data) |
|
|
617 | |
|
|
618 | Calls C<fallocate> (note: I<NOT> C<posix_fallocate>!). If the syscall is |
|
|
619 | missing, then it returns failure and sets C<errno> to C<ENOSYS>. |
|
|
620 | |
|
|
621 | The C<mode> argument can be C<0> (for behaviour similar to |
|
|
622 | C<posix_fallocate>), or C<EIO_FALLOC_FL_KEEP_SIZE>, which keeps the size |
|
|
623 | of the file unchanged (but still preallocates space beyond end of file). |
|
|
624 | |
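As an illustration, preallocating space for an already-open descriptor (the C<fd> is assumed) without changing the visible file size might be sketched like this:

```c
#include "eio.h"

/* treat preallocation as a hint: if the syscall is missing the
   request fails with ENOSYS and we simply carry on */
static int
prealloc_done (eio_req *req)
{
  return 0;
}

static void
prealloc_hint (int fd)
{
  /* reserve 10 MiB beyond the current end of file, size unchanged */
  eio_fallocate (fd, EIO_FALLOC_FL_KEEP_SIZE, 0, 10 * 1024 * 1024,
                 0, prealloc_done, 0);
}
```
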
=back

=head3 LIBEIO-SPECIFIC REQUESTS

These requests are specific to libeio and do not correspond to any OS call.

=over 4

=item eio_mtouch (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Reads (C<flags == 0>) or modifies (C<flags == EIO_MT_MODIFY>) the given
memory area, page-wise, that is, it reads (or reads and writes back) the
first octet of every page that spans the memory area.

This can be used to page in some mmapped file, or dirty some pages. Note
that dirtying is an unlocked read-write access, so races can ensue when
some other thread modifies the data stored in that memory area.

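One plausible use, paging in a freshly mapped file in the background (a sketch; C<fd> and C<length> are assumed to describe an already-open file of at least that size):

```c
#include <sys/mman.h>
#include "eio.h"

static int
touch_done (eio_req *req)
{
  /* every page spanned by the mapping has now been read once */
  return 0;
}

static void *
map_and_prefault (int fd, size_t length)
{
  void *addr = mmap (0, length, PROT_READ, MAP_SHARED, fd, 0);

  if (addr != MAP_FAILED)
    /* flags == 0: read-only touch, one octet per page */
    eio_mtouch (addr, length, 0, 0, touch_done, 0);

  return addr;
}
```
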
=item eio_custom (void (*)(eio_req *) execute, int pri, eio_cb cb, void *data)

Executes a custom request, i.e., a user-specified callback.

The callback gets the C<eio_req *> as parameter and is expected to read
and modify any request-specific members. Specifically, it should set C<<
…
     req->result = open (req->data, O_RDONLY);
   }

   eio_custom (my_open, 0, my_open_done, "/etc/passwd");

=item eio_busy (eio_tstamp delay, int pri, eio_cb cb, void *data)

This is a request that takes C<delay> seconds to execute, but otherwise
does nothing - it simply puts one of the worker threads to sleep for this
long.

This request can be used to artificially increase load, e.g. for debugging
or benchmarking reasons.

=item eio_nop (int pri, eio_cb cb, void *data)

This request does nothing, except go through the whole request cycle. This
can be used to measure latency or in some cases to simplify code, but is
not really of much use.

=back

=head3 GROUPING AND LIMITING REQUESTS

There is one more rather special request, C<eio_grp>. It is a very special
aio request: Instead of doing something, it is a container for other eio
requests.

There are two primary use cases for this: a) bundle many requests into a
single, composite, request with a definite callback and the ability to
cancel the whole request with its subrequests and b) limiting the number
of "active" requests.

Further below you will find more discussion of these topics - first
follows the reference section detailing the request generator and other
methods.

=over 4

=item eio_req *grp = eio_grp (eio_cb cb, void *data)

Creates, submits and returns a group request. Note that it doesn't have a
priority, unlike all other requests.

=item eio_grp_add (eio_req *grp, eio_req *req)

Adds a request to the request group.

=item eio_grp_cancel (eio_req *grp)

Cancels all requests I<in> the group, but I<not> the group request
itself. You can cancel the group request I<and> all subrequests via a
normal C<eio_cancel> call.

=back

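Tying these functions together, bundling two C<eio_stat> requests into a single group might look like this (a sketch; the paths and the surrounding C<eio_poll> loop are assumed):

```c
#include "eio.h"

/* runs once, after both stat subrequests have finished */
static int
grp_done (eio_req *grp)
{
  return 0;
}

static int
stat_done (eio_req *req)
{
  /* per-file handling; result code in req->result, stat data in req->ptr2 */
  return 0;
}

static eio_req *
stat_both (const char *path1, const char *path2)
{
  eio_req *grp = eio_grp (grp_done, 0);

  /* add subrequests before the next eio_poll, or the group finishes empty */
  eio_grp_add (grp, eio_stat (path1, 0, stat_done, 0));
  eio_grp_add (grp, eio_stat (path2, 0, stat_done, 0));

  return grp;
}
```
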
=head4 GROUP REQUEST LIFETIME

Left alone, a group request will instantly move to the pending state and
will be finished at the next call of C<eio_poll>.

The usefulness stems from the fact that, if a subrequest is added to a
group I<before> a call to C<eio_poll>, via C<eio_grp_add>, then the group
will not finish until all the subrequests have finished.

So the usage cycle of a group request is like this: after it is created,
you normally instantly add a subrequest. If none is added, the group
request will finish on its own. As long as subrequests are added before
the group request is finished it will be kept from finishing, that is the
callbacks of any subrequests can, in turn, add more requests to the group,
and as long as any requests are active, the group request itself will not
finish.

=head4 CREATING COMPOSITE REQUESTS

Imagine you wanted to create an C<eio_load> request that opens a file,
reads it and closes it. This means it has to execute at least three eio
requests, but for various reasons it might be nice if that request looked
like any other eio request.

This can be done with groups:

=over 4

=item 1) create the request object

Create a group that contains all further requests. This is the request you
can return as "the load request".

=item 2) open the file, maybe

Next, open the file with C<eio_open> and add the request to the group
request and you are finished setting up the request.

If, for some reason, you cannot C<eio_open> (path is a null ptr?) you
can set C<< grp->result >> to C<-1> to signal an error and let the group
request finish on its own.

=item 3) open callback adds more requests

In the open callback, if the open was not successful, copy C<<
req->errorno >> to C<< grp->errorno >> and set C<< grp->result >> to
C<-1> to signal an error.

Otherwise, malloc some memory or so and issue a read request, adding the
read request to the group.

=item 4) continue issuing requests till finished

In the read callback, check for errors and possibly continue with
C<eio_close> or any other eio request in the same way.

As soon as no new requests are added the group request will finish. Make
sure you I<always> set C<< grp->result >> to some sensible value.

=back

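The steps above might be sketched as follows - a condensed illustration, not a finished implementation (the name C<eio_load_start> is hypothetical, a fixed 4kiB buffer stands in for real buffer management, and freeing the buffer is left out; C<< req->int1 >> holds the file descriptor of fd-based requests):

```c
#include <fcntl.h>
#include <stdlib.h>
#include "eio.h"

static int load_read_done (eio_req *req);

/* step 3: open finished - on error fail the group, else chain the read */
static int
load_open_done (eio_req *req)
{
  eio_req *grp = req->data;

  if (req->result < 0)
    {
      grp->errorno = req->errorno;
      grp->result  = -1;          /* signal the error, group then finishes */
    }
  else
    {
      char *buf = malloc (4096);

      eio_grp_add (grp, eio_read (req->result, buf, 4096, 0,
                                  0, load_read_done, grp));
    }

  return 0;
}

/* step 4: read finished - record the result and chain the close */
static int
load_read_done (eio_req *req)
{
  eio_req *grp = req->data;

  grp->result = req->result;      /* e.g. the number of bytes read */
  eio_grp_add (grp, eio_close (req->int1, 0, 0, grp));

  return 0;
}

/* steps 1 + 2: create the group and add the initial open request */
eio_req *
eio_load_start (const char *path, eio_cb cb, void *data)
{
  eio_req *grp = eio_grp (cb, data);

  eio_grp_add (grp, eio_open (path, O_RDONLY, 0, 0, load_open_done, grp));

  return grp;
}
```
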
=head4 REQUEST LIMITING

#TODO

   void eio_grp_limit (eio_req *grp, int limit);

=back

…
=head1 ANATOMY AND LIFETIME OF AN EIO REQUEST

A request is represented by a structure of type C<eio_req>. To initialise
it, clear it to all zero bytes:

   eio_req req;

   memset (&req, 0, sizeof (req));

A more common way to initialise a new C<eio_req> is to use C<calloc>:

   eio_req *req = calloc (1, sizeof (*req));

In either case, libeio neither allocates, initialises nor frees the
C<eio_req> structure for you - it merely uses it.

zero

#TODO

=head2 CONFIGURATION

The functions in this section can sometimes be useful, but the default
configuration will do in most cases, so you should skip this section on
first reading.

=over 4

=item eio_set_max_poll_time (eio_tstamp nseconds)

This causes C<eio_poll ()> to return after it has detected that it was
running for C<nseconds> seconds or longer (this number can be fractional).

This can be used to limit the amount of time spent handling eio requests,
for example, in interactive programs, you might want to limit this time to
C<0.01> seconds or so.

Note that:

=over 4

=item a) libeio doesn't know how long your request callbacks take, so the
time spent in C<eio_poll> is up to one callback invocation longer than
this interval.

=item b) this is implemented by calling C<gettimeofday> after each
request, which can be costly.

=item c) at least one request will be handled.

=back

=item eio_set_max_poll_reqs (unsigned int nreqs)

When C<nreqs> is non-zero, then C<eio_poll> will not handle more than
C<nreqs> requests per invocation. This is a less costly way to limit the
amount of work done by C<eio_poll> than setting a time limit.

If you know your callbacks are generally fast, you could use this to
encourage interactiveness in your programs by setting it to C<10>, C<100>
or even C<1000>.

=item eio_set_min_parallel (unsigned int nthreads)

Make sure libeio can handle at least this many requests in parallel. It
might be able to handle more.

=item eio_set_max_parallel (unsigned int nthreads)

Set the maximum number of threads that libeio will spawn.

=item eio_set_max_idle (unsigned int nthreads)

Libeio uses threads internally to handle most requests, and will start and
stop threads on demand.

This call can be used to limit the number of idle threads (threads without
work to do): libeio will keep some threads idle in preparation for more
requests, but never more than C<nthreads> of them.

In addition to this, libeio will also stop threads when they are idle for
a few seconds, regardless of this setting.

=item unsigned int eio_nthreads ()

Return the number of worker threads currently running.

=item unsigned int eio_nreqs ()

Return the number of requests currently handled by libeio. This is the
total number of requests that have been submitted to libeio, but not yet
destroyed.

=item unsigned int eio_nready ()

Returns the number of ready requests, i.e. requests that have been
submitted but have not yet entered the execution phase.

=item unsigned int eio_npending ()

Returns the number of pending requests, i.e. requests that have been
executed and have results, but have not been finished yet by a call to
C<eio_poll>.

=back
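As a rough example, an interactive program might combine the calls above like this (the values are purely illustrative):

```c
#include "eio.h"

static void
configure_eio (void)
{
  eio_set_max_poll_time (0.01); /* spend at most ~10ms per eio_poll */
  eio_set_max_poll_reqs (100);  /* and run at most 100 callbacks per poll */
  eio_set_min_parallel (4);     /* be able to run 4 requests in parallel */
  eio_set_max_parallel (16);    /* but never spawn more than 16 threads */
  eio_set_max_idle (2);         /* keep at most 2 idle threads around */
}
```
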

=head1 EMBEDDING

Libeio can be embedded directly into programs. This functionality is not
documented and not (yet) officially supported.
…
This symbol governs the stack size for each eio thread. Libeio itself
was written to use very little stack space, but when using C<EIO_CUSTOM>
requests, you might want to increase this.

If this symbol is undefined (the default) then libeio will use its default
stack size (C<sizeof (void *) * 4096> currently). If it is defined, but
C<0>, then the default operating system stack size will be used. In all
other cases, the value must be an expression that evaluates to the desired
stack size.
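For example, a program whose C<EIO_CUSTOM> callbacks need deep stacks might build libeio with a larger value (the number below is purely illustrative):

```c
/* in config.h, or -DEIO_STACKSIZE='(512 * 1024)' on the compiler command line */
#define EIO_STACKSIZE (512 * 1024) /* 512kiB per worker thread */
```
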

=back