=head1 NAME

libeio - truly asynchronous POSIX I/O

=head1 SYNOPSIS

   #include <eio.h>

=head1 DESCRIPTION

The newest version of this document is also available as an html-formatted
web page you might find easier to navigate when reading it for the first
time: L<http://pod.tst.eu/http://cvs.schmorp.de/libeio/eio.pod>.

Note that this library is a by-product of the C<IO::AIO> perl
module, and many of the subtler points regarding request lifetimes
and so on are only documented in its documentation at the
moment: L<http://pod.tst.eu/http://cvs.schmorp.de/IO-AIO/AIO.pm>.

=head2 FEATURES

This library provides fully asynchronous versions of most POSIX functions
dealing with I/O. Unlike most asynchronous libraries, this not only
includes C<read> and C<write>, but also C<open>, C<stat>, C<unlink> and
similar functions, as well as less common ones such as C<mknod>, C<futime>
or C<readlink>.

It also offers wrappers around C<sendfile> (Solaris, Linux, HP-UX and
FreeBSD, with emulation on other platforms) and C<readahead> (Linux, with
emulation elsewhere).

The goal is to enable you to write fully non-blocking programs. For
example, in a game server, you would not want to freeze for a few seconds
just because the server is running a backup and you happen to call
C<readdir>.

=head2 TIME REPRESENTATION

Libeio represents time as a single floating point number, representing the
(fractional) number of seconds since the (POSIX) epoch (somewhere near
the beginning of 1970, details are complicated, don't ask). This type is
called C<eio_tstamp>, but it is guaranteed to be of type C<double> (or
better), so you can freely use C<double> yourself.

Unlike the name component C<stamp> might indicate, it is also used for
time differences throughout libeio.

=head2 FORK SUPPORT

Calling C<fork ()> is fully supported by this module - but you must not
rely on this. It is currently implemented in these steps:

   1. wait till all requests in "execute" state have been handled
      (basically requests that are already handed over to the kernel).
   2. fork
   3. in the parent, continue business as usual, done
   4. in the child, destroy all ready and pending requests and free the
      memory used by the worker threads. This gives you a fully empty
      libeio queue.

Note, however, since libeio does use threads, the above guarantee doesn't
cover your libc, for example, malloc and other libc functions are not
fork-safe, so there is very little you can do after a fork, and in fact,
the above might crash, and thus change.

=head1 INITIALISATION/INTEGRATION

Before you can call any eio functions you first have to initialise the
library. The library integrates into any event loop, but can also be used
without one, including in polling mode.

You have to provide the necessary glue yourself, however.

=over 4

=item int eio_init (void (*want_poll)(void), void (*done_poll)(void))

This function initialises the library. On success it returns C<0>, on
failure it returns C<-1> and sets C<errno> appropriately.

It accepts two function pointers specifying callbacks as argument, both of
which can be C<0>, in which case the callback isn't called.

=item want_poll callback

The C<want_poll> callback is invoked whenever libeio wants attention (i.e.
it wants to be polled by calling C<eio_poll>). It is "edge-triggered",
that is, it will only be called once when eio wants attention, until all
pending requests have been handled.

This callback is called while locks are being held, so I<you must
not call any libeio functions inside this callback>. That includes
C<eio_poll>. What you should do is notify some other thread, or wake up
your event loop, and then call C<eio_poll>.

=item done_poll callback

This callback is invoked when libeio detects that all pending requests
have been handled. It is "edge-triggered", that is, it will only be
called once after C<want_poll>. To put it differently, C<want_poll> and
C<done_poll> are invoked in pairs: after C<want_poll> you have to call
C<eio_poll ()> until either C<eio_poll> indicates that everything has been
handled or C<done_poll> has been called, which signals the same.

Note that C<eio_poll> might return after C<done_poll> and C<want_poll>
have been called again, so watch out for races in your code.

As with C<want_poll>, this callback is called while locks are being held,
so you I<must not call any libeio functions from within this callback>.

=item int eio_poll ()

This function has to be called whenever there are pending requests that
need finishing. You usually call this after C<want_poll> has indicated
that you should do so, but you can also call this function regularly to
poll for new results.

If any request invocation returns a non-zero value, then C<eio_poll ()>
immediately returns with that value as return value.

Otherwise, if all requests could be handled, it returns C<0>. If for some
reason not all requests have been handled, i.e. some are still pending, it
returns C<-1>.

=back

For libev, you would typically use an C<ev_async> watcher: the
C<want_poll> callback would invoke C<ev_async_send> to wake up the event
loop. Inside the callback set for the watcher, one would call C<eio_poll
()>.

If C<eio_poll ()> is configured to not handle all results in one go
(i.e. it returns C<-1>) then you should start an idle watcher that calls
C<eio_poll> until it returns something C<!= -1>.

A full-featured connector between libeio and libev would look as follows
(if C<eio_poll> is handling all requests, it can of course be simplified a
lot by removing the idle watcher logic):

   static struct ev_loop *loop;
   static ev_idle repeat_watcher;
   static ev_async ready_watcher;

   /* idle watcher callback, only used when eio_poll */
   /* didn't handle all results in one call */
   static void
   repeat (EV_P_ ev_idle *w, int revents)
   {
     if (eio_poll () != -1)
       ev_idle_stop (EV_A_ w);
   }

   /* eio has some results, process them */
   static void
   ready (EV_P_ ev_async *w, int revents)
   {
     if (eio_poll () == -1)
       ev_idle_start (EV_A_ &repeat_watcher);
   }

   /* wake up the event loop */
   static void
   want_poll (void)
   {
     ev_async_send (loop, &ready_watcher);
   }

   void
   my_init_eio (void)
   {
     loop = EV_DEFAULT;

     ev_idle_init (&repeat_watcher, repeat);
     ev_async_init (&ready_watcher, ready);
     ev_async_start (loop, &ready_watcher);

     eio_init (want_poll, 0);
   }

For most other event loops, you would typically use a pipe - the event
loop should be told to wait for read readiness on the read end. In
C<want_poll> you would write a single byte, in C<done_poll> you would try
to read that byte, and in the callback for the read end, you would call
C<eio_poll>.

You don't have to take special care in the case C<eio_poll> doesn't handle
all requests, as the done callback will not be invoked, so the event loop
will still signal readiness for the pipe until I<all> results have been
processed.

=head1 HIGH LEVEL REQUEST API

Libeio has both a high-level API, which consists of calling a request
function with a callback to be called on completion, and a low-level API
where you fill out request structures and submit them.

This section describes the high-level API.

=head2 REQUEST SUBMISSION AND RESULT PROCESSING

You submit a request by calling the relevant C<eio_TYPE> function with the
required parameters, a callback of type C<int (*eio_cb)(eio_req *req)>
(called C<eio_cb> below) and a freely usable C<void *data> argument.

The return value will either be 0, in case something went really wrong
(which can basically only happen on very fatal errors, such as C<malloc>
returning 0, which is rather unlikely), or a pointer to the newly-created
and submitted C<eio_req *>.

The callback will be called with an C<eio_req *> which contains the
results of the request. The members you can access inside that structure
vary from request to request, except for:

=over 4

=item C<ssize_t result>

This contains the result value from the call (usually the same as the
syscall of the same name).

=item C<int errorno>

This contains the value of C<errno> after the call.

=item C<void *data>

The C<void *data> member simply stores the value of the C<data> argument.

=back

The return value of the callback is normally C<0>, which tells libeio to
continue normally. If a callback returns a nonzero value, libeio will
stop processing results (in C<eio_poll>) and will return the value to its
caller.

Memory areas passed to libeio must stay valid as long as a request
executes, with the exception of paths, which are being copied
internally. Any memory libeio itself allocates will be freed after the
finish callback has been called. If you want to manage all memory passed
to libeio yourself you can use the low-level API.

For example, to open a file, you could do this:

   static int
   file_open_done (eio_req *req)
   {
     if (req->result < 0)
       {
         /* open() returned -1 */
         errno = req->errorno;
         perror ("open");
       }
     else
       {
         int fd = req->result;
         /* now we have the new fd in fd */
       }

     return 0;
   }

   /* the first three arguments are passed to open(2) */
   /* the remaining are priority, callback and data */
   if (!eio_open ("/etc/passwd", O_RDONLY, 0, 0, file_open_done, 0))
     abort (); /* something went wrong, we will all die!!! */

Note that you additionally need to call C<eio_poll> when the C<want_poll>
callback indicates that requests are ready to be processed.

=head2 CANCELLING REQUESTS

Sometimes the need for a request goes away before the request is
finished. In that case, one can cancel the request by a call to
C<eio_cancel>:

=over 4

=item eio_cancel (eio_req *req)

Cancel the request. If the request is currently executing it might still
continue to execute, and in other cases it might still take a while till
the request is cancelled.

Even if cancelled, the finish callback will still be invoked - the
callbacks of all cancellable requests need to check whether the request
has been cancelled by calling C<EIO_CANCELLED (req)>:

   static int
   my_eio_cb (eio_req *req)
   {
     if (EIO_CANCELLED (req))
       return 0;

     /* ... process the results ... */
     return 0;
   }

In addition, cancelled requests will either have C<< req->result >> set to
C<-1> and C<errno> to C<ECANCELED>, or otherwise they were successfully
executed despite being cancelled (e.g. when they have already been
executed at the time they were cancelled).

=back

=head2 AVAILABLE REQUESTS

The following request functions are available. I<All> of them return the
C<eio_req *> on success and C<0> on failure, and I<all> of them have the
same three trailing arguments: C<pri>, C<cb> and C<data>. The C<cb> is
mandatory, but in most cases, you pass in C<0> as C<pri> and C<0> or some
custom data value as C<data>.

=head3 POSIX API WRAPPERS

These requests simply wrap the POSIX call of the same name, with the same
arguments. If a function is not implemented by the OS and cannot be emulated
in some way, then all of these return C<-1> and set C<errorno> to C<ENOSYS>.

=over 4

=item eio_open (const char *path, int flags, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_truncate (const char *path, off_t offset, int pri, eio_cb cb, void *data)

=item eio_chown (const char *path, uid_t uid, gid_t gid, int pri, eio_cb cb, void *data)

=item eio_chmod (const char *path, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_mkdir (const char *path, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_rmdir (const char *path, int pri, eio_cb cb, void *data)

=item eio_unlink (const char *path, int pri, eio_cb cb, void *data)

=item eio_utime (const char *path, eio_tstamp atime, eio_tstamp mtime, int pri, eio_cb cb, void *data)

=item eio_mknod (const char *path, mode_t mode, dev_t dev, int pri, eio_cb cb, void *data)

=item eio_link (const char *path, const char *new_path, int pri, eio_cb cb, void *data)

=item eio_symlink (const char *path, const char *new_path, int pri, eio_cb cb, void *data)

=item eio_rename (const char *path, const char *new_path, int pri, eio_cb cb, void *data)

=item eio_mlock (void *addr, size_t length, int pri, eio_cb cb, void *data)

=item eio_close (int fd, int pri, eio_cb cb, void *data)

=item eio_sync (int pri, eio_cb cb, void *data)

=item eio_fsync (int fd, int pri, eio_cb cb, void *data)

=item eio_fdatasync (int fd, int pri, eio_cb cb, void *data)

=item eio_futime (int fd, eio_tstamp atime, eio_tstamp mtime, int pri, eio_cb cb, void *data)

=item eio_ftruncate (int fd, off_t offset, int pri, eio_cb cb, void *data)

=item eio_fchmod (int fd, mode_t mode, int pri, eio_cb cb, void *data)

=item eio_fchown (int fd, uid_t uid, gid_t gid, int pri, eio_cb cb, void *data)

=item eio_dup2 (int fd, int fd2, int pri, eio_cb cb, void *data)

These have the same semantics as the syscall of the same name, their
return value is available as C<< req->result >> later.

=item eio_read (int fd, void *buf, size_t length, off_t offset, int pri, eio_cb cb, void *data)

=item eio_write (int fd, void *buf, size_t length, off_t offset, int pri, eio_cb cb, void *data)

These two requests are called C<read> and C<write>, but actually wrap
C<pread> and C<pwrite>. On systems that lack these calls (such as cygwin),
libeio uses lseek/read_or_write/lseek and a mutex to serialise the
requests, so all these requests run serially and do not disturb each
other. However, they still disturb the file offset while they run, so it's
not safe to call these functions concurrently with non-libeio functions on
the same fd on these systems.

Not surprisingly, pread and pwrite are not thread-safe on Darwin (OS/X),
so it is advised not to submit multiple requests on the same fd on this
horrible pile of garbage.

=item eio_mlockall (int flags, int pri, eio_cb cb, void *data)

Like C<mlockall>, but the flag value constants are called
C<EIO_MCL_CURRENT> and C<EIO_MCL_FUTURE>.

=item eio_msync (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Just like msync, except that the flag values are called C<EIO_MS_ASYNC>,
C<EIO_MS_INVALIDATE> and C<EIO_MS_SYNC>.

=item eio_readlink (const char *path, int pri, eio_cb cb, void *data)

If successful, the path read by C<readlink(2)> can be accessed via C<<
req->ptr2 >> and is I<NOT> null-terminated, with the length specified as
C<< req->result >>.

   if (req->result >= 0)
     {
       char *target = strndup ((char *)req->ptr2, req->result);

       free (target);
     }

=item eio_realpath (const char *path, int pri, eio_cb cb, void *data)

Similar to the realpath libc function, but unlike that one, C<<
req->result >> is C<-1> on failure, or the length of the returned path in
C<ptr2> (which is not 0-terminated) on success - this is similar to
readlink.

=item eio_stat (const char *path, int pri, eio_cb cb, void *data)

=item eio_lstat (const char *path, int pri, eio_cb cb, void *data)

=item eio_fstat (int fd, int pri, eio_cb cb, void *data)

Stats a file - if C<< req->result >> indicates success, then you can
access the C<struct stat>-like structure via C<< req->ptr2 >>:

   EIO_STRUCT_STAT *statdata = (EIO_STRUCT_STAT *)req->ptr2;

=item eio_statvfs (const char *path, int pri, eio_cb cb, void *data)

=item eio_fstatvfs (int fd, int pri, eio_cb cb, void *data)

Stats a filesystem - if C<< req->result >> indicates success, then you can
access the C<struct statvfs>-like structure via C<< req->ptr2 >>:

   EIO_STRUCT_STATVFS *statdata = (EIO_STRUCT_STATVFS *)req->ptr2;

=back

=head3 READING DIRECTORIES

Reading directories sounds simple, but can be rather demanding, especially
if you want to do stuff such as traversing a directory hierarchy or
processing all files in a directory. Libeio can assist with these complex
tasks with its C<eio_readdir> call.

=over 4

=item eio_readdir (const char *path, int flags, int pri, eio_cb cb, void *data)

This is a very complex call. It basically reads through a whole directory
(via the C<opendir>, C<readdir> and C<closedir> calls) and returns either
the names or an array of C<struct eio_dirent>, depending on the C<flags>
argument.

The C<< req->result >> indicates either the number of files found, or
C<-1> on error. On success, null-terminated names can be found as C<< req->ptr2 >>,
and C<struct eio_dirents>, if requested by C<flags>, can be found via C<<
req->ptr1 >>.

Here is an example that prints all the names:

   int i;
   char *names = (char *)req->ptr2;

   for (i = 0; i < req->result; ++i)
     {
       printf ("name #%d: %s\n", i, names);

       /* move to next name */
       names += strlen (names) + 1;
     }

Pseudo-entries such as F<.> and F<..> are never returned by C<eio_readdir>.

C<flags> can be any combination of:

=over 4

=item EIO_READDIR_DENTS

If this flag is specified, then, in addition to the names in C<ptr2>,
also an array of C<struct eio_dirent> is returned, in C<ptr1>. A C<struct
eio_dirent> looks like this:

   struct eio_dirent
   {
     int nameofs; /* offset of null-terminated name string in (char *)req->ptr2 */
     unsigned short namelen; /* size of filename without trailing 0 */
     unsigned char type; /* one of EIO_DT_* */
     signed char score; /* internal use */
     ino_t inode; /* the inode number, if available, otherwise unspecified */
   };

The only members you normally would access are C<nameofs>, which is the
byte-offset from C<ptr2> to the start of the name, C<namelen> and C<type>.

C<type> can be one of:

C<EIO_DT_UNKNOWN> - if the type is not known (very common) and you have to C<stat>
the name yourself if you need to know,
one of the "standard" POSIX file types (C<EIO_DT_REG>, C<EIO_DT_DIR>, C<EIO_DT_LNK>,
C<EIO_DT_FIFO>, C<EIO_DT_SOCK>, C<EIO_DT_CHR>, C<EIO_DT_BLK>)
or some OS-specific type (currently
C<EIO_DT_MPC> - multiplexed char device (v7+coherent),
C<EIO_DT_NAM> - xenix special named file,
C<EIO_DT_MPB> - multiplexed block device (v7+coherent),
C<EIO_DT_NWK> - HP-UX network special,
C<EIO_DT_CMP> - VxFS compressed,
C<EIO_DT_DOOR> - solaris door, or
C<EIO_DT_WHT>).

This example prints all names and their type:

   int i;
   struct eio_dirent *ents = (struct eio_dirent *)req->ptr1;
   char *names = (char *)req->ptr2;

   for (i = 0; i < req->result; ++i)
     {
       struct eio_dirent *ent = ents + i;
       char *name = names + ent->nameofs;

       printf ("name #%d: %s (type %d)\n", i, name, ent->type);
     }

=item EIO_READDIR_DIRS_FIRST

When this flag is specified, then the names will be returned in an order
where likely directories come first, in optimal C<stat> order. This is
useful when you need to quickly find directories, or you want to find all
directories while avoiding calling stat() on each entry.

If the system returns type information in readdir, then this is used
to find directories directly. Otherwise, likely directories are names
beginning with ".", or otherwise names with no dots, of which shorter
names are tried first.

=item EIO_READDIR_STAT_ORDER

When this flag is specified, then the names will be returned in an order
suitable for stat()'ing each one. That is, when you plan to stat()
all files in the given directory, then the returned order will likely
be fastest.

If both this flag and C<EIO_READDIR_DIRS_FIRST> are specified, then
the likely dirs come first, resulting in a less optimal stat order.

=item EIO_READDIR_FOUND_UNKNOWN

This flag should not be specified when calling C<eio_readdir>. Instead,
it is being set by C<eio_readdir> (you can access the C<flags> via C<<
req->int1 >>), when any of the C<type>'s found were C<EIO_DT_UNKNOWN>. The
absence of this flag therefore indicates that all C<type>'s are known,
which can be used to speed up some algorithms.

A typical use case would be to identify all subdirectories within a
directory - you would ask C<eio_readdir> for C<EIO_READDIR_DIRS_FIRST>. If
then this flag is I<NOT> set, then all the entries at the beginning of the
returned array of type C<EIO_DT_DIR> are the directories. Otherwise, you
should start C<stat()>'ing the entries starting at the beginning of the
array, stopping as soon as you found all directories (the count can be
deduced from the link count of the directory).

=back

=back

=head3 OS-SPECIFIC CALL WRAPPERS

These wrap OS-specific calls (usually Linux ones), and might or might not
be emulated on other operating systems. Calls that are not emulated will
return C<-1> and set C<errno> to C<ENOSYS>.

=over 4

=item eio_sendfile (int out_fd, int in_fd, off_t in_offset, size_t length, int pri, eio_cb cb, void *data)

Wraps the C<sendfile> syscall. The arguments follow the Linux version, but
libeio supports and will use similar calls on FreeBSD, HP/UX, Solaris and
Darwin.

If the OS doesn't support some sendfile-like call, or the call fails,
indicating a lack of support for the given file descriptor type (for
example, Linux's sendfile might not support file to file copies), then
libeio will emulate the call in userspace, so there are almost no
limitations on its use.

=item eio_readahead (int fd, off_t offset, size_t length, int pri, eio_cb cb, void *data)

Calls C<readahead(2)>. If the syscall is missing, then the call is
emulated by simply reading the data (currently in 64kiB chunks).

=item eio_sync_file_range (int fd, off_t offset, size_t nbytes, unsigned int flags, int pri, eio_cb cb, void *data)

Calls C<sync_file_range>. If the syscall is missing, then this is the same
as calling C<fdatasync>.

Flags can be any combination of C<EIO_SYNC_FILE_RANGE_WAIT_BEFORE>,
C<EIO_SYNC_FILE_RANGE_WRITE> and C<EIO_SYNC_FILE_RANGE_WAIT_AFTER>.

=back

=head3 LIBEIO-SPECIFIC REQUESTS

These requests are specific to libeio and do not correspond to any OS call.

=over 4

=item eio_mtouch (void *addr, size_t length, int flags, int pri, eio_cb cb, void *data)

Reads (C<flags == 0>) or modifies (C<flags == EIO_MT_MODIFY>) the given
memory area, page-wise, that is, it reads (or reads and writes back) the
first octet of every page that spans the memory area.

This can be used to page in some mmapped file, or dirty some pages. Note
that dirtying is an unlocked read-write access, so races can ensue when
some other thread modifies the data stored in that memory area.

613 |
|
|
=item eio_custom (void (*)(eio_req *) execute, int pri, eio_cb cb, void *data) |
614 |
root |
1.7 |
|
615 |
|
|
Executes a custom request, i.e., a user-specified callback. |
616 |
|
|
|
617 |
|
|
The callback gets the C<eio_req *> as parameter and is expected to read |
618 |
|
|
and modify any request-specific members. Specifically, it should set C<< |
619 |
|
|
req->result >> to the result value, just like other requests. |
620 |
|
|
|
621 |
|
|
Here is an example that simply calls C<open>, like C<eio_open>, but it |
622 |
|
|
uses the C<data> member as filename and uses a hardcoded C<O_RDONLY>. If |
623 |
|
|
you want to pass more/other parameters, you either need to pass some |
624 |
|
|
struct or so via C<data> or provide your own wrapper using the low-level |
625 |
|
|
API. |
626 |
|
|
|
627 |
|
|
static int |
628 |
|
|
my_open_done (eio_req *req) |
629 |
|
|
{ |
630 |
|
|
int fd = req->result; |
631 |
|
|
|
632 |
|
|
return 0; |
633 |
|
|
} |
634 |
|
|
|
635 |
|
|
static void |
636 |
|
|
my_open (eio_req *req) |
637 |
|
|
{ |
638 |
|
|
req->result = open (req->data, O_RDONLY); |
639 |
|
|
} |
640 |
|
|
|
641 |
|
|
eio_custom (my_open, 0, my_open_done, "/etc/passwd"); |
642 |
|
|
|
643 |
root |
1.9 |
=item eio_busy (eio_tstamp delay, int pri, eio_cb cb, void *data) |
644 |
root |
1.7 |
|
645 |
|
|
This is a a request that takes C<delay> seconds to execute, but otherwise |
646 |
|
|
does nothing - it simply puts one of the worker threads to sleep for this |
647 |
|
|
long. |
648 |
|
|
|
649 |
|
|
This request can be used to artificially increase load, e.g. for debugging |
650 |
|
|
or benchmarking reasons. |
651 |
|
|
|
652 |
root |
1.9 |
=item eio_nop (int pri, eio_cb cb, void *data) |
653 |
root |
1.7 |
|
654 |
|
|
This request does nothing, except go through the whole request cycle. This |
655 |
|
|
can be used to measure latency or in some cases to simplify code, but is |
656 |
|
|
not really of much use. |
657 |
|
|
|
658 |
|
|
=back |

=head3 GROUPING AND LIMITING REQUESTS

There is one more rather special request, C<eio_grp>. It is a very special
aio request: instead of doing something, it is a container for other eio
requests.

There are two primary use cases for this: a) bundling many requests into a
single, composite request with a definite callback and the ability to
cancel the whole request with its subrequests, and b) limiting the number
of "active" requests.

Further below you will find more discussion of these topics - first follows
the reference section detailing the request generator and other methods.

=over 4

=item eio_req *grp = eio_grp (eio_cb cb, void *data)

Creates, submits and returns a group request.

=item eio_grp_add (eio_req *grp, eio_req *req)

Adds a request to the request group.

=item eio_grp_cancel (eio_req *grp)

Cancels all requests I<in> the group, but I<not> the group request
itself. You can cancel the group request via a normal C<eio_cancel> call.

=back

#TODO

   /*************************************************************************/
   /* groups */

   eio_req *eio_grp (eio_cb cb, void *data);
   void eio_grp_feed (eio_req *grp, void (*feed)(eio_req *req), int limit);
   void eio_grp_limit (eio_req *grp, int limit);
   void eio_grp_cancel (eio_req *grp); /* cancels all sub requests but not the group */

=head1 LOW LEVEL REQUEST API

#TODO

=head1 ANATOMY AND LIFETIME OF AN EIO REQUEST

A request is represented by a structure of type C<eio_req>. To initialise
it, clear it to all zero bytes:

   eio_req req;

   memset (&req, 0, sizeof (req));

A more common way to initialise a new C<eio_req> is to use C<calloc>:

   eio_req *req = calloc (1, sizeof (*req));

In either case, libeio neither allocates, initialises nor frees the
C<eio_req> structure for you - it merely uses it.

#TODO

=head2 CONFIGURATION

The functions in this section can sometimes be useful, but the default
configuration will do in most cases, so you should skip this section on
first reading.

=over 4

=item eio_set_max_poll_time (eio_tstamp nseconds)

This causes C<eio_poll ()> to return after it has detected that it was
running for C<nseconds> seconds or longer (this number can be fractional).

This can be used to limit the amount of time spent handling eio requests,
for example, in interactive programs, you might want to limit this time to
C<0.01> seconds or so.

Note that:

a) libeio doesn't know how long your request callbacks take, so the time
spent in C<eio_poll> is up to one callback invocation longer than this
interval.

b) this is implemented by calling C<gettimeofday> after each request,
which can be costly.

c) at least one request will be handled.

=item eio_set_max_poll_reqs (unsigned int nreqs)

When C<nreqs> is non-zero, then C<eio_poll> will not handle more than
C<nreqs> requests per invocation. This is a less costly way to limit the
amount of work done by C<eio_poll> than setting a time limit.

If you know your callbacks are generally fast, you could use this to
encourage interactiveness in your programs by setting it to C<10>, C<100>
or even C<1000>.

=item eio_set_min_parallel (unsigned int nthreads)

Make sure libeio can handle at least this many requests in parallel. It
might be able to handle more.

=item eio_set_max_parallel (unsigned int nthreads)

Set the maximum number of threads that libeio will spawn.

=item eio_set_max_idle (unsigned int nthreads)

Libeio uses threads internally to handle most requests, and will start and
stop threads on demand.

This call can be used to limit the number of idle threads (threads without
work to do): libeio will keep some threads idle in preparation for more
requests, but never more than C<nthreads> threads.

In addition to this, libeio will also stop threads when they are idle for
a few seconds, regardless of this setting.

=item unsigned int eio_nthreads ()

Return the number of worker threads currently running.

=item unsigned int eio_nreqs ()

Return the number of requests currently handled by libeio. This is the
total number of requests that have been submitted to libeio, but not yet
destroyed.

=item unsigned int eio_nready ()

Returns the number of ready requests, i.e. requests that have been
submitted but have not yet entered the execution phase.

=item unsigned int eio_npending ()

Returns the number of pending requests, i.e. requests that have been
executed and have results, but have not been finished yet by a call to
C<eio_poll>.

=back

=head1 EMBEDDING

Libeio can be embedded directly into programs. This functionality is not
documented and not (yet) officially supported.

Note that, when including C<libeio.m4>, you are responsible for defining
the compilation environment (C<_LARGEFILE_SOURCE>, C<_GNU_SOURCE> etc.).

If you need to know how, check the C<IO::AIO> perl module, which does
exactly that.

=head1 COMPILETIME CONFIGURATION

These symbols, if used, must be defined when compiling F<eio.c>.

=over 4

=item EIO_STACKSIZE

This symbol governs the stack size for each eio thread. Libeio itself
was written to use very little stack space, but when using C<EIO_CUSTOM>
requests, you might want to increase this.

If this symbol is undefined (the default) then libeio will use its default
stack size (C<sizeof (long) * 4096> currently). If it is defined, but
C<0>, then the default operating system stack size will be used. In all
other cases, the value must be an expression that evaluates to the desired
stack size.

=back
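
For illustration only, one hypothetical way to define the symbol is on the
compiler command line - the compiler invocation and the value below are
examples, not recommendations; adapt them to your build system:

```shell
# give each eio thread a 1 MiB stack (value is illustrative)
cc -DEIO_STACKSIZE='(1024 * 1024)' -c eio.c

# or fall back to the operating system's default stack size
cc -DEIO_STACKSIZE=0 -c eio.c
```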
=head1 PORTABILITY REQUIREMENTS

In addition to a working ISO-C implementation, libeio relies on a few
additional extensions:

=over 4

=item POSIX threads

To be portable, this module uses threads; specifically, the POSIX threads
library must be available (and working, which partially excludes many xBSD
systems, where C<fork ()> is buggy).

=item POSIX-compatible filesystem API

This is actually a harder portability requirement: The libeio API is quite
demanding regarding POSIX API calls (symlinks, user/group management
etc.).

=item C<double> must hold a time value in seconds with enough accuracy

The type C<double> is used to represent timestamps. It is required to
have at least 51 bits of mantissa (and 9 bits of exponent), which is good
enough until at least the year 4000. This requirement is fulfilled by
implementations conforming to IEEE 754 (basically all existing ones).

=back
If you know of other additional requirements, drop me a note.


=head1 AUTHOR

Marc Lehmann <libeio@schmorp.de>.
