
Comparing libeio/eio.pod (file contents):
Revision 1.15 by root, Tue Jul 5 16:57:41 2011 UTC vs.
Revision 1.20 by sf-exg, Thu Jul 7 17:35:52 2011 UTC

Unlike the name component C<stamp> might indicate, it is also used for
time differences throughout libeio.

=head2 FORK SUPPORT

Calling C<fork ()> is fully supported by this module - but you must not
rely on this. It is currently implemented in these steps:

   1. wait till all requests in "execute" state have been handled
      (basically requests that are already handed over to the kernel).
   2. fork
   3. in the parent, continue business as usual, done
   4. in the child, destroy all ready and pending requests and free the
      memory used by the worker threads. This gives you a fully empty
      libeio queue.

Note, however, that since libeio uses threads, the above guarantee doesn't
cover your libc: for example, malloc and other libc functions are not
fork-safe, so there is very little you can do after a fork, and in fact,
the above might crash, and thus change.
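
Given these caveats, a conservative pattern is to treat a freshly forked
child as good for little more than calling C<exec> (or C<_exit>), and to
keep all libeio usage in the parent. A minimal sketch (the program run by
the child is only a placeholder):

  pid_t pid = fork ();

  if (pid == 0)
    {
      /* child: libeio's queue is empty here (see above), but do as
         little as possible - ideally just exec another program */
      execl ("/bin/true", "true", (char *)0);
      _exit (127);
    }

  /* parent: continue to use libeio as usual */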

=head1 INITIALISATION/INTEGRATION

Before you can call any eio functions you first have to initialise the
library.

If C<eio_poll ()> is configured to not handle all results in one go
(i.e. it returns C<-1>) then you should start an idle watcher that calls
C<eio_poll> until it returns something C<!= -1>.

A full-featured connector between libeio and libev would look as follows
(if C<eio_poll> is handling all requests, it can of course be simplified a
lot by removing the idle watcher logic):

  static struct ev_loop *loop;
  static ev_idle repeat_watcher;
  static ev_async ready_watcher;

  /* idle watcher callback, only used when eio_poll */
  /* didn't handle all results in one call */
  static void
  repeat (EV_P_ ev_idle *w, int revents)
  {
    if (eio_poll () != -1)
      ev_idle_stop (EV_A_ w);
  }

  /* eio has some results, process them */
  static void
  ready (EV_P_ ev_async *w, int revents)
  {
    if (eio_poll () == -1)
      ev_idle_start (EV_A_ &repeat_watcher);
  }

  /* wake up the event loop */
  static void
  want_poll (void)
  {
    ev_async_send (loop, &ready_watcher);
  }

  void
  my_init_eio ()
  {
    loop = EV_DEFAULT;

    ev_idle_init (&repeat_watcher, repeat);
    ev_async_init (&ready_watcher, ready);
    ev_async_start (loop, &ready_watcher);

    eio_init (want_poll, 0);
  }

For most other event loops, you would typically use a pipe - the event
loop should be told to wait for read readiness on the read end. In
C<want_poll> you would write a single byte, in C<done_poll> you would try
to read that byte, and in the callback for the read end, you would call
C<eio_poll>.

You don't have to take special care in the case C<eio_poll> doesn't handle
all requests, as the done callback will not be invoked, so the event loop
will still signal readiness for the pipe until I<all> results have been
processed.
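
A rough sketch of this pipe-based approach might look as follows (how the
read end gets registered with your event loop is left out, error handling
is omitted, and C<pipe_fds> and C<on_pipe_readable> are just illustrative
names):

  static int pipe_fds [2];

  static void
  want_poll (void)
  {
    char c = 0;
    write (pipe_fds [1], &c, 1); /* wake up the event loop */
  }

  static void
  done_poll (void)
  {
    char c;
    read (pipe_fds [0], &c, 1); /* all results handled, clear readiness */
  }

  /* called by your event loop when pipe_fds [0] becomes readable */
  static void
  on_pipe_readable (void)
  {
    eio_poll (); /* the byte is only consumed in done_poll, see above */
  }

  static void
  my_init_eio (void)
  {
    pipe (pipe_fds);
    /* register pipe_fds [0] for read readiness with your event loop here */
    eio_init (want_poll, done_poll);
  }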

=head1 HIGH LEVEL REQUEST API

Libeio has both a high-level API, which consists of calling a request

  }

  /* the first three arguments are passed to open(2) */
  /* the remaining are priority, callback and data */
  if (!eio_open ("/etc/passwd", O_RDONLY, 0, 0, file_open_done, 0))
    abort (); /* something went wrong, we will all die!!! */

Note that you additionally need to call C<eio_poll> when the C<want_cb>
indicates that requests are ready to be processed.

=head2 CANCELLING REQUESTS

Sometimes the need for a request goes away before the request is
finished. In that case, one can cancel the request by a call to
C<eio_cancel>:

=over 4

=item eio_cancel (eio_req *req)

Cancel the request (and all its subrequests). If the request is currently
executing it might still continue to execute, and in other cases it might
still take a while till the request is cancelled.

Even if cancelled, the finish callback will still be invoked - the
callbacks of all cancellable requests need to check whether the request
has been cancelled by calling C<EIO_CANCELLED (req)>:

  static int
  my_eio_cb (eio_req *req)
  {
    if (EIO_CANCELLED (req))
      return 0;

    /* ... otherwise process the result as usual ... */
    return 0;
  }

In addition, cancelled requests will I<either> have C<< req->result >>
set to C<-1> and C<errno> to C<ECANCELED>, or I<otherwise> they were
successfully executed, despite being cancelled (e.g. when they have
already been executed at the time they were cancelled).

C<EIO_CANCELLED> is still true for requests that have successfully
executed, as long as C<eio_cancel> was called on them at some point.
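
One way to use this is to remember the request pointer and clear it in
the callback, cancelling only while the request is still outstanding. A
short sketch (C<open_req> and C<my_open_done> are just example names):

  static eio_req *open_req;

  static int
  my_open_done (eio_req *req)
  {
    open_req = 0; /* the request has finished, forget it */

    if (EIO_CANCELLED (req))
      return 0;

    /* ... use req->result ... */
    return 0;
  }

  /* submit the request */
  open_req = eio_open ("/etc/passwd", O_RDONLY, 0, 0, my_open_done, 0);

  /* later, if the result is no longer needed and the request
     has not finished yet, cancel it */
  if (open_req)
    eio_cancel (open_req);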

=back

=head2 AVAILABLE REQUESTS

The following request functions are available. I<All> of them return the
C<eio_req *> on success and C<0> on failure, and I<all> of them have the

=item eio_fstat (int fd, int pri, eio_cb cb, void *data)

Stats a file - if C<< req->result >> indicates success, then you can
access the C<struct stat>-like structure via C<< req->ptr2 >>:

  EIO_STRUCT_STAT *statdata = (EIO_STRUCT_STAT *)req->ptr2;

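A complete callback for such a request might look like this (a minimal
sketch - C<fd> is assumed to be an open file descriptor, and the
C<printf> is only for illustration):

  static int
  fstat_done (eio_req *req)
  {
    if (req->result < 0)
      return 0; /* stat failed */

    EIO_STRUCT_STAT *statdata = (EIO_STRUCT_STAT *)req->ptr2;

    printf ("file size: %ld bytes\n", (long)statdata->st_size);
    return 0;
  }

  eio_fstat (fd, 0, fstat_done, 0);
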
=item eio_statvfs (const char *path, int pri, eio_cb cb, void *data)

=item eio_fstatvfs (int fd, int pri, eio_cb cb, void *data)

Stats a filesystem - if C<< req->result >> indicates success, then you can
access the C<struct statvfs>-like structure via C<< req->ptr2 >>:

  EIO_STRUCT_STATVFS *statdata = (EIO_STRUCT_STATVFS *)req->ptr2;

=back

=head3 READING DIRECTORIES

Reading directories sounds simple, but can be rather demanding, especially
if you want to do stuff such as traversing a directory hierarchy or
processing all files in a directory. Libeio can assist with these complex
tasks through its C<eio_readdir> call.

=over 4

=item eio_readdir (const char *path, int flags, int pri, eio_cb cb, void *data)

If this flag is specified, then, in addition to the names in C<ptr2>,
also an array of C<struct eio_dirent> is returned, in C<ptr1>. A C<struct
eio_dirent> looks like this:

  struct eio_dirent
  {
    int nameofs;            /* offset of null-terminated name string in (char *)req->ptr2 */
    unsigned short namelen; /* size of filename without trailing 0 */
    unsigned char type;     /* one of EIO_DT_* */
    signed char score;      /* internal use */
    ino_t inode;            /* the inode number, if available, otherwise unspecified */
  };

The only members you normally would access are C<nameofs>, which is the
byte-offset from C<ptr2> to the start of the name, C<namelen> and C<type>.

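For example, a callback for an C<eio_readdir> request issued with the
C<EIO_READDIR_DENTS> flag could walk the entries like this (a sketch,
assuming a successful C<< req->result >> holds the number of entries
returned):

  static int
  readdir_done (eio_req *req)
  {
    if (req->result < 0)
      return 0;

    struct eio_dirent *ents = (struct eio_dirent *)req->ptr1;
    char *names = (char *)req->ptr2;
    int i;

    for (i = 0; i < req->result; ++i)
      {
        struct eio_dirent *ent = ents + i;
        const char *name = names + ent->nameofs;

        if (ent->type == EIO_DT_DIR)
          printf ("subdirectory: %s\n", name);
      }

    return 0;
  }

  eio_readdir ("/etc", EIO_READDIR_DENTS, 0, readdir_done, 0);
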
C<type> can be one of:

When this flag is specified, then the names will be returned in an order
suitable for stat()'ing each one. That is, when you plan to stat()
all files in the given directory, then the returned order will likely
be fastest.

If both this flag and C<EIO_READDIR_DIRS_FIRST> are specified, then the
likely directories come first, resulting in a less optimal stat order.

=item EIO_READDIR_FOUND_UNKNOWN

This flag should not be specified when calling C<eio_readdir>. Instead,
it is being set by C<eio_readdir> (you can access the C<flags> via
C<< req->int1 >>) when any of the C<type>'s found were C<EIO_DT_UNKNOWN>.
The absence of this flag therefore indicates that all C<type>'s are known,
which can be used to speed up some algorithms.

A typical use case would be to identify all subdirectories within a
directory - you would ask C<eio_readdir> for C<EIO_READDIR_DIRS_FIRST>. If
then this flag is I<NOT> set, then all the entries at the beginning of the

  eio_custom (my_open, 0, my_open_done, "/etc/passwd");

=item eio_busy (eio_tstamp delay, int pri, eio_cb cb, void *data)

This is a request that takes C<delay> seconds to execute, but otherwise
does nothing - it simply puts one of the worker threads to sleep for this
long.

This request can be used to artificially increase load, e.g. for debugging
or benchmarking reasons.
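
For example, to simulate a slow request while testing (a sketch - the
callback name is arbitrary):

  static int
  busy_done (eio_req *req)
  {
    /* the artificial delay is over */
    return 0;
  }

  /* keep one worker thread busy for half a second */
  eio_busy (0.5, 0, busy_done, 0);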

There are two primary use cases for this: a) bundle many requests into a
single, composite, request with a definite callback and the ability to
cancel the whole request with its subrequests and b) limiting the number
of "active" requests.

Further below you will find more discussion of these topics - first
follows the reference section detailing the request generator and other
methods.

=over 4

=item eio_req *grp = eio_grp (eio_cb cb, void *data)

Creates, submits and returns a group request.

=item eio_grp_add (eio_req *grp, eio_req *req)

Adds a request to the request group.

=item eio_grp_cancel (eio_req *grp)

Cancels all requests I<in> the group, but I<not> the group request
itself. You can cancel the group request via a normal C<eio_cancel> call.

=back
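
Putting these together, a group that bundles two file requests might be
set up roughly like this (a sketch - it assumes that requests created
with the high-level API can be added to a group right after creation,
before the next C<eio_poll>; C<file_open_done> is the callback from the
earlier open example, and error checking of the C<eio_open> return values
is omitted):

  static int
  grp_done (eio_req *grp)
  {
    if (EIO_CANCELLED (grp))
      return 0;

    /* called once all subrequests have finished */
    return 0;
  }

  eio_req *grp = eio_grp (grp_done, 0);

  eio_grp_add (grp, eio_open ("/etc/passwd", O_RDONLY, 0, 0, file_open_done, 0));
  eio_grp_add (grp, eio_open ("/etc/group",  O_RDONLY, 0, 0, file_open_done, 0));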

  /* groups */

  eio_req *eio_grp (eio_cb cb, void *data);
  void eio_grp_feed (eio_req *grp, void (*feed)(eio_req *req), int limit);
  void eio_grp_limit (eio_req *grp, int limit);
  void eio_grp_cancel (eio_req *grp); /* cancels all sub requests but not the group */

=back

=head1 ANATOMY AND LIFETIME OF AN EIO REQUEST

A request is represented by a structure of type C<eio_req>. To initialise
it, clear it to all zero bytes:

  eio_req req;

  memset (&req, 0, sizeof (req));

A more common way to initialise a new C<eio_req> is to use C<calloc>:

  eio_req *req = calloc (1, sizeof (*req));

In either case, libeio neither allocates, initialises nor frees the
C<eio_req> structure for you - it merely uses it.
for example, in interactive programs, you might want to limit this time to
C<0.01> seconds or so.

Note that:

=over 4

=item a) libeio doesn't know how long your request callbacks take, so the
time spent in C<eio_poll> is up to one callback invocation longer than
this interval.

=item b) this is implemented by calling C<gettimeofday> after each
request, which can be costly.

=item c) at least one request will be handled.

=back
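
For example, an interactive program could simply do this once at startup
(a sketch):

  /* spend at most ~10ms handling results per eio_poll invocation */
  eio_set_max_poll_time (0.01);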

=item eio_set_max_poll_reqs (unsigned int nreqs)

When C<nreqs> is non-zero, then C<eio_poll> will not handle more than
C<nreqs> requests per invocation. This is a less costly way to limit the
