
Comparing libev/ev_iouring.c (file contents):
Revision 1.9 by root, Fri Dec 27 16:15:58 2019 UTC vs.
Revision 1.10 by root, Fri Dec 27 21:17:11 2019 UTC

@@ -430,12 +430,13 @@
   int fd = cqe->user_data & 0xffffffffU;
   uint32_t gen = cqe->user_data >> 32;
   int res = cqe->res;
 
   /* ignore fd removal events, if there are any. TODO: verify */
+  /* TODO: yes, this triggers */
   if (cqe->user_data == (__u64)-1)
-    abort ();//D
+    return;
 
   assert (("libev: io_uring fd must be in-bounds", fd >= 0 && fd < anfdmax));
 
   /* documentation lies, of course. the result value is NOT like
    * normal syscalls, but like linux raw syscalls, i.e. negative
@@ -487,11 +488,11 @@
 iouring_overflow (EV_P)
 {
   /* we have two options, resize the queue (by tearing down
    * everything and recreating it, or living with it
    * and polling.
-   * we implement this by resizing tghe queue, and, if that fails,
+   * we implement this by resizing the queue, and, if that fails,
    * we just recreate the state on every failure, which
    * kind of is a very inefficient poll.
    * one danger is, due to the bios toward lower fds,
    * we will only really get events for those, so
    * maybe we need a poll() fallback, after all.
@@ -509,16 +510,16 @@
   else
     {
       /* we hit the kernel limit, we should fall back to something else.
        * we can either poll() a few times and hope for the best,
        * poll always, or switch to epoll.
-       * since we use epoll anyways, go epoll.
+       * TODO: is this necessary with newer kernels?
        */
 
       iouring_internal_destroy (EV_A);
 
-      /* this should make it so that on return, we don'T call any uring functions */
+      /* this should make it so that on return, we don't call any uring functions */
       iouring_to_submit = 0;
 
       for (;;)
         {
           backend = epoll_init (EV_A_ 0);
