RBD-638: rbd export-diff fails with "No such file or directory" when using a remote pool
Triage note: the most likely cause appears to be a path-translation bug in the CLI when handling a remote pool spec. The following workarounds are in place (a command sketch follows the list):

- Copy the destination image locally first, then run `export-diff`.
- Use `rbd diff` plus a manual transfer of the changed extents.
- Map the remote image as a block device and operate on `/dev/rbdX`.
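A minimal sketch of these workarounds, assuming hypothetical names throughout: image `rbd/vm-disk-01`, snapshots `backup-prev`/`backup-cur`, and a second cluster whose config is reachable on `backup-01` via `--cluster backup`; none of these names come from the report.

```bash
# 1. Write the diff to a local file first, then transfer it and apply it
#    on the remote side with import-diff (all names are placeholders).
rbd export-diff --from-snap backup-prev rbd/vm-disk-01@backup-cur /tmp/vm-disk-01.diff
scp /tmp/vm-disk-01.diff backup-01:/tmp/
ssh backup-01 'rbd --cluster backup import-diff /tmp/vm-disk-01.diff rbd/vm-disk-01'

# 2. List the changed extents locally and drive a manual transfer from them.
rbd diff --from-snap backup-prev rbd/vm-disk-01@backup-cur

# 3. Map the image as a kernel block device and read from /dev/rbdX directly.
rbd map rbd/vm-disk-01        # prints the device path, e.g. /dev/rbd0
# ... copy from the device with dd/rsync as needed ...
rbd unmap /dev/rbd0
```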
| Item | Details |
|------|---------|
| Bug ID | RBD-638 |
| Title | rbd export-diff fails with "No such file or directory" when using a remote pool |
| Reported By | alice@example.com (2024-11-03) |
| Component | Ceph → RBD → CLI |
| Severity | High (blocks backup automation) |
| Environment | Ceph Octopus 15.2.7 (cluster ID: ceph-qa-01); RBD client: rbd-tool 15.2.7-0; OS: Ubuntu 22.04 LTS (kernel 6.5); exporter node: backup-01 (connected via CephFS) |
| Reproducibility | Consistent (≈100 % on the test cluster) |
| Current Status | Open; triaged, awaiting more logs |

1️⃣ Summary of the Issue

The `rbd export-diff` command works as expected when the source and destination pools reside on the same Ceph cluster, but fails with "No such file or directory" when the destination pool is on a different Ceph cluster (or a remote CephFS mount).
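A hedged reproduction sketch of the two cases described above; the image spec `rbd/vm-disk-01@snap1` and the mount point `/mnt/remote-cephfs` are placeholders, not taken from the report.

```bash
# Same-cluster case: the destination is a plain local path -- works.
rbd export-diff rbd/vm-disk-01@snap1 /var/backups/vm-disk-01.diff

# Remote case: the destination lives on the remote CephFS mount -- fails
# with "No such file or directory" (per the bug title).
rbd export-diff rbd/vm-disk-01@snap1 /mnt/remote-cephfs/vm-disk-01.diff
```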
Please let us know if additional information is needed.