Applies to: Zimbra Collaboration Suite 10.1.15, 10.1.16 with external object storage (Scality, S3, OpenIO)
Summary
- A regression in ZCS 10.1.15 causes zmpurgeoldmbox to delete external blobs (Scality, S3, OpenIO) after mailbox migration, even when zimbraMailboxMoveSkipBlobs=TRUE.
- This results in permanent, irrecoverable email data loss on the destination server.
- A fix is targeted for the 10.1.17 patch release.
- Until then, do not run zmpurgeoldmbox after mailbox migrations if you use external or unified object storage.
What Happened
In ZCS 9.0, running zmpurgeoldmbox after a zmmboxmove correctly cleaned up only local metadata (MySQL + Lucene indexes) on the source server, leaving external blobs untouched. In ZCS 10.1.15, a code change inadvertently altered the logic that identifies external/centralized storage. As a result:
- zmpurgeoldmbox now deletes external blobs even without the --forceDeleteBlobs flag
- The destination server’s mailbox still references those deleted blobs
- Affected users see “missing blob” errors and lose access to their emails permanently
Am I Affected?
You are affected if all of the following are true:
- You are running ZCS 10.1.15 or 10.1.16
- You use external object storage (Scality, S3, OpenIO, or similar) as a primary or secondary (HSM) volume — especially with unified storage enabled
- You perform mailbox migrations using zmmboxmove with blob-skipping attributes (zimbraMailboxMoveSkipBlobs=TRUE or zimbraMailboxMoveSkipHsmBlobs=TRUE)
- You run zmpurgeoldmbox on the source server after migration
If you use only local (internal) storage and no external object storage, this issue does not affect you.
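As a quick triage aid, the version check above can be sketched as a small shell helper. This is a minimal sketch assuming only the version range stated in this advisory; on a live system you would feed it the version reported by `zmcontrol -v`, and additionally review your storage setup with `zmvolume -l` and `zmprov gcf zimbraMailboxMoveSkipBlobs`.

```shell
# Sketch: is this ZCS version in the affected range (10.1.15, 10.1.16)?
# The version list mirrors this advisory; update it if the range changes.
is_affected_version() {
  case "$1" in
    10.1.15|10.1.16) return 0 ;;   # affected by ZBUG-5265
    *)               return 1 ;;   # outside the affected range
  esac
}

# Example usage (on a real server: ver=$(zmcontrol -v | grep -o '10\.[0-9.]*')):
is_affected_version "10.1.15" && echo "affected" || echo "not affected"
```

Remember this is only one of the conditions: you also need external object storage and the blob-skipping migration workflow for the bug to bite.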
Behavior Comparison: ZCS 9.0 vs 10.1.15
| Scenario | ZCS 9.0 | ZCS 10.1.15 (Bug) |
| --- | --- | --- |
| External/unified storage, no --forceDeleteBlobs | Blobs preserved | Blobs DELETED |
| External/unified storage, with --forceDeleteBlobs | Blobs preserved * | Blobs DELETED |
| Internal (local) storage only | Blobs deleted (expected) | Blobs deleted (expected) |

\* In ZCS 9.0, --forceDeleteBlobs was not implemented for external stores (Bug 96149). It is being properly implemented as part of the fix.
Immediate Workaround
- Do not run zmpurgeoldmbox (or PurgeMovedMailboxRequest via SOAP) after mailbox migrations if your environment uses external object storage.
- Disable any automation or scripts that trigger zmpurgeoldmbox as part of post-migration cleanup.
- Skipping the purge leaves residual local metadata (MySQL + Lucene) on the source server. This is harmless and can be cleaned up after the patch is applied.
- zmmboxmove itself is not affected — mailbox migrations continue to work correctly. Only the post-migration purge step is problematic.
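To help disable post-migration automation, a simple search for anything that still invokes zmpurgeoldmbox can be sketched as below. The directory list is an example only; adjust it to wherever your environment keeps cron jobs and migration scripts.

```shell
# Sketch: list files under the given directories that reference
# zmpurgeoldmbox, so the jobs can be commented out or removed.
find_purge_automation() {
  grep -rl 'zmpurgeoldmbox' "$@" 2>/dev/null
}

# Example usage (paths are illustrative):
# find_purge_automation /etc/cron.d /etc/cron.daily /var/spool/cron /opt/zimbra/scripts
```

Also review any external orchestration (Ansible playbooks, migration runbooks) that issues PurgeMovedMailboxRequest over SOAP, which this text search will not catch.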
The Fix
The corrected behavior in 10.1.17 will be:
| Storage Type | Without --forceDeleteBlobs | With --forceDeleteBlobs |
| --- | --- | --- |
| Internal (local) | Blobs deleted | Blobs deleted |
| External (non-unified) | Blobs preserved | Blobs deleted |
| External (unified) | Blobs preserved | Blobs deleted |

- External blobs will only be deleted with an explicit --forceDeleteBlobs flag.
- Targeted release: ZCS 10.1.17 patch.
- If you need an early-access build, contact Zimbra Support.
What To Do Next
- Immediately stop running zmpurgeoldmbox after mailbox migrations in any environment with external storage. Disable related automation.
- Audit recent migrations: if zmpurgeoldmbox was already run, verify blob integrity on the destination server using:

  ```
  zmprov gmi user@example.com
  zmblobchk -m <mailboxId> -v --output-used-blobs start
  ```

  If the output shows "blob not found" errors for external locators (containing @@), those blobs have been deleted.
- If data loss has occurred, check whether your object storage provider supports versioning or soft-delete; if so, there may be a recovery path.
- Plan for the 10.1.17 patch: after upgrading, you can safely resume zmpurgeoldmbox and clean up residual metadata from the workaround period.
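When auditing many mailboxes, it can help to save the zmblobchk output to a file and count the external-locator failures. The sketch below assumes error lines that contain both the phrase "blob not found" and an "@@" locator, as described above; the exact log format is an assumption, so match the pattern to your actual output before relying on it.

```shell
# Sketch: count lines in saved zmblobchk output that report a missing blob
# with an external locator (one containing "@@").
# Assumed line shape (illustrative): "blob not found: item 257, locator=1@@bucket/2/257-1.msg"
count_missing_external_blobs() {
  grep -c 'blob not found.*@@' "$1"
}

# Example usage:
# zmblobchk -m 42 -v start > /tmp/blobchk-42.log 2>&1
# count_missing_external_blobs /tmp/blobchk-42.log
```

A nonzero count for a mailbox means some of its external blobs were purged; for those accounts, pursue the provider-side versioning or soft-delete recovery path noted above (for S3, `aws s3api get-bucket-versioning --bucket <bucket>` shows whether versioning is enabled).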
Questions?
If you have questions, please contact Zimbra Support.
Affected versions: ZCS 10.1.15 through 10.1.16 (all editions)
Fix version: ZCS 10.1.17 (targeted)
Tracking reference: ZBUG-5265
Severity: Critical — potential data loss
