Export endpoint does not work in an acceptable time
Good day! I’m facing an issue with the export endpoint. I’m trying to use it to transfer users together with their passwords between two organizations within Zitadel. I was able to export 1–2 users from the test organization, but exporting users in production (>20,000) results in a timeout — both with a 30-minute limit and with a 60-minute limit. Because of this, I cannot complete the transfer.
I also noticed that the responseOutput parameter has no effect on the endpoint’s behavior. I tried exporting to S3 using s3Output as well — with valid credentials, no new file appears in the bucket, no error occurs (not even in Zitadel logs), and the export still returns data in the HTTP response body (in the test organization).
Is there any solution for this? Or are there any alternative ways to move users from one organization to another without changing their passwords?
Hi @notfoundium, I'm an AI assistant for ZITADEL. While a team member reviews this question, I'll see if I can find relevant sources for your question.
Answer
I wasn't able to find a direct answer, here are some relevant sources:
1. Export Data: Describes the export API, including parameters like s3Output and responseOutput, but does not address large exports or timeouts.
2. Migrate from ZITADEL: Outlines migration/export/import flows and parameters for org/user data, but does not cover timeout or batching for large user sets.
Feel free to tag @Ask Inkeep with additional questions.
Got it. Tagging @Rajat for additional help.
hey @notfoundium thanks for your question.
May I know what version of Zitadel you are on and how you are hosting it?
May I also know the steps you followed for the export, or any guide that you followed? What timeout were you giving?
You can share all the commands you've used and I will help you with that.
Hello @Rajat
Current Zitadel version - 4.1.0 (Self-hosted).
The only thing I did was send a request to the export endpoint:
https://zitadel.com/docs/apis/resources/admin/admin-service-export-data
POST https://$CUSTOM-DOMAIN/admin/v1/export
I also tried exporting with the following request body:
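For context, a request body using s3Output looks roughly like the sketch below. All values are placeholders, and the nested field names are my reading of the Export Data API reference, so double-check the exact schema there:
```json
{
  "orgIds": ["<ORG_ID>"],
  "withPasswords": true,
  "withOtp": true,
  "timeout": "30m",
  "s3Output": {
    "bucket": "my-export-bucket",
    "path": "zitadel/export.json",
    "endpoint": "s3.eu-central-1.amazonaws.com",
    "accessKeyId": "<ACCESS_KEY_ID>",
    "secretAccessKey": "<SECRET_ACCESS_KEY>",
    "ssl": true
  }
}
```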
The S3 credentials used are definitely correct, but even when they are invalid, no error occurs. In the test organization the response still successfully returned the org with its 2 users, and no file was created in S3.
In general, I followed the advice that was generated by the AI in this thread:
https://discordapp.com/channels/927474939156643850/1420362949679386714
@Rajat
Apologies for the repeated mentions. At the moment, this is a critical request from an important customer running in production. Could you please let us know whether there are any possible ways to achieve this in Zitadel?
If it is not technically feasible, that’s perfectly fine — we’ll explore alternative solutions for user migration.
hey @notfoundium I will raise this internally and will get back to you. Thanks
Hey @notfoundium , afaik the Export Data endpoint /admin/v1/export should work on Zitadel v4.1.0. There was new functionality (related to user metadata) added in v4.2.0 that temporarily broke this export data endpoint, and it was fixed in v4.3.0. But since you are running an earlier version, this shouldn't affect you.
I personally haven't tested with an S3 bucket; this functionality is old (it was meant for users migrating from Zitadel v1 to v2), so something might have changed in AWS. Have you tried a local export using the localOutput option? I will ask around to see if we have any test results to understand how long it should take to export 20k users and will get back to you on that. Are all users from one single organization?
Hello, @Matías
Thanks for the quick reply! I'll try the option with localOutput then. In the documentation for
https://$CUSTOM-DOMAIN/admin/v1/export, this parameter isn't described in detail. Is it just the absolute path to a file within the Zitadel instance's local environment?
There’s one more nuance: it seems that the standard Zitadel Docker image doesn’t include a shell, so accessing it directly might be tricky. This is further complicated by the fact that it’s deployed in Kubernetes as a separate Pod.
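As a rough sketch (assuming the ZITADEL container in the Pod is named zitadel and a localOutput path like /tmp/export.json), an ephemeral debug container could be one way to reach the file despite the shell-less image:
```bash
# Attach an ephemeral busybox container to the ZITADEL pod; --target shares the
# target container's namespaces even though the ZITADEL image ships no shell.
kubectl debug -it <zitadel-pod> --image=busybox --target=zitadel -- sh

# From the debug shell, the target container's filesystem is reachable through
# /proc/<pid>/root of the ZITADEL process (often PID 1), e.g.:
#   cat /proc/1/root/tmp/export.json
```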
To answer your last question: yes, all 20k+ users belong to a single organization.
Hi @notfoundium, I followed the example from this doc exactly: https://zitadel.com/docs/guides/migrate/sources/zitadel#export-to-file
I did a cURL request as shown, from my computer terminal. I changed directory (cd) to the folder where I wanted the file to be downloaded, and used the -o flag to specify the output file name.
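The command has roughly this shape (domain, token, and body values below are placeholders rather than the literal command from the guide):
```bash
# Export the organizations with passwords and OTP secrets, writing the JSON
# response body to a local file via -o.
curl -X POST "https://$CUSTOM_DOMAIN/admin/v1/export" \
  -H "Authorization: Bearer $PAT" \
  -H "Content-Type: application/json" \
  -d '{"orgIds": [], "withPasswords": true, "withOtp": true, "timeout": "30m"}' \
  -o export.json
```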
@Matías, thanks for the reply! This guide uses a curl POST request, and even without the localOutput parameter. In general, that's exactly the problem: the POST request doesn't complete for 20k users within a reasonable amount of time (increasing the HTTP request timeout beyond an hour doesn't seem like a good solution). Therefore, if possible, the export result shouldn't be received directly in the HTTP response; ideally, it should persist even after a timeout. That's why I initially came up with the idea of using S3, even though it turned out not to work.
@Matías, I have an update on this topic. In our stage environment (a bit over 5k users), I tried performing the export via curl, and it completed successfully in less than a minute.
However, attempting the same operation in production, which currently has over 20k users (around 28k at the moment), was unsuccessful. The request was hanging for more than 10 minutes (pic 1) and eventually returned an error (pic 2). Both environments are running version 4.1.0 (self-hosted), and the production server has significantly more resources.
I also created a GCS bucket and specified it as the export file destination (pic 3) in the stage environment. The export completed successfully, but no new file appeared in the bucket and no error was returned. The service account has the “Storage Object Creator” role, as described in the documentation, and the service account JSON was fully Base64-encoded.



Hi @notfoundium, thanks for sharing those details.
Based on the above, the prod error you saw earlier (grpc: received message larger than max (… vs 4194304)) is the canonical gRPC 4-MB receive cap being hit when the export is returned inline. That explains why stage (~5k users) works and prod (~28k) does not when the server tries to send the export body back.
Unfortunately, I've checked the export.go handler, and it seems like there is no code path at all in that handler that looks at
response_output, gcs_output, or s3_output, and there’s no writer/uploader to GCS/S3. The ExportData RPC simply builds the DataOrg structs and returns them in the RPC response. I don't know if this was planned and never implemented, or if it was removed at some point, but the functionality seems to be missing.
The only option I can think of now is to try using grpcurl to bypass the HTTP gateway and call gRPC directly with a client that raises the receive limit. For example:
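The exact command isn't preserved here, but a sketch along these lines should work, assuming the admin API's zitadel.admin.v1.AdminService/ExportData method and a personal access token for auth:
```bash
# Call the ExportData RPC directly, raising the client-side receive cap far above
# the default 4 MB so the whole export fits in a single response message.
# If the server doesn't expose gRPC reflection, point grpcurl at the admin API
# proto files instead via -import-path / -proto.
grpcurl \
  -max-msg-sz 200000000 \
  -H "Authorization: Bearer $PAT" \
  -d '{"orgIds": ["<ORG_ID>"], "withPasswords": true, "withOtp": true, "timeout": "30m"}' \
  $CUSTOM_DOMAIN:443 \
  zitadel.admin.v1.AdminService/ExportData \
  > export.json
```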
- -max-msg-sz 200000000 sets a ~200 MB receive cap; bump higher if needed.
- If your cert is self-signed or you’re doing local testing, add -insecure (or use -cacert /path/to/ca.pem).
@Matías Thank you so much for your help! It really worked, and I received the file with all ~28k users of the organization. It seems like it took only about a minute, even with that many users. Thanks for paying attention to this issue! I think this problem is now resolved, and I hope you’ll be able to finish this feature in the future 👍
Hi @notfoundium I'm really glad to know that the provided workaround was useful! Could I ask you to react to my above message with the ✅ emoji to mark this thread as resolved?
It helps other users experiencing similar issues to find solutions more easily! Thank you! 🙇♂️