I'm testing merging some of the S3 buckets on our test system to see how that works. I have both regular and direct-upload S3 buckets.
We're still on 5.14. Per the documentation (https://guides.dataverse.org/en/5.14/developers/big-data-support.html?highlight=big%20data#s3-direct-upload-and-download), if I reconfigure the regular S3 bucket for direct uploads, some features will be disabled (https://guides.dataverse.org/en/5.14/developers/big-data-support.html?highlight=big%20data#features-that-are-disabled-if-s3-direct-upload-is-enabled).
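If I'm reading the configuration guide right, the switch itself is just a couple of per-store JVM options; something like this, where `s3files` is a placeholder for our actual store id:

```
# Assumptions: Payara asadmin on the Dataverse server; "s3files" stands in
# for the existing S3 store id. Per the guide, direct upload/download is
# enabled per store by turning on the redirect options:
./asadmin create-jvm-options "-Ddataverse.files.s3files.upload-redirect=true"
./asadmin create-jvm-options "-Ddataverse.files.s3files.download-redirect=true"
```

Plus, if I understand the guide correctly, allowing CORS on the bucket and restarting Payara afterwards.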
I'm wondering how much this will impact the users. We do have astronomy data in FITS format. Has anyone else reconfigured a regular S3 bucket to do direct upload? Any complaints?
Thanks,
Jamie
My guess is that the big difference is that zip files are not unzipped.
We do have the astronomy department self-depositing telescope data, which is FITS. Not sure if this will affect them.
Oh. I see. Do you know if they like that feature? Extracting from FITS into the astro block?
I have to contact them and get some feedback. I guess no one else has heard complaints.
We have talked about how it might be nice to still extract from FITS and NetCDF when direct upload is in play, but I don't think there's an issue for it.
ok, thank you
If you think there should be some kind of ingest service that alters this behavior by extracting ZIP files after upload, analyzing FITS, etc., feel free to create an issue and describe your use case. :smile: (Not sure if there is already another issue for this.)