UVa wants to create a "virtual" S3 bucket... or an alias??
We only have one S3 bucket, but we want to allow users of one collection to upload files up to 100 GB, while keeping the default limit for all other collections at 6 GB.
I'm not sure which JVM options are needed to define this virtual pool.
And would CORS need to be set up on the "virtual" bucket, or since it is already set up on the real bucket, will the virtual one just use it?
Harvard Dataverse does this. We have several stores pointed at the same bucket. The upload limits for each store can be configured separately.
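For the size limits themselves, my understanding is that per-store upload limits go in the :MaxFileUploadSizeInBytes database setting, which accepts a JSON object keyed by store id. A rough sketch (the store ids and byte values here are illustrative, not Harvard's actual config):
curl -X PUT -d '{"default":"6442450944","3d":"107374182400"}' http://localhost:8080/api/admin/settings/:MaxFileUploadSizeInBytes
That would be roughly 6 GB for stores that use the default limit and 100 GB for a hypothetical "3d" store.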
Can you send me the JVM lines for Harvard's default setup and one of the virtual ones? I'm confused about which "s3" specifies the Amazon S3 storage type versus the <id> "s3" in our JVM lines.
Not sure which ones we need for the virtual ones.
We have this now, only one store:
-Ddataverse.files.s3.type=s3
-Ddataverse.files.s3.label=Dataverse S3 Storage
-Ddataverse.files.s3.bucket-name=dataverse-storage-production
-Ddataverse.files.s3.download-redirect=true
-Ddataverse.files.storage-driver-id=s3
-Ddataverse.files.s3.upload-redirect=true
-Ddataverse.files.s3.profile=default
-Ddataverse.files.s3.url-expiration-minutes=480
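(My reading of the multi-store docs: in -Ddataverse.files.s3.type=s3, the first "s3" is just the store id we happened to pick, and the value "s3" is the storage type, so a second store only needs its own id. A hypothetical store with id "bigfiles" would start like this:)
-Ddataverse.files.bigfiles.type=s3
-Ddataverse.files.bigfiles.label=Big Files Storage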
So I was assuming we add lines (w/ "3d" as the ID for our new virtual store):
For the new "3d" (id=3d) store:
(the following lines are like the "s3" ones above, except "storage-driver-id", which only sets the default store and stays "s3"??)
?? And does the bucket name stay the same, or is it different???
-Ddataverse.files.3d.type=s3
-Ddataverse.files.3d.label=3D Cultural Heritage
-Ddataverse.files.3d.bucket-name=dataverse-storage-production
-Ddataverse.files.3d.download-redirect=true
-Ddataverse.files.3d.upload-redirect=true
-Ddataverse.files.3d.profile=default
-Ddataverse.files.3d.url-expiration-minutes=480
I'm pretty sure the bucket name stays the same.
I'm refreshing myself at https://guides.dataverse.org/en/6.6/installation/config.html#multi-store-basics but yes, "storage-driver-id" is how you set the default store, it looks like.
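Once the new store is defined, I believe the way to point a single collection at it is the storageDriver admin API, something like this (I'm not 100% sure whether it wants the store id or the label, so double-check that page):
curl -X PUT -d "3d" http://localhost:8080/api/admin/dataverse/YOUR-COLLECTION-ALIAS/storageDriver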
It looks like Jim is saying similar stuff at https://groups.google.com/g/dataverse-community/c/ZCIJU-IknGg/m/dmSXCMFuAAAJ
Can someone send me how Harvard Dataverse has set up virtual storage "pools"? Maybe just the output of the files JVM settings?
The output of this command:
bin/asadmin list-jvm-options | grep files
I'm really having trouble figuring out which JVM lines to use for the storage alias. I tried just two, "type" and "label", and got things really messed up; then I added a third defining the bucket name (same as our S3 bucket), which made our system happy again... but I'm still getting errors about the storage and AWS. See the error from the log file at the end:
I added these 3 commands; it seems I'm missing a few options. See the error message:
./asadmin create-jvm-options "-Ddataverse.files.3d.type=s3"
./asadmin create-jvm-options "-Ddataverse.files.3d.label=3D Cultural Heritage"
./asadmin create-jvm-options "-Ddataverse.files.3d.bucket-name=dataverse-storage-production"
[2025-06-17T20:18:39.212+0000] [Payara 6.2025.2] [WARNING] [] [edu.harvard.iq.dataverse.ThumbnailServiceWrapper] [tid: _ThreadID=90 _ThreadName=http-thread-pool::jk-connector(4)] [timeMillis: 1750191519212] [levelValue: 900] [[
getDatasetCardImageAsUrl(): Failed to initialize dataset StorageIO for 3d://10.80100/FK2/EUJETR (S3AccessIO: Failed to cache auxilary object : dataset_logo_original)]]
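My guess is the missing options are the redirect/profile/expiration ones from the list earlier in this thread. Something like this, mirroring our default "s3" store (not yet verified on our end):
./asadmin create-jvm-options "-Ddataverse.files.3d.download-redirect=true"
./asadmin create-jvm-options "-Ddataverse.files.3d.upload-redirect=true"
./asadmin create-jvm-options "-Ddataverse.files.3d.profile=default"
./asadmin create-jvm-options "-Ddataverse.files.3d.url-expiration-minutes=480"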
I'd ask Leonid but he's super busy right now. @Steven Winship is there anything you could copy/paste from domain.xml? No passwords, etc. please!
Thanks @Philip Durbin
I just saw you were off on vacation.
These are my last few days at UVA, retirement starts after Friday.
Right, right, I just warmed up #community > News from Sherry Lake (UVA) :smile: