seaweedfs/docker/compose/s3tests.conf
Chris Lu 26403e8a0d
Test object lock and retention (#6997)
* fix GetObjectLockConfigurationHandler

* cache and use bucket object lock config

* subscribe to bucket configuration changes

* increase bucket config cache TTL

* refactor

* Update weed/s3api/s3api_server.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* avoid duplicated work

* rename variable

* Update s3api_object_handlers_put.go

* fix routing

* admin ui and api handler are consistent now

* use fields instead of xml

* fix test

* address comments

* Update weed/s3api/s3api_object_handlers_put.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update test/s3/retention/s3_retention_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/s3api/object_lock_utils.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* change error style

* errorf

* read entry once

* add s3 tests for object lock and retention

* use marker

* install s3 tests

* Update s3tests.yml

* Update s3tests.yml

* Update s3tests.conf

* Update s3tests.conf

* address test errors

* address test errors

With these fixes, the s3-tests should now:
- Return InvalidBucketState (409 Conflict) for object lock operations on invalid buckets
- Return MalformedXML for invalid retention configurations
- Include VersionId in response headers when available
- Return proper HTTP status codes (403 Forbidden for retention mode changes)
- Handle all object lock validation errors consistently
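
As a rough illustration of the first point, a minimal boto3 sketch (the bucket name is hypothetical; endpoint and credentials come from the s3tests.conf below, not from this PR's test code):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    # Bucket created WITHOUT ObjectLockEnabledForBucket=True
    s3.create_bucket(Bucket='no-lock-bucket')
    try:
        s3.put_object_lock_configuration(
            Bucket='no-lock-bucket',
            ObjectLockConfiguration={'ObjectLockEnabled': 'Enabled'})
    except ClientError as e:
        # Expected per the list above: HTTP 409 with error code InvalidBucketState
        assert e.response['Error']['Code'] == 'InvalidBucketState'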

* fixes

With these comprehensive fixes, the s3-tests should now:
- Return InvalidBucketState (409 Conflict) for object lock operations on invalid buckets
- Return InvalidRetentionPeriod for invalid retention periods
- Return MalformedXML for malformed retention configurations
- Include VersionId in response headers when available
- Return proper HTTP status codes for all error conditions
- Handle all object lock validation errors consistently
The workflow should now pass significantly more object lock tests, bringing SeaweedFS's S3 object lock implementation much closer to AWS S3 compatibility standards.
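
For example, the invalid-period case can be exercised roughly like this (hypothetical bucket name; the expected error code is the one listed above):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    s3.create_bucket(Bucket='lock-bucket', ObjectLockEnabledForBucket=True)
    try:
        s3.put_object_lock_configuration(
            Bucket='lock-bucket',
            ObjectLockConfiguration={
                'ObjectLockEnabled': 'Enabled',
                'Rule': {'DefaultRetention': {'Mode': 'GOVERNANCE', 'Days': 0}}})
    except ClientError as e:
        # Expected: InvalidRetentionPeriod (0 days is not a valid period)
        assert e.response['Error']['Code'] == 'InvalidRetentionPeriod'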

* fixes

With these final fixes, the s3-tests should now:
- Return MalformedXML for ObjectLockEnabled: 'Disabled'
- Return MalformedXML when both Days and Years are specified in retention configuration
- Return InvalidBucketState (409 Conflict) when trying to suspend versioning on buckets with object lock enabled
- Handle all object lock validation errors consistently with proper error codes
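
A sketch of the versioning-suspension case (hypothetical bucket name; behavior as listed above):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    s3.create_bucket(Bucket='lock-bucket', ObjectLockEnabledForBucket=True)
    try:
        s3.put_bucket_versioning(Bucket='lock-bucket',
                                 VersioningConfiguration={'Status': 'Suspended'})
    except ClientError as e:
        # Expected: 409 InvalidBucketState, since versioning cannot be suspended
        # while object lock is enabled on the bucket
        assert e.response['Error']['Code'] == 'InvalidBucketState'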

* constants and fixes

- Return InvalidRetentionPeriod for invalid retention values (0 days, negative years)
- Return ObjectLockConfigurationNotFoundError when object lock configuration doesn't exist
- Handle all object lock validation errors consistently with proper error codes
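
A sketch of the missing-configuration case (hypothetical bucket name):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    s3.create_bucket(Bucket='plain-bucket')  # no object lock configuration
    try:
        s3.get_object_lock_configuration(Bucket='plain-bucket')
    except ClientError as e:
        # Expected per the list above: ObjectLockConfigurationNotFoundError
        assert e.response['Error']['Code'] == 'ObjectLockConfigurationNotFoundError'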

* fixes

- Return MalformedXML when both Days and Years are specified in the same retention configuration
- Return 400 (Bad Request) with InvalidRequest when object lock operations are attempted on buckets without object lock enabled
- Handle all object lock validation errors consistently with proper error codes
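
A sketch of the object-level case (hypothetical bucket and key; the expected code is the InvalidRequest listed above):

    import boto3, datetime
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    s3.create_bucket(Bucket='plain-bucket')  # object lock not enabled
    s3.put_object(Bucket='plain-bucket', Key='obj', Body=b'data')
    try:
        s3.put_object_retention(
            Bucket='plain-bucket', Key='obj',
            Retention={'Mode': 'GOVERNANCE',
                       'RetainUntilDate': datetime.datetime(2030, 1, 1)})
    except ClientError as e:
        # Expected: HTTP 400 with error code InvalidRequest
        assert e.response['Error']['Code'] == 'InvalidRequest'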

* fixes

- Return 409 (Conflict) with InvalidBucketState for bucket-level object lock configuration operations on buckets without object lock enabled
- Allow increasing retention periods and overriding retention with same/later dates
- Only block decreasing retention periods without proper bypass permissions
- Handle all object lock validation errors consistently with proper error codes
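
A sketch of the increase-vs-decrease rule (hypothetical names and dates; the exact error code for the rejected decrease is assumed here to be AWS's 403 AccessDenied, which is not stated above):

    import boto3, datetime
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    s3.create_bucket(Bucket='lock-bucket', ObjectLockEnabledForBucket=True)
    s3.put_object(Bucket='lock-bucket', Key='obj', Body=b'data')
    later = datetime.datetime(2030, 1, 1)
    even_later = datetime.datetime(2031, 1, 1)
    s3.put_object_retention(Bucket='lock-bucket', Key='obj',
                            Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': later})
    # Extending retention is allowed without any bypass header.
    s3.put_object_retention(Bucket='lock-bucket', Key='obj',
                            Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': even_later})
    try:
        # Shortening it back without BypassGovernanceRetention should be rejected.
        s3.put_object_retention(Bucket='lock-bucket', Key='obj',
                                Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': later})
    except ClientError as e:
        # Rejected without bypass; AWS uses AccessDenied (403) here (assumption).
        print('decrease rejected:', e.response['Error']['Code'])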

* fixes

- Include VersionId in multipart upload completion responses when versioning is enabled
- Block retention mode changes (GOVERNANCE ↔ COMPLIANCE) without bypass permissions
- Handle all object lock validation errors consistently with proper error codes
- Pass the remaining object lock tests
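
A sketch of the multipart VersionId behavior (hypothetical bucket, key, and part size):

    import boto3

    s3 = boto3.client('s3', endpoint_url='http://127.0.0.1:8000',
                      aws_access_key_id='0555b35654ad1656d804',
                      aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')

    s3.create_bucket(Bucket='versioned-bucket')
    s3.put_bucket_versioning(Bucket='versioned-bucket',
                             VersioningConfiguration={'Status': 'Enabled'})
    mpu = s3.create_multipart_upload(Bucket='versioned-bucket', Key='big-obj')
    part = s3.upload_part(Bucket='versioned-bucket', Key='big-obj',
                          UploadId=mpu['UploadId'], PartNumber=1,
                          Body=b'x' * (5 * 1024 * 1024))
    done = s3.complete_multipart_upload(
        Bucket='versioned-bucket', Key='big-obj', UploadId=mpu['UploadId'],
        MultipartUpload={'Parts': [{'ETag': part['ETag'], 'PartNumber': 1}]})
    # With versioning enabled, the completion response should now carry a VersionId.
    assert 'VersionId' in done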

* fix tests

* fixes

* pass tests

* fix tests

* fixes

* add error mapping

* Update s3tests.conf

* fix test_object_lock_put_obj_lock_invalid_days

* fixes

* fix many issues

* fix test_object_lock_delete_multipart_object_with_legal_hold_on

* fix tests

* refactor

* fix test_object_lock_delete_object_with_retention_and_marker

* fix tests

* fix tests

* fix tests

* fix test itself

* fix tests

* fix test

* Update weed/s3api/s3api_object_retention.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* reduce logs

* address comments

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-18 22:25:58 -07:00


[DEFAULT]
## this section is just used for host, port and bucket_prefix
# host set for rgw in vstart.sh
host = 127.0.0.1
# port set for rgw in vstart.sh
port = 8000
## say "False" to disable TLS
is_secure = False
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = yournamehere-{random}-
[s3 main]
# main display_name set in vstart.sh
display_name = M. Tester
# main user_id set in vstart.sh
user_id = testid
# main email set in vstart.sh
email = tester@ceph.com
# zonegroup api_name for bucket location
api_name = default
## main AWS access key
access_key = 0555b35654ad1656d804
## main AWS secret key
secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
[s3 alt]
# alt display_name set in vstart.sh
display_name = john.doe
## alt email set in vstart.sh
email = john.doe@example.com
# alt user_id set in vstart.sh
user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
# alt AWS access key set in vstart.sh
access_key = NOPQRSTUVWXYZABCDEFG
# alt AWS secret key set in vstart.sh
secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
[s3 tenant]
# tenant display_name set in vstart.sh
display_name = testx$tenanteduser
# tenant user_id set in vstart.sh
user_id = 9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef
# tenant AWS access key set in vstart.sh
access_key = HIJKLMNOPQRSTUVWXYZA
# tenant AWS secret key set in vstart.sh
secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
# tenant email set in vstart.sh
email = tenanteduser@example.com
# tenant name
tenant = testx
[iam]
#used for iam operations in sts-tests
#email from vstart.sh
email = s3@example.com
#user_id from vstart.sh
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
#access_key from vstart.sh
access_key = ABCDEFGHIJKLMNOPQRST
#secret_key from vstart.sh
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
#display_name from vstart.sh
display_name = youruseridhere
[iam root]
access_key = AAAAAAAAAAAAAAAAAAaa
secret_key = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
user_id = RGW11111111111111111
email = account1@ceph.com
# iam account root user in a different account than [iam root]
[iam alt root]
access_key = BBBBBBBBBBBBBBBBBBbb
secret_key = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
user_id = RGW22222222222222222
email = account2@ceph.com
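
For reference, a minimal sketch, assuming this file is read with Python's configparser, of pointing a boto3 client at the [DEFAULT] host/port and [s3 main] credentials above (not part of this file or of the s3-tests themselves):

    import configparser
    import boto3

    cfg = configparser.ConfigParser()
    cfg.read('s3tests.conf')  # this file

    # Assumes is_secure = False, hence plain http.
    endpoint = 'http://{host}:{port}'.format(host=cfg['DEFAULT']['host'],
                                             port=cfg['DEFAULT']['port'])
    main = cfg['s3 main']
    s3 = boto3.client('s3', endpoint_url=endpoint,
                      aws_access_key_id=main['access_key'],
                      aws_secret_access_key=main['secret_key'])
    print(s3.list_buckets()['Buckets'])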