Mirror of https://github.com/chrislusf/seaweedfs, synced 2025-07-25 21:12:47 +02:00
What the PR accomplishes:

* CopyObjectHandler - now copies entire objects by copying chunks individually instead of downloading and re-uploading the whole file
* CopyObjectPartHandler - handles copying parts of objects for multipart uploads by copying only the relevant chunk portions
* Efficient chunk copying - uses direct chunk-to-chunk copying with proper volume assignment and concurrent processing (limited to 4 concurrent operations)
* Range support - properly handles range-based copying for partial object copies

Squashed commits:

* it compiles
* refactored
* reduce to 4 concurrent chunk uploads
* CopyObjectPartHandler
* copy a range of the chunk data, fix offset and size in copied chunks
* Update s3api_object_handlers_copy.go
* fix compilation
* fix part destination
* handle small objects
* use mkFile
* copy to existing file or part
* add testing tools
* adjust tests
* fix chunk lookup
* refactoring
* fix TestObjectCopyRetainingMetadata
* ensure bucket names do not conflict
* fix conditional copying tests
* remove debug messages
* add custom s3 copy tests
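The "limited to 4 concurrent operations" behavior described above can be sketched with a plain counting semaphore. The following is only an illustrative Go sketch of "copy each chunk, at most 4 at a time"; `Chunk`, `copySingleChunk`, and the example file id are hypothetical stand-ins, not SeaweedFS's actual types or the code changed in this PR.

```go
// Illustrative sketch only: copy an object's chunks concurrently, at most 4 at a time.
// Chunk and copySingleChunk are hypothetical stand-ins, not SeaweedFS APIs.
package main

import (
	"fmt"
	"sync"
)

// Chunk describes one stored piece of an object (illustrative fields only).
type Chunk struct {
	FileID string // volume-server file id of the source chunk
	Offset int64  // offset of this chunk within the object
	Size   int64  // size of the chunk in bytes
}

// copyChunks copies every source chunk to a newly assigned destination chunk,
// running at most maxConcurrent copies in parallel.
func copyChunks(src []Chunk, maxConcurrent int) ([]Chunk, error) {
	dst := make([]Chunk, len(src))
	sem := make(chan struct{}, maxConcurrent) // counting semaphore
	var wg sync.WaitGroup
	var mu sync.Mutex
	var firstErr error

	for i, c := range src {
		wg.Add(1)
		sem <- struct{}{} // block while maxConcurrent copies are already in flight
		go func(i int, c Chunk) {
			defer wg.Done()
			defer func() { <-sem }()

			newChunk, err := copySingleChunk(c)
			if err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
				return
			}
			dst[i] = newChunk // each goroutine writes a distinct index, so no race
		}(i, c)
	}
	wg.Wait()
	return dst, firstErr
}

// copySingleChunk stands in for "assign a destination volume, then stream the
// source chunk's bytes to it"; here it only fakes the work.
func copySingleChunk(c Chunk) (Chunk, error) {
	return Chunk{FileID: "copied-" + c.FileID, Offset: c.Offset, Size: c.Size}, nil
}

func main() {
	src := []Chunk{{FileID: "3,01637037d6", Offset: 0, Size: 4 << 20}} // made-up file id
	dst, err := copyChunks(src, 4)
	fmt.Println(dst, err)
}
```

Bounding the fan-out this way overlaps network transfers without letting a large object flood the volume servers with copy requests.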
{
  "endpoint": "http://localhost:8333",
  "access_key": "some_access_key1",
  "secret_key": "some_secret_key1",
  "region": "us-east-1",
  "bucket_prefix": "test-copying-",
  "use_ssl": false,
  "skip_verify_ssl": true
}
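For context, this is roughly how a test harness could consume the configuration above. It is a minimal sketch assuming aws-sdk-go v1; the file name `test_config.json` and the helper `newS3Client` are chosen here for illustration and are not taken from the repository's actual test tooling.

```go
// Hypothetical sketch: load the JSON test config above and build an S3 client for it.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// TestConfig mirrors the JSON keys shown above.
type TestConfig struct {
	Endpoint      string `json:"endpoint"`
	AccessKey     string `json:"access_key"`
	SecretKey     string `json:"secret_key"`
	Region        string `json:"region"`
	BucketPrefix  string `json:"bucket_prefix"`
	UseSSL        bool   `json:"use_ssl"`
	SkipVerifySSL bool   `json:"skip_verify_ssl"`
}

// newS3Client reads the config file and returns an S3 client pointed at the endpoint.
func newS3Client(path string) (*s3.S3, *TestConfig, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, nil, err
	}
	var cfg TestConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, nil, err
	}

	httpClient := &http.Client{}
	if cfg.SkipVerifySSL {
		// Accept self-signed certificates when the endpoint uses TLS.
		httpClient.Transport = &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}
	}

	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String(cfg.Endpoint),
		Region:           aws.String(cfg.Region),
		Credentials:      credentials.NewStaticCredentials(cfg.AccessKey, cfg.SecretKey, ""),
		DisableSSL:       aws.Bool(!cfg.UseSSL),
		S3ForcePathStyle: aws.Bool(true), // path-style addressing is commonly needed for non-AWS endpoints
		HTTPClient:       httpClient,
	})
	if err != nil {
		return nil, nil, err
	}
	return s3.New(sess), &cfg, nil
}

func main() {
	client, cfg, err := newS3Client("test_config.json") // hypothetical file name
	if err != nil {
		panic(err)
	}
	_ = client
	fmt.Println("bucket prefix:", cfg.BucketPrefix)
}
```

Tests would then create buckets named with `bucket_prefix` plus a unique suffix, which matches the "ensure bucket names do not conflict" commit above.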