Chris Lu 891a2fb6eb
Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design

* added simulation as tests

* reorganized the codebase to move the simulation framework and tests into their own dedicated package

* integration test. ec worker task

* remove "enhanced" reference

* start master, volume servers, filer

Current Status
- Master: healthy and running (port 9333)
- Filer: healthy and running (port 8888)
- Volume Servers: all 6 servers running (ports 8080-8085)
- Admin/Workers: will start when dependencies are ready
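
For reference, a roughly equivalent cluster can be brought up by hand with the weed binary; the ports mirror the status above, and the data directories are placeholders:

weed master -port=9333 -mdir=/tmp/sw/master
weed volume -port=8080 -dir=/tmp/sw/vol0 -mserver=localhost:9333
# one volume server per port 8081-8085, each with its own -dir
weed filer -port=8888 -master=localhost:9333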

* generate write load

* tasks are assigned

* admin starts with grpc port. worker has its own working directory

* Update .gitignore

* working worker and admin. Task detection is not working yet.

* compiles, detection uses volumeSizeLimitMB from master

* compiles

* worker retries connecting to admin

* build and restart

* rendering pending tasks

* skip task ID column

* sticky worker id

* test canScheduleTaskNow

* worker reconnect to admin

* clean up logs

* worker registers itself first

* worker can run ec work and report status

but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and source data should be deleted.

* move ec task logic

* listing ec shards

* local copy, ec. Need to distribute.

* ec is mostly working now

* distribution of ec shards needs improvement
* need configuration to enable ec

* show ec volumes

* interval field UI component

* rename

* integration test with vacuuming

* garbage percentage threshold

* fix warning

* display ec shard sizes

* fix ec volumes list

* Update ui.go

* show default values

* ensure correct default value

* MaintenanceConfig use ConfigField

* use schema defined defaults

* config

* reduce duplication

* refactor to use BaseUIProvider

* each task register its schema

* checkECEncodingCandidate use ecDetector

* use vacuumDetector

* use volumeSizeLimitMB

* remove

* remove unused

* refactor

* use new framework

* remove v2 reference

* refactor

* left menu can scroll now

* The maintenance manager was not being initialized when no data directory was configured for persistent storage.

* saving config

* Update task_config_schema_templ.go

* enable/disable tasks

* protobuf encoded task configurations

* fix system settings

* use ui component

* remove logs

* interface{} Reduction

* reduce interface{}

* reduce interface{}

* avoid from/to map

* reduce interface{}

* refactor

* keep it DRY

* added logging

* debug messages

* debug level

* debug

* show the log caller line

* use configured task policy

* log level

* handle admin heartbeat response

* Update worker.go

* fix EC rack and dc count

* Report task status to admin server

* fix task logging, simplify interface checking, use erasure_coding constants

* factor in empty volume server during task planning

* volume.list adds disk id

* track disk id also

* fix locking scheduled and manual scanning

* add active topology

* simplify task detector

* ec task completed, but shards are not showing up

* implement ec in ec_typed.go

* adjust log level

* dedup

* implementing ec copying shards and only ecx files

* use disk id when distributing ec shards

🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
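
To spot-check where the encoded shards actually land, the volume listing (which, per the notes above, now includes the disk id) can be dumped from weed shell; an illustrative invocation against the test master:

echo "volume.list" | weed shell -master localhost:9333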

* Delete original volume from all locations

* clean up existing shard locations

* local encoding and distributing

* Update docker/admin_integration/EC-TESTING-README.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* check volume id range

* simplify

* fix tests

* fix types

* clean up logs and tests

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-30 12:38:03 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| admin_integration | Admin: misc improvements on admin server and workers. EC now works. (#7055) | 2025-07-30 12:38:03 -07:00 |
| compose | test versioning also (#7000) | 2025-07-19 21:43:34 -07:00 |
| nginx | docker-compose | 2021-01-17 18:33:14 +05:00 |
| prometheus | stats master_replica_placement_mismatch | 2022-06-10 15:30:40 +05:00 |
| tarantool | Tarantool filer store (#6669) | 2025-03-29 21:12:06 -07:00 |
| Dockerfile.e2e | Add an End-to-End workflow for FUSE mount (#3562) | 2022-08-31 09:27:53 -07:00 |
| Dockerfile.go_build | change version directory | 2025-06-03 22:46:10 -07:00 |
| Dockerfile.local | working | 2024-05-03 21:27:06 -07:00 |
| Dockerfile.rocksdb_dev_env | gorocksdb 1.10.1 ~ rocksdb 10.2.1 | 2025-06-03 22:46:10 -07:00 |
| Dockerfile.rocksdb_large | change version directory | 2025-06-03 22:46:10 -07:00 |
| Dockerfile.rocksdb_large_local | change version directory | 2025-06-03 22:46:10 -07:00 |
| Dockerfile.s3tests | fix s3tests.conf file name | 2024-06-24 17:15:16 -07:00 |
| Dockerfile.tarantool.dev_env | Tarantool filer store (#6669) | 2025-03-29 21:12:06 -07:00 |
| entrypoint.sh | feat: Send commands to weed shell from the docker image. | 2022-06-01 15:47:10 -07:00 |
| filer.toml | filer: default to leveldb2 | 2019-06-30 00:44:57 -07:00 |
| filer_rocksdb.toml | add filer.toml for rocksdb to docker image for rocksdb | 2022-02-09 00:12:53 -08:00 |
| Makefile | Admin: misc improvements on admin server and workers. EC now works. (#7055) | 2025-07-30 12:38:03 -07:00 |
| README.md | refactor(compose)!: upgrade to v2 closes #3699 (#3705) | 2022-10-16 14:02:33 -07:00 |
| seaweedfs-compose.yml | refactor(compose)!: upgrade to v2 closes #3699 (#3705) | 2022-10-16 14:02:33 -07:00 |
| seaweedfs-dev-compose.yml | refactor(compose)!: upgrade to v2 closes #3699 (#3705) | 2022-10-16 14:02:33 -07:00 |
| seaweedfs.sql | filer.store.mysql: Use utf8mb4 instead of 3 byte UTF8 (#4094) | 2023-01-01 05:07:53 -08:00 |
| test.py | fix S3 per-user-directory Policy (#6443) | 2025-01-17 01:03:17 -08:00 |

Docker

Compose V2

SeaweedFS now uses the Compose V2 syntax: docker compose (without a hyphen).

If you rely on invoking Docker Compose as docker-compose (with a hyphen), you can set up Compose V2 to act as a drop-in replacement for the previous docker-compose. Refer to Docker's Installing Compose documentation for detailed instructions on upgrading.

Confirm that your system has Compose V2 with a version check:

$ docker compose version
Docker Compose version v2.10.2

Try it out


wget https://raw.githubusercontent.com/seaweedfs/seaweedfs/master/docker/seaweedfs-compose.yml

docker compose -f seaweedfs-compose.yml -p seaweedfs up

Try latest tip


wget https://raw.githubusercontent.com/seaweedfs/seaweedfs/master/docker/seaweedfs-dev-compose.yml

docker compose -f seaweedfs-dev-compose.yml -p seaweedfs up

Local Development

cd $GOPATH/src/github.com/seaweedfs/seaweedfs/docker
make
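
The resulting image can then be run directly. A minimal sketch, assuming the default make target tags the image as chrislusf/seaweedfs:local and that entrypoint.sh maps the server argument to weed server (check the Makefile and entrypoint.sh to confirm):

docker run -p 9333:9333 -p 8080:8080 -p 8888:8888 chrislusf/seaweedfs:local server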

S3 cmd

list

s3cmd --no-ssl --host=127.0.0.1:8333 ls s3://
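
Creating a bucket and uploading a file follows the same pattern; the bucket and file names below are placeholders, and depending on your .s3cfg you may also need --host-bucket=127.0.0.1:8333 to force path-style requests:

s3cmd --no-ssl --host=127.0.0.1:8333 mb s3://newbucket
s3cmd --no-ssl --host=127.0.0.1:8333 put test.py s3://newbucket/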

Build and push a multiarch build

Make sure that docker buildx is supported (might be an experimental docker feature)

BUILDER=$(docker buildx create --driver docker-container --use)
docker buildx build --pull --push --platform linux/386,linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 . -t chrislusf/seaweedfs
docker buildx stop $BUILDER

Minio debugging

mc config host add local http://127.0.0.1:9000 some_access_key1 some_secret_key1
mc admin trace --all --verbose local
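
With the alias registered, regular mc commands can be pointed at the same endpoint, for example:

mc ls local
mc mb local/newbucket
mc cp test.py local/newbucket/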