mirror of https://github.com/chrislusf/seaweedfs
seaweedfs/weed/admin/maintenance/config_schema.go
Chris Lu 891a2fb6eb
Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design

* added simulation as tests

* reorganized the codebase to move the simulation framework and tests into their own dedicated package

* integration test. ec worker task

* remove "enhanced" reference

* start master, volume servers, filer

Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready

* generate write load

* tasks are assigned

* admin starts with grpc port. worker has its own working directory

* Update .gitignore

* working worker and admin. Task detection is not working yet.

* compiles, detection uses volumeSizeLimitMB from master

* compiles

* worker retries connecting to admin

* build and restart

* rendering pending tasks

* skip task ID column

* sticky worker id

* test canScheduleTaskNow

* worker reconnect to admin

* clean up logs

* worker register itself first

* worker can run ec work and report status

but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and source data should be deleted.

* move ec task logic

* listing ec shards

* local copy, ec. Need to distribute.

* ec is mostly working now

* distribution of ec shards needs improvement
* need configuration to enable ec

* show ec volumes

* interval field UI component

* rename

* integration test with vacuuming

* garbage percentage threshold

* fix warning

* display ec shard sizes

* fix ec volumes list

* Update ui.go

* show default values

* ensure correct default value

* MaintenanceConfig use ConfigField

* use schema defined defaults

* config

* reduce duplication

* refactor to use BaseUIProvider

* each task registers its schema

* checkECEncodingCandidate use ecDetector

* use vacuumDetector

* use volumeSizeLimitMB

* remove

* remove unused

* refactor

* use new framework

* remove v2 reference

* refactor

* left menu can scroll now

* The maintenance manager was not being initialized when no data directory was configured for persistent storage.

* saving config

* Update task_config_schema_templ.go

* enable/disable tasks

* protobuf encoded task configurations

* fix system settings

* use ui component

* remove logs

* interface{} reduction

* reduce interface{}

* reduce interface{}

* avoid from/to map

* reduce interface{}

* refactor

* keep it DRY

* added logging

* debug messages

* debug level

* debug

* show the log caller line

* use configured task policy

* log level

* handle admin heartbeat response

* Update worker.go

* fix EC rack and dc count

* Report task status to admin server

* fix task logging, simplify interface checking, use erasure_coding constants

* factor in empty volume server during task planning

* volume.list adds disk id

* track disk id also

* fix locking for scheduled and manual scanning

* add active topology

* simplify task detector

* ec task completed, but shards are not showing up

* implement ec in ec_typed.go

* adjust log level

* dedup

* implement copying ec shards and only the ecx files

* use disk id when distributing ec shards

🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
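
A minimal, self-contained sketch of that disk-targeted flow. The struct here is a simplified, hypothetical stand-in for the PR's internal ECDestination; only the pieces named in the flow above are modeled:

package main

import "fmt"

// ECDestination is a hypothetical stand-in for the type created in
// maintenance_integration.go; field names follow this commit message.
type ECDestination struct {
	Node     string
	DiskId   uint32
	ShardIds []uint32
}

func main() {
	// Planning: ActiveTopology would emit one destination per target disk.
	destinations := []ECDestination{
		{Node: "volume1:8080", DiskId: 0, ShardIds: []uint32{0, 1, 2, 3, 4}},
		{Node: "volume2:8081", DiskId: 1, ShardIds: []uint32{5, 6, 7, 8, 9}},
	}
	// Execution: DiskId rides along in each VolumeEcShardsCopyRequest so the
	// volume server can store shards via vs.store.Locations[req.DiskId].
	for _, d := range destinations {
		fmt.Printf("copy shards %v to %s, disk %d\n", d.ShardIds, d.Node, d.DiskId)
	}
}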

* Delete original volume from all locations

* clean up existing shard locations

* local encoding and distributing

* Update docker/admin_integration/EC-TESTING-README.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* check volume id range

* simplify

* fix tests

* fix types

* clean up logs and tests

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-30 12:38:03 -07:00


package maintenance

import (
	"github.com/seaweedfs/seaweedfs/weed/admin/config"
)

// Type aliases for backward compatibility
type ConfigFieldType = config.FieldType
type ConfigFieldUnit = config.FieldUnit
type ConfigField = config.Field

// Constant aliases for backward compatibility
const (
	FieldTypeBool     = config.FieldTypeBool
	FieldTypeInt      = config.FieldTypeInt
	FieldTypeDuration = config.FieldTypeDuration
	FieldTypeInterval = config.FieldTypeInterval
	FieldTypeString   = config.FieldTypeString
	FieldTypeFloat    = config.FieldTypeFloat
)

const (
	UnitSeconds = config.UnitSeconds
	UnitMinutes = config.UnitMinutes
	UnitHours   = config.UnitHours
	UnitDays    = config.UnitDays
	UnitCount   = config.UnitCount
	UnitNone    = config.UnitNone
)

// Function aliases for backward compatibility
var (
	SecondsToIntervalValueUnit = config.SecondsToIntervalValueUnit
	IntervalValueUnitToSeconds = config.IntervalValueUnitToSeconds
)

// MaintenanceConfigSchema defines the schema for maintenance configuration
type MaintenanceConfigSchema struct {
	config.Schema // Embed common schema functionality
}

// GetMaintenanceConfigSchema returns the schema for maintenance configuration
func GetMaintenanceConfigSchema() *MaintenanceConfigSchema {
	return &MaintenanceConfigSchema{
		Schema: config.Schema{
			Fields: []*config.Field{
				{
					Name:         "enabled",
					JSONName:     "enabled",
					Type:         config.FieldTypeBool,
					DefaultValue: true,
					Required:     false,
					DisplayName:  "Enable Maintenance System",
					Description:  "When enabled, the system will automatically scan for and execute maintenance tasks",
					HelpText:     "Toggle this to enable or disable the entire maintenance system",
					InputType:    "checkbox",
					CSSClasses:   "form-check-input",
				},
				{
					Name:         "scan_interval_seconds",
					JSONName:     "scan_interval_seconds",
					Type:         config.FieldTypeInterval,
					DefaultValue: 30 * 60,      // 30 minutes in seconds
					MinValue:     1 * 60,       // 1 minute
					MaxValue:     24 * 60 * 60, // 24 hours
					Required:     true,
					DisplayName:  "Scan Interval",
					Description:  "How often to scan for maintenance tasks",
					HelpText:     "The system will check for new maintenance tasks at this interval",
					Placeholder:  "30",
					Unit:         config.UnitMinutes,
					InputType:    "interval",
					CSSClasses:   "form-control",
				},
				{
					Name:         "worker_timeout_seconds",
					JSONName:     "worker_timeout_seconds",
					Type:         config.FieldTypeInterval,
					DefaultValue: 5 * 60,  // 5 minutes
					MinValue:     1 * 60,  // 1 minute
					MaxValue:     60 * 60, // 1 hour
					Required:     true,
					DisplayName:  "Worker Timeout",
					Description:  "How long to wait for worker heartbeat before considering it inactive",
					HelpText:     "Workers that don't send heartbeats within this time are considered offline",
					Placeholder:  "5",
					Unit:         config.UnitMinutes,
					InputType:    "interval",
					CSSClasses:   "form-control",
				},
				{
					Name:         "task_timeout_seconds",
					JSONName:     "task_timeout_seconds",
					Type:         config.FieldTypeInterval,
					DefaultValue: 2 * 60 * 60,  // 2 hours
					MinValue:     10 * 60,      // 10 minutes
					MaxValue:     24 * 60 * 60, // 24 hours
					Required:     true,
					DisplayName:  "Task Timeout",
					Description:  "Maximum time allowed for a task to complete",
					HelpText:     "Tasks that exceed this duration will be marked as failed",
					Placeholder:  "2",
					Unit:         config.UnitHours,
					InputType:    "interval",
					CSSClasses:   "form-control",
				},
				{
					Name:         "retry_delay_seconds",
					JSONName:     "retry_delay_seconds",
					Type:         config.FieldTypeInterval,
					DefaultValue: 15 * 60,      // 15 minutes
					MinValue:     1 * 60,       // 1 minute
					MaxValue:     24 * 60 * 60, // 24 hours
					Required:     true,
					DisplayName:  "Retry Delay",
					Description:  "How long to wait before retrying a failed task",
					HelpText:     "Failed tasks will be retried after this delay",
					Placeholder:  "15",
					Unit:         config.UnitMinutes,
					InputType:    "interval",
					CSSClasses:   "form-control",
				},
				{
					Name:         "max_retries",
					JSONName:     "max_retries",
					Type:         config.FieldTypeInt,
					DefaultValue: 3,
					MinValue:     0,
					MaxValue:     10,
					Required:     true,
					DisplayName:  "Max Retries",
					Description:  "Maximum number of times to retry a failed task",
					HelpText:     "Tasks that fail more than this many times will be marked as permanently failed",
					Placeholder:  "3",
					Unit:         config.UnitCount,
					InputType:    "number",
					CSSClasses:   "form-control",
				},
				{
					Name:         "cleanup_interval_seconds",
					JSONName:     "cleanup_interval_seconds",
					Type:         config.FieldTypeInterval,
					DefaultValue: 24 * 60 * 60,     // 24 hours
					MinValue:     1 * 60 * 60,      // 1 hour
					MaxValue:     7 * 24 * 60 * 60, // 7 days
					Required:     true,
					DisplayName:  "Cleanup Interval",
					Description:  "How often to run maintenance cleanup operations",
					HelpText:     "Removes old task records and temporary files at this interval",
					Placeholder:  "24",
					Unit:         config.UnitHours,
					InputType:    "interval",
					CSSClasses:   "form-control",
				},
				{
					Name:         "task_retention_seconds",
					JSONName:     "task_retention_seconds",
					Type:         config.FieldTypeInterval,
					DefaultValue: 7 * 24 * 60 * 60,  // 7 days
					MinValue:     1 * 24 * 60 * 60,  // 1 day
					MaxValue:     30 * 24 * 60 * 60, // 30 days
					Required:     true,
					DisplayName:  "Task Retention",
					Description:  "How long to keep completed task records",
					HelpText:     "Task history older than this duration will be automatically deleted",
					Placeholder:  "7",
					Unit:         config.UnitDays,
					InputType:    "interval",
					CSSClasses:   "form-control",
				},
				{
					Name:         "global_max_concurrent",
					JSONName:     "global_max_concurrent",
					Type:         config.FieldTypeInt,
					DefaultValue: 10,
					MinValue:     1,
					MaxValue:     100,
					Required:     true,
					DisplayName:  "Global Max Concurrent Tasks",
					Description:  "Maximum number of maintenance tasks that can run simultaneously across all workers",
					HelpText:     "Limits the total number of maintenance operations to control system load",
					Placeholder:  "10",
					Unit:         config.UnitCount,
					InputType:    "number",
					CSSClasses:   "form-control",
				},
			},
		},
	}
}
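
For reference, a minimal usage sketch (not part of this file) that walks the schema and prints each field's default. It assumes nothing beyond what the schema above defines: the embedded config.Schema promotes Fields, and each field carries Name, DefaultValue, and Unit.

package main

import (
	"fmt"

	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
)

func main() {
	// Iterate the nine maintenance fields defined in GetMaintenanceConfigSchema
	// and show their schema-defined defaults and units.
	schema := maintenance.GetMaintenanceConfigSchema()
	for _, field := range schema.Fields {
		fmt.Printf("%-26s default=%-8v unit=%v\n", field.Name, field.DefaultValue, field.Unit)
	}
}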