Mirror of https://github.com/chrislusf/seaweedfs (synced 2025-08-17 17:42:47 +02:00)
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test: EC worker task
* remove "enhanced" reference
* start master, volume servers, filer
  Current status:
  ✅ Master: healthy and running (port 9333)
  ✅ Filer: healthy and running (port 8888)
  ✅ Volume servers: all 6 servers running (ports 8080-8085)
  🔄 Admin/workers: will start when dependencies are ready
* generate write load
* tasks are assigned
* admin starts with gRPC port; worker has its own working directory
* Update .gitignore
* working worker and admin; task detection is not working yet
* compiles; detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnects to admin
* clean up logs
* worker registers itself first
* worker can run EC work and report status, but: 1. one volume should not be repeatedly worked on; 2. EC shards need to be distributed and the source data should be deleted
* move EC task logic
* listing EC shards
* local copy, EC; still needs distribution
* EC is mostly working now
* distribution of EC shards needs improvement
* need configuration to enable EC
* show EC volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display EC shard sizes
* fix EC volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig uses ConfigField
* use schema-defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task registers its schema
* checkECEncodingCandidate uses ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* the maintenance manager was not being initialized when no data directory was configured for persistent storage
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf-encoded task configurations
* fix system settings
* use UI component
* remove logs
* interface{} reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume servers during task planning
* volume.list adds disk id
* track disk id also
* fix locking for scheduled and manual scanning
* add active topology
* simplify task detector
* EC task completed, but shards are not showing up
* implement EC in ec_typed.go
* adjust log level
* dedup
* implementing EC copying of shards and only ecx files
* use disk id when distributing EC shards (see the sketch after this list):
  🎯 Planning: ActiveTopology creates a DestinationPlan with a specific TargetDisk
  📦 Task creation: maintenance_integration.go creates an ECDestination with DiskId
  🚀 Task execution: the EC task passes DiskId in VolumeEcShardsCopyRequest
  💾 Volume server: receives disk_id and stores shards on the specific disk (vs.store.Locations[req.DiskId])
  📂 File system: EC shards and metadata land in the exact disk directory planned
* delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
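
A minimal sketch of the disk-id flow called out in the "use disk id when distributing EC shards" entry above. The names DestinationPlan, TargetDisk, ECDestination, DiskId, and VolumeEcShardsCopyRequest mirror the changelog; the struct shapes and wiring here are illustrative assumptions, not the actual SeaweedFS definitions.

package main

import "fmt"

// Assumed shapes, mirroring the names used in the changelog above.
type DestinationPlan struct {
	TargetNode string
	TargetDisk uint32 // specific disk chosen by ActiveTopology during planning
}

type ECDestination struct {
	Node   string
	DiskId uint32 // carried from the plan into the maintenance task
}

type VolumeEcShardsCopyRequest struct {
	VolumeId uint32
	ShardIds []uint32
	DiskId   uint32 // volume server stores shards under store.Locations[DiskId]
}

func main() {
	// Planning -> task creation -> task execution, as described above.
	plan := DestinationPlan{TargetNode: "volume1:8080", TargetDisk: 2}
	dest := ECDestination{Node: plan.TargetNode, DiskId: plan.TargetDisk}
	req := VolumeEcShardsCopyRequest{VolumeId: 7, ShardIds: []uint32{0, 1, 2}, DiskId: dest.DiskId}
	fmt.Printf("copy shards %v of volume %d to disk %d on %s\n", req.ShardIds, req.VolumeId, req.DiskId, dest.Node)
}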
package app

import (
	"fmt"

	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)
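
// EcVolumeDetails renders the detail page for one erasure-coded volume.
// The dash.EcVolumeDetailsData fields this template relies on are: VolumeID,
// Collection, IsComplete, TotalShards, MissingShards, DataCenters, Servers,
// LastUpdated, SortBy, SortOrder, and Shards (each shard exposing ShardID,
// Server, DataCenter, Rack, DiskType, and Size). The "/14 shards" figures
// below reflect SeaweedFS's default EC scheme of 10 data + 4 parity shards.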
templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
	<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
		<div>
			<h1 class="h2">
				<i class="fas fa-th-large me-2"></i>EC Volume Details
			</h1>
			<nav aria-label="breadcrumb">
				<ol class="breadcrumb">
					<li class="breadcrumb-item"><a href="/admin" class="text-decoration-none">Dashboard</a></li>
					<li class="breadcrumb-item"><a href="/cluster/ec-shards" class="text-decoration-none">EC Volumes</a></li>
					<li class="breadcrumb-item active" aria-current="page">Volume {fmt.Sprintf("%d", data.VolumeID)}</li>
				</ol>
			</nav>
		</div>
		<div class="btn-toolbar mb-2 mb-md-0">
			<div class="btn-group me-2">
				<button type="button" class="btn btn-sm btn-outline-secondary" onclick="history.back()">
					<i class="fas fa-arrow-left me-1"></i>Back
				</button>
				<button type="button" class="btn btn-sm btn-outline-primary" onclick="window.location.reload()">
					<i class="fas fa-refresh me-1"></i>Refresh
				</button>
			</div>
		</div>
	</div>

	<!-- EC Volume Summary -->
	<div class="row mb-4">
		<div class="col-md-6">
			<div class="card">
				<div class="card-header">
					<h5 class="card-title mb-0">
						<i class="fas fa-info-circle me-2"></i>Volume Information
					</h5>
				</div>
				<div class="card-body">
					<table class="table table-borderless">
						<tr>
							<td><strong>Volume ID:</strong></td>
							<td>{fmt.Sprintf("%d", data.VolumeID)}</td>
						</tr>
						<tr>
							<td><strong>Collection:</strong></td>
							<td>
								if data.Collection != "" {
									<span class="badge bg-info">{data.Collection}</span>
								} else {
									<span class="text-muted">default</span>
								}
							</td>
						</tr>
						<tr>
							<td><strong>Status:</strong></td>
							<td>
								if data.IsComplete {
									<span class="badge bg-success">
										<i class="fas fa-check me-1"></i>Complete ({data.TotalShards}/14 shards)
									</span>
								} else {
									<span class="badge bg-warning">
										<i class="fas fa-exclamation-triangle me-1"></i>Incomplete ({data.TotalShards}/14 shards)
									</span>
								}
							</td>
						</tr>
						if !data.IsComplete {
							<tr>
								<td><strong>Missing Shards:</strong></td>
								<td>
									for i, shardID := range data.MissingShards {
										if i > 0 {
											<span>, </span>
										}
										<span class="badge bg-danger">{fmt.Sprintf("%02d", shardID)}</span>
									}
								</td>
							</tr>
						}
						<tr>
							<td><strong>Data Centers:</strong></td>
							<td>
								for i, dc := range data.DataCenters {
									if i > 0 {
										<span>, </span>
									}
									<span class="badge bg-primary">{dc}</span>
								}
							</td>
						</tr>
						<tr>
							<td><strong>Servers:</strong></td>
							<td>
								<span class="text-muted">{fmt.Sprintf("%d servers", len(data.Servers))}</span>
							</td>
						</tr>
						<tr>
							<td><strong>Last Updated:</strong></td>
							<td>
								<span class="text-muted">{data.LastUpdated.Format("2006-01-02 15:04:05")}</span>
							</td>
						</tr>
					</table>
				</div>
			</div>
		</div>

		<div class="col-md-6">
			<div class="card">
				<div class="card-header">
					<h5 class="card-title mb-0">
						<i class="fas fa-chart-pie me-2"></i>Shard Distribution
					</h5>
				</div>
				<div class="card-body">
					<div class="row text-center">
						<div class="col-4">
							<div class="border rounded p-3">
								<h3 class="text-primary mb-1">{fmt.Sprintf("%d", data.TotalShards)}</h3>
								<small class="text-muted">Total Shards</small>
							</div>
						</div>
						<div class="col-4">
							<div class="border rounded p-3">
								<h3 class="text-success mb-1">{fmt.Sprintf("%d", len(data.DataCenters))}</h3>
								<small class="text-muted">Data Centers</small>
							</div>
						</div>
						<div class="col-4">
							<div class="border rounded p-3">
								<h3 class="text-info mb-1">{fmt.Sprintf("%d", len(data.Servers))}</h3>
								<small class="text-muted">Servers</small>
							</div>
						</div>
					</div>

					<!-- Shard Distribution Visualization -->
					<div class="mt-3">
						<h6>Present Shards:</h6>
						<div class="d-flex flex-wrap gap-1">
							for _, shard := range data.Shards {
								<span class="badge bg-success me-1 mb-1">{fmt.Sprintf("%02d", shard.ShardID)}</span>
							}
						</div>
						if len(data.MissingShards) > 0 {
							<h6 class="mt-2">Missing Shards:</h6>
							<div class="d-flex flex-wrap gap-1">
								for _, shardID := range data.MissingShards {
									<span class="badge bg-secondary me-1 mb-1">{fmt.Sprintf("%02d", shardID)}</span>
								}
							</div>
						}
					</div>
				</div>
			</div>
		</div>
	</div>

	<!-- Shard Details Table -->
	<div class="card">
		<div class="card-header">
			<h5 class="card-title mb-0">
				<i class="fas fa-list me-2"></i>Shard Details
			</h5>
		</div>
		<div class="card-body">
			if len(data.Shards) > 0 {
				<div class="table-responsive">
					<table class="table table-striped table-hover">
						<thead>
							<tr>
								<th>
									<a href="#" onclick="sortBy('shard_id')" class="text-dark text-decoration-none">
										Shard ID
										if data.SortBy == "shard_id" {
											if data.SortOrder == "asc" {
												<i class="fas fa-sort-up ms-1"></i>
											} else {
												<i class="fas fa-sort-down ms-1"></i>
											}
										} else {
											<i class="fas fa-sort ms-1 text-muted"></i>
										}
									</a>
								</th>
								<th>
									<a href="#" onclick="sortBy('server')" class="text-dark text-decoration-none">
										Server
										if data.SortBy == "server" {
											if data.SortOrder == "asc" {
												<i class="fas fa-sort-up ms-1"></i>
											} else {
												<i class="fas fa-sort-down ms-1"></i>
											}
										} else {
											<i class="fas fa-sort ms-1 text-muted"></i>
										}
									</a>
								</th>
								<th>
									<a href="#" onclick="sortBy('data_center')" class="text-dark text-decoration-none">
										Data Center
										if data.SortBy == "data_center" {
											if data.SortOrder == "asc" {
												<i class="fas fa-sort-up ms-1"></i>
											} else {
												<i class="fas fa-sort-down ms-1"></i>
											}
										} else {
											<i class="fas fa-sort ms-1 text-muted"></i>
										}
									</a>
								</th>
								<th>
									<a href="#" onclick="sortBy('rack')" class="text-dark text-decoration-none">
										Rack
										if data.SortBy == "rack" {
											if data.SortOrder == "asc" {
												<i class="fas fa-sort-up ms-1"></i>
											} else {
												<i class="fas fa-sort-down ms-1"></i>
											}
										} else {
											<i class="fas fa-sort ms-1 text-muted"></i>
										}
									</a>
								</th>
								<th class="text-dark">Disk Type</th>
								<th class="text-dark">Shard Size</th>
								<th class="text-dark">Actions</th>
							</tr>
						</thead>
						<tbody>
							for _, shard := range data.Shards {
								<tr>
									<td>
										<span class="badge bg-primary">{fmt.Sprintf("%02d", shard.ShardID)}</span>
									</td>
									<td>
										<a href={ templ.URL("/cluster/volume-servers/" + shard.Server) } class="text-primary text-decoration-none">
											<code class="small">{shard.Server}</code>
										</a>
									</td>
									<td>
										<span class="badge bg-primary text-white">{shard.DataCenter}</span>
									</td>
									<td>
										<span class="badge bg-secondary text-white">{shard.Rack}</span>
									</td>
									<td>
										<span class="text-dark">{shard.DiskType}</span>
									</td>
									<td>
										<span class="text-success">{bytesToHumanReadableUint64(shard.Size)}</span>
									</td>
									<td>
										<a href={ templ.SafeURL(fmt.Sprintf("http://%s/ui/index.html", shard.Server)) } target="_blank" class="btn btn-sm btn-primary">
											<i class="fas fa-external-link-alt me-1"></i>Volume Server
										</a>
									</td>
								</tr>
							}
						</tbody>
					</table>
				</div>
			} else {
				<div class="text-center py-4">
					<i class="fas fa-exclamation-triangle fa-3x text-warning mb-3"></i>
					<h5>No EC shards found</h5>
					<p class="text-muted">This volume may not be EC encoded yet.</p>
				</div>
			}
		</div>
	</div>
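
	<!-- The sortBy() helper below reloads the page with sort_by/sort_order
	     query parameters; the handler is expected to echo them back through
	     data.SortBy and data.SortOrder so the header icons above reflect the
	     active sort state. -->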
	<script>
		// Sorting functionality
		function sortBy(field) {
			const currentSort = new URLSearchParams(window.location.search).get('sort_by');
			const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';

			let newOrder = 'asc';
			if (currentSort === field && currentOrder === 'asc') {
				newOrder = 'desc';
			}

			const url = new URL(window.location);
			url.searchParams.set('sort_by', field);
			url.searchParams.set('sort_order', newOrder);
			window.location.href = url.toString();
		}
	</script>
}
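
// Note: .templ files can mix plain Go declarations with templ components; the
// helper below is compiled into the generated Go file and is callable from the
// Shard Size column above.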

// Helper function to convert bytes to human readable format (uint64 version)
func bytesToHumanReadableUint64(bytes uint64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%dB", bytes)
	}
	div, exp := uint64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
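
For reference, a test-style sketch of the helper's behavior; the expected strings follow from the 1024-based loop above, and the wrapper (a hypothetical _test.go file in package app, importing "testing") is illustrative rather than part of this file.

func TestBytesToHumanReadableUint64(t *testing.T) {
	cases := map[uint64]string{
		500:      "500B",   // below 1024: raw byte count
		1536:     "1.5KB",  // 1536 / 1024 = 1.5
		10 << 20: "10.0MB", // 10 MiB
		1 << 30:  "1.0GB",  // 1 GiB
	}
	for in, want := range cases {
		if got := bytesToHumanReadableUint64(in); got != want {
			t.Errorf("bytesToHumanReadableUint64(%d) = %q, want %q", in, got, want)
		}
	}
}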