fix: Remove invalid serviceName from s3-deployments template.
Remove `deployments.spec.serviceName` from the s3 deployment template.

`serviceName` is not a valid field on a Deployment spec and causes issues when deploying the chart.

This is the full output for `kubectl explain deployments.spec`
```
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds      <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused       <boolean>
     Indicates that the deployment is paused.

   progressDeadlineSeconds      <integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to process
     failed deployments and a condition with a ProgressDeadlineExceeded reason
     will be surfaced in the deployment status. Note that progress will not be
     estimated during the time a deployment is paused. Defaults to 600s.

   replicas     <integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit <integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. Defaults to
     10.

   selector     <Object> -required-
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment. It must match the pod
     template's labels.

   strategy     <Object>
     The deployment strategy to use to replace existing pods with new ones.

   template     <Object> -required-
     Template describes the pods that will be created.
```
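
Since `serviceName` does not appear among the valid `DeploymentSpec` fields above, the field simply has to be dropped from the template. A minimal sketch of what the corrected s3 Deployment could look like (the name, labels, and image here are illustrative assumptions, not the chart's actual s3-deployment.yaml):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seaweedfs-s3              # hypothetical name, for illustration only
spec:
  # serviceName is only valid on a StatefulSet spec and must not appear here
  replicas: 1
  selector:
    matchLabels:
      app: seaweedfs
      component: s3
  template:
    metadata:
      labels:
        app: seaweedfs
        component: s3
    spec:
      containers:
        - name: seaweedfs-s3
          image: chrislusf/seaweedfs   # assumed image; check values.yaml
          args: ["s3"]
```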

SEAWEEDFS - helm chart (2.x)

info:

  • master/filer/volume are stateful sets with anti-affinity on the hostname, so your deployment will be spread out and HA.
  • the chart uses memsql (MySQL-compatible) as the filer backend to enable HA (multiple filer instances) plus the backup/HA that memsql provides.
  • the MySQL user/password are created in a k8s secret (secret-seaweedfs-db.yaml) and injected into the filer via environment variables; a sketch of such a secret follows this list.
  • cert config exists and can be enabled, but it has not been tested.
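
a hedged sketch of what such a secret could look like (key names and values here are illustrative assumptions; the chart's secret-seaweedfs-db.yaml generates its own):
```
apiVersion: v1
kind: Secret
metadata:
  name: secret-seaweedfs-db   # assumed name, taken from the template file name
type: Opaque
stringData:
  user: seaweedfs             # hypothetical credentials, for illustration only
  password: change-me
```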

prerequisites

kubernetes nodes have labels which help define which node (host) will run which pod.

s3/filer/master needs the label sw-backend=true

volume needs the label sw-volume=true

to label a node to be able to run all pod types in k8s:

kubectl label node YOUR_NODE_NAME sw-volume=true,sw-backend=true
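
these labels are typically consumed as nodeSelector entries; a rough sketch of how that mapping could look in values.yaml (the exact keys and nesting here are assumptions, so check the chart's values.yaml):
```
master:
  nodeSelector: |
    sw-backend: "true"
filer:
  nodeSelector: |
    sw-backend: "true"
s3:
  nodeSelector: |
    sw-backend: "true"
volume:
  nodeSelector: |
    sw-volume: "true"
```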

on a production k8s deployment you will want each pod to run on a different host, especially the volume servers and the masters. currently all pods (master/volume/filer) have an anti-affinity rule that disallows running multiple pods of the same type on the same host. if you still want to run multiple pods of the same type (master/volume/filer) on the same host, set/update the corresponding affinity rule in values.yaml to an empty one:

affinity: ""

PVC - storage class

the volume stateful set supports K8S PVCs; the current example uses the simple local-path-provisioner from Rancher (included with k3d / k3s): https://github.com/rancher/local-path-provisioner

you can use ANY storage class you like; just set the correct storage class for your deployment.
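
a hedged sketch of pointing the volume server's data at a local-path-backed PVC in values.yaml (the key names under volume are assumptions; verify against the chart's values.yaml):
```
volume:
  data:
    type: "persistentVolumeClaim"   # assumed keys/values, for illustration
    size: "100Gi"
    storageClass: "local-path"      # Rancher local-path-provisioner
```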

current instances config (AIO):

1 instance for each type (master/filer+s3/volume)

you can update the replica count for each node type in values.yaml; you will also need to add more nodes with the corresponding labels.
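
for example, a sketch of raising the replica counts (top-level keys assumed from the chart layout; verify against values.yaml):
```
master:
  replicas: 3
volume:
  replicas: 2
filer:
  replicas: 2
```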

most of the configuration is available through values.yaml