
Compare commits


No commits in common. "2158d4fe4d613fcb17ca4f59ea2ebd4e5cd97bbb" and "1b1ab331f60c7ee4bb127d237620e0152606187c" have entirely different histories.

3 changed files with 3 additions and 8 deletions

README.md

@@ -146,7 +146,7 @@ Faster and Cheaper than direct cloud storage!
 * [WebDAV] accesses as a mapped drive on Mac and Windows, or from mobile devices.
 * [AES256-GCM Encrypted Storage][FilerDataEncryption] safely stores the encrypted data.
 * [Super Large Files][SuperLargeFiles] stores large or super large files in tens of TB.
-* [Cloud Drive][CloudDrive] mount cloud data to local cluster for fast read and write with asynchronous write back.
+* [Cloud Data Accelerator][RemoteStorage] transparently read and write existing cloud data at local speed with content cache, metadata cache, and asynchronous write back.
 ## Kubernetes ##
 * [Kubernetes CSI Driver][SeaweedFsCsiDriver] A Container Storage Interface (CSI) Driver. [![Docker Pulls](https://img.shields.io/docker/pulls/chrislusf/seaweedfs-csi-driver.svg?maxAge=4800)](https://hub.docker.com/r/chrislusf/seaweedfs-csi-driver/)
@@ -169,7 +169,7 @@ Faster and Cheaper than direct cloud storage!
 [ActiveActiveAsyncReplication]: https://github.com/chrislusf/seaweedfs/wiki/Filer-Active-Active-cross-cluster-continuous-synchronization
 [FilerStoreReplication]: https://github.com/chrislusf/seaweedfs/wiki/Filer-Store-Replication
 [KeyLargeValueStore]: https://github.com/chrislusf/seaweedfs/wiki/Filer-as-a-Key-Large-Value-Store
-[CloudDrive]: https://github.com/chrislusf/seaweedfs/wiki/Cloud-Drive-Architecture
+[RemoteStorage]: https://github.com/chrislusf/seaweedfs/wiki/Cloud-Cache-Architecture
 [Back to TOC](#table-of-contents)
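
Both names above describe the same feature: mounting an existing cloud bucket into the local cluster. As a rough sketch of how such a mount is set up in weed shell, assuming an S3 remote nicknamed cloud1 and a bucket named mybucket (both placeholders; the exact flags follow the wiki pages linked above and may differ by version):

    remote.configure -name=cloud1 -type=s3 -s3.access_key=xxx -s3.secret_key=yyy   # register the remote storage (hypothetical credentials)
    remote.mount -dir=/buckets/mybucket -remote=cloud1/mybucket                    # map the bucket onto a local filer path

Once mounted, the remote.cache and remote.uncache commands changed below control which file content is kept locally.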

weed/shell/command_remote_cache.go

@@ -32,14 +32,11 @@ func (c *commandRemoteCache) Help() string {
 remote.cache -dir=/xxx
 remote.cache -dir=/xxx/some/sub/dir
 remote.cache -dir=/xxx/some/sub/dir -include=*.pdf
-remote.cache -dir=/xxx/some/sub/dir -exclude=*.txt
-remote.cache -maxSize=1024000 # cache files smaller than about 1MB
-remote.cache -maxAge=3600 # cache files less than 1 hour old
 This is designed to run regularly, so you can add it to a cron job.
 If a file is already synchronized with the remote copy, the file will be skipped to avoid unnecessary copying.
-The actual data copying goes through volume servers in parallel.
+The actual data copying goes through volume servers.
 `
}
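
The help text above notes that remote.cache is designed to run regularly. A minimal sketch of a crontab entry that does this, assuming weed shell is on the PATH and can reach the default master, with /buckets/mybucket as a placeholder mount path:

    # warm the local cache for one mounted directory at the top of every hour
    0 * * * * echo "remote.cache -dir=/buckets/mybucket" | weed shell

Because already-synchronized files are skipped, rerunning the job is cheap.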

weed/shell/command_remote_uncache.go

@@ -33,8 +33,6 @@ func (c *commandRemoteUncache) Help() string {
 remote.uncache -dir=/xxx/some/sub/dir
 remote.uncache -dir=/xxx/some/sub/dir -include=*.pdf
 remote.uncache -dir=/xxx/some/sub/dir -exclude=*.txt
-remote.uncache -minSize=1024000 # uncache files larger than about 1MB
-remote.uncache -minAge=3600 # uncache files older than 1 hour
 `
}
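
For the reverse direction, a matching sketch (same placeholder path and assumptions as the cron example above) that frees local disk by dropping cached content for one file pattern; the content is fetched from the remote again on the next read:

    echo "remote.uncache -dir=/buckets/mybucket/reports -include=*.pdf" | weed shell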