
filer.sync: fix replicating partially updated file

Run two servers with volumes and filers:
server -dir=Server1alpha -master.port=11000 -filer -filer.port=11001 -volume.port=11002
server -dir=Server1sigma -master.port=11006 -filer -filer.port=11007 -volume.port=11008

Run Active-Passive filer.sync:
filer.sync -a localhost:11007 -b localhost:11001 -isActivePassive

Upload a file to port 11007:
curl -F file=@/Desktop/9.xml "http://localhost:11007/testFacebook/"

If we now request the file from both servers, the contents match, even if we add data to the file and upload it again:
curl "http://localhost:11007/testFacebook/9.xml"
EQUALS
curl "http://localhost:11001/testFacebook/9.xml"

However, if we change data that already exists in the file (for example, shorten the first line), then the copy on the second server will no longer be valid and will not be equivalent to the file on the first server.

[Screenshot 2022-02-07 at 14:21:11]

This problem occurs at line 202 of filer_sink.go. In particular, it is caused by incorrect matching of chunk names in the DoMinusChunks function: the file ids of deletedChunks never match those of existingEntry.Chunks, because the deleted chunks come from the other server and use that server's addressing (names), which differs from the addressing on the server where the file is being overwritten.

As a result, the deleted chunks are never actually removed on the server to which the file is replicated.
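
To make the addressing mismatch concrete, here is a minimal sketch with made-up file ids (the volume ids and keys are hypothetical): a chunk replicated to the passive filer keeps the source cluster's address only in its SourceFileId field, while its FileId is a local address, so subtracting deletedChunks (which carry source addresses) by file id removes nothing.

package main

import (
	"fmt"

	"github.com/chrislusf/seaweedfs/weed/filer"
	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
)

func main() {
	// A chunk that was replicated to the passive filer: FileId is the
	// address assigned locally, SourceFileId is the address of the same
	// data on the active (source) filer. Ids are hypothetical.
	replicated := &filer_pb.FileChunk{
		FileId:       "3,01637037d6",
		SourceFileId: "7,01637037d6",
		Offset:       0,
		Size:         1024,
	}

	// The source reports the overwritten chunk under its own file id.
	deletedChunks := []*filer_pb.FileChunk{
		{FileId: "7,01637037d6", Size: 1024},
	}

	// DoMinusChunks compares GetFileIdString() values, i.e. the local
	// "3,01637037d6" against the source "7,01637037d6", so nothing is
	// subtracted and the stale chunk stays in existingEntry.Chunks.
	kept := filer.DoMinusChunks([]*filer_pb.FileChunk{replicated}, deletedChunks)
	fmt.Println(len(kept)) // 1 - the stale chunk survives on the replica
}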
chrislu 2022-02-07 03:46:28 -08:00
parent 433fde4b18
commit 9405eaefdb
2 changed files with 16 additions and 1 deletion


@@ -101,6 +101,21 @@ func DoMinusChunks(as, bs []*filer_pb.FileChunk) (delta []*filer_pb.FileChunk) {
	return
}

func DoMinusChunksBySourceFileId(as, bs []*filer_pb.FileChunk) (delta []*filer_pb.FileChunk) {
	fileIds := make(map[string]bool)
	for _, interval := range bs {
		fileIds[interval.GetFileIdString()] = true
	}
	for _, chunk := range as {
		if _, found := fileIds[chunk.GetSourceFileId()]; !found {
			delta = append(delta, chunk)
		}
	}
	return
}

type ChunkView struct {
	FileId string
	Offset int64


@@ -199,7 +199,7 @@ func (fs *FilerSink) UpdateEntry(key string, oldEntry *filer_pb.Entry, newParent
	// delete the chunks that are deleted from the source
	if deleteIncludeChunks {
		// remove the deleted chunks. Actual data deletion happens in filer UpdateEntry FindUnusedFileChunks
		existingEntry.Chunks = filer.DoMinusChunks(existingEntry.Chunks, deletedChunks)
		existingEntry.Chunks = filer.DoMinusChunksBySourceFileId(existingEntry.Chunks, deletedChunks)
	}

	// replicate the chunks that are new in the source
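
For completeness, a small usage sketch of the new helper (again with hypothetical file ids): because the subtraction is keyed on each existing chunk's SourceFileId, the chunk whose source counterpart was overwritten is dropped, while unrelated chunks survive.

package main

import (
	"fmt"

	"github.com/chrislusf/seaweedfs/weed/filer"
	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
)

func main() {
	// Chunks on the passive filer; ids are hypothetical.
	existing := []*filer_pb.FileChunk{
		{FileId: "3,01637037d6", SourceFileId: "7,01637037d6", Offset: 0, Size: 1024},
		{FileId: "3,02b2f47a1c", SourceFileId: "7,02b2f47a1c", Offset: 1024, Size: 512},
	}

	// The source says its first chunk was replaced.
	deleted := []*filer_pb.FileChunk{
		{FileId: "7,01637037d6", Size: 1024},
	}

	remaining := filer.DoMinusChunksBySourceFileId(existing, deleted)
	fmt.Println(len(remaining))                 // 1
	fmt.Println(remaining[0].GetSourceFileId()) // 7,02b2f47a1c
}

The helper mirrors DoMinusChunks: a map keyed by the deleted file ids acts as a set, so the subtraction stays linear in the number of chunks.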