From 2371770fe8f3ac1e483e0bd3eeb1469fe37be01f Mon Sep 17 00:00:00 2001
From: Chris Lu
Date: Sat, 8 Dec 2018 00:50:58 -0800
Subject: [PATCH] Adding Hadoop Compatible File System

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index c0ea1cecf..c4532ac65 100644
--- a/README.md
+++ b/README.md
@@ -81,12 +81,14 @@ SeaweedFS can work very well with just the object store. [[Filer]] is added late
 * [filer server][Filer] provide "normal" directories and files via http.
 * [mount filer][Mount] to read and write files directly as a local directory via FUSE.
 * [Amazon S3 compatible API][AmazonS3API] to access files with S3 tooling.
+* [Hadoop Compatible File System][Hadoop] to access files from Hadoop/Spark/Flink/etc jobs.
 * [Async Backup To Cloud][BackupToCloud] can enjoy extreme fast local access and backup to Amazon S3, Google Cloud Storage, Azure, BackBlaze.

 [Filer]: https://github.com/chrislusf/seaweedfs/wiki/Directories-and-Files
 [Mount]: https://github.com/chrislusf/seaweedfs/wiki/Mount
 [AmazonS3API]: https://github.com/chrislusf/seaweedfs/wiki/Amazon-S3-API
 [BackupToCloud]: https://github.com/chrislusf/seaweedfs/wiki/Backup-to-Cloud
+[Hadoop]: https://github.com/chrislusf/seaweedfs/wiki/Hadoop-Compatible-File-System

 ## Example Usage
 By default, the master node runs on port 9333, and the volume nodes run on port 8080.
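
The new README bullet points Hadoop/Spark/Flink jobs at the wiki page linked above. As a minimal sketch of what wiring that up looks like, assuming the SeaweedFS HCFS client is registered under the `seaweedfs://` scheme with implementation class `seaweed.hdfs.SeaweedFileSystem` (both names are assumptions taken from the linked wiki page, not from this patch), a `core-site.xml` fragment might be:

```xml
<!-- Hypothetical core-site.xml fragment: registers a Hadoop-compatible
     FileSystem for the seaweedfs:// scheme. The implementation class name
     is an assumption; check the SeaweedFS wiki page for the exact value. -->
<configuration>
  <property>
    <name>fs.seaweedfs.impl</name>
    <value>seaweed.hdfs.SeaweedFileSystem</value>
  </property>
</configuration>
```

With the client jar on the Hadoop classpath, jobs could then address files as, for example, `seaweedfs://localhost:8888/path/to/file` (8888 being the filer's default port; the exact URI form depends on the SeaweedFS release).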