Comparison of distributed file systems

In computing, a distributed file system (DFS) or network file system is any file system that allows access from multiple hosts to files shared via a computer network. This makes it possible for multiple users on multiple machines to share files and storage resources.

Distributed file systems differ in their performance, mutability of content, handling of concurrent writes, handling of permanent or temporary loss of nodes or storage, and their policy of storing content.
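One common building block behind the storage policies compared below is consistent hashing, which several object stores use to decide which node holds a given file while keeping data movement small when nodes join or leave. The following is a minimal, hypothetical sketch (the `Ring` class and node names are illustrative, not taken from any system in this article):

```python
import bisect
import hashlib


class Ring:
    """Toy consistent-hash ring: maps object names to storage nodes.

    Each node is placed at several points ("virtual nodes") on a ring;
    an object is stored on the first node clockwise of its hash.
    """

    def __init__(self, nodes, vnodes=100):
        self._points = []  # sorted (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                h = self._hash(f"{node}#{i}")
                bisect.insort(self._points, (h, node))

    @staticmethod
    def _hash(key):
        # First 8 bytes of SHA-256 as an integer position on the ring.
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def node_for(self, name):
        h = self._hash(name)
        i = bisect.bisect(self._points, (h, ""))
        return self._points[i % len(self._points)][1]


ring = Ring(["node-a", "node-b", "node-c"])
print(ring.node_for("/data/file1"))
```

The key property: removing a node only remaps the objects that were stored on it; every other object keeps its placement, which is why this technique suits systems that must tolerate node loss.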

Locally managed

FOSS

| Name | Written in | License | Access API | High availability | Shards | Initial release year | Memory requirements (GB) |
|---|---|---|---|---|---|---|---|
| Alluxio (Virtual Distributed File System) | Java | Apache License 2.0 | HDFS, FUSE, HTTP/REST, S3 | hot standby | No | 2013 | |
| Ceph | C++ | LGPL | librados (C, C++, Python, Ruby), S3, Swift, FUSE | Yes | Yes | 2010 | 1 per TB of storage |
| GlusterFS | C | GPLv3 | libglusterfs, FUSE, NFS, SMB, Swift, libgfapi | Yes | No | 2005 | |
| MooseFS | C | GPLv2 | POSIX, FUSE | master | No | 2008 | |
| Quantcast File System | C | Apache License 2.0 | C++ client, FUSE (C++ server: MetaServer and ChunkServer are both in C++) | master | No | 2012 | |
| Kertish-DFS | Go | GPLv3 | HTTP (REST), CLI, C# client, Go client | Yes | | 2020 | |
| LizardFS | C++ | GPLv3 | POSIX, FUSE, NFS-Ganesha, Ceph FSAL (via libcephfs) | master | No | 2013 | |
| Lustre | C | GPLv2 | POSIX, NFS-Ganesha, NFS, SMB | Yes | Yes | 2003 | |
| MinIO | Go | Apache License 2.0 | AWS S3 API | Yes | Yes | 2014 | |
| OpenAFS | C | IBM Public License | Virtual file system, Installable File System | | | 2000[1] | |
| OpenIO[2] | C | AGPLv3 / LGPLv3 | Native (Python, C, Java), HTTP/REST, S3, Swift, FUSE (POSIX, NFS, SMB, FTP) | Yes | | 2015 | 0.5 |
| SeaweedFS | Go, Java | Apache License 2.0 | HTTP (REST), POSIX, FUSE, S3, HDFS | requires CockroachDB, undocumented config | | 2015 | |
| Tahoe-LAFS | Python | GNU GPL[3] | HTTP (browser or CLI), SFTP, FTP, FUSE via SSHFS, pyfilesystem | | | 2007 | |
| HDFS | Java | Apache License 2.0 | Java and C client, HTTP | transparent master failover | No | 2005 | |
| XtreemFS | Java, C++ | BSD License | libxtreemfs (Java, C++), FUSE | | | 2009 | |
| Ori[4] | C, C++ | MIT | libori, FUSE | | | 2012 | |

Proprietary

| Name | Written in | License | Access API |
|---|---|---|---|
| BeeGFS | C / C++ | FRAUNHOFER FS (FhGFS) EULA,[5] GPLv2 client | POSIX |
| ObjectiveFS[6] | C | Proprietary | POSIX, FUSE |
| Spectrum Scale (GPFS) | C, C++ | Proprietary | POSIX, NFS, SMB, Swift |
| MapR-FS | C, C++ | Proprietary | POSIX, NFS, FUSE, S3 |
| PanFS | C, C++ | Proprietary | DirectFlow, POSIX, NFS, SMB/CIFS, HTTP, CLI |
| Infinit[7] | C++ | Proprietary (to be open sourced)[8] | FUSE, Installable File System, NFS/SMB, POSIX, CLI, SDK (libinfinit) |
| Isilon OneFS | C/C++ | Proprietary | POSIX, NFS, SMB/CIFS, HDFS, HTTP, FTP, SWIFT Object, CLI, REST API |
| Scality | C | Proprietary | FUSE, NFS, REST, AWS S3 |
| Quobyte | Java, C++ | Proprietary | POSIX, FUSE, NFS, SMB/CIFS, HDFS, AWS S3, TensorFlow plugin, CLI, REST API |

Remote access

| Name | Run by | Access API |
|---|---|---|
| Amazon S3 | Amazon.com | HTTP (REST/SOAP) |
| Google Cloud Storage | Google | HTTP (REST) |
| SWIFT (part of OpenStack) | Rackspace, Hewlett-Packard, others | HTTP (REST) |
| Microsoft Azure | Microsoft | HTTP (REST) |
| IBM Cloud Object Storage | IBM (formerly Cleversafe)[9] | HTTP (REST) |
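All of the services above expose an HTTP (REST) interface, and Amazon S3 (along with the many S3-compatible systems in the earlier tables, such as MinIO and Ceph) authenticates requests with AWS Signature Version 4, whose signing key is derived by an HMAC-SHA256 chain over the date, region, and service. A minimal sketch of that derivation in Python using only the standard library (the credentials below are made-up examples, not real keys):

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive an AWS Signature Version 4 signing key.

    The key is an HMAC-SHA256 chain: date (YYYYMMDD) -> region ->
    service -> the fixed string "aws4_request".
    """
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")


# Made-up credentials, for illustration only.
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "s3")
print(key.hex())
```

Because the key depends on date, region, and service, a leaked signing key is far less damaging than a leaked long-term secret, which is part of why S3-compatible APIs have become a de facto access standard across these systems.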

Comparison

Some researchers have published a functional and experimental analysis of several distributed file systems, including HDFS, Ceph, Gluster, Lustre, and an old (1.6.x) version of MooseFS.[10] That study is over four years old, however, and much of its information may be outdated; for example, MooseFS now has a stable 2.0 and a beta 3.0 version, and the high availability for the Metadata Server introduced in 2.0 is not covered.

The cloud-based remote distributed storage services from major vendors differ in their APIs and in their consistency models.[11]

References

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.