Original author(s): Gluster
Developer(s): Red Hat, Inc.
Stable release: 3.11.1[1] / June 28, 2017
Preview release: 3.8.9[2] / February 16, 2017
Operating system: Linux, OS X, FreeBSD, NetBSD, OpenSolaris
Type: Distributed file system
License: GNU General Public License v3[3]

GlusterFS is a scale-out network-attached storage file system. It has found applications in cloud computing, streaming media services, and content delivery networks. GlusterFS was originally developed by Gluster, Inc. and then by Red Hat, Inc., after Red Hat acquired Gluster in 2011.[4]

In June 2012, Red Hat Storage Server was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux.[5] In April 2014, Red Hat bought Inktank Storage, the company behind the Ceph distributed file system, and re-branded the GlusterFS-based Red Hat Storage Server as "Red Hat Gluster Storage".[6]


GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 while others are dual-licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.

GlusterFS has a client and a server component. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a volume. The glusterfs client process, which connects to servers with a custom protocol over TCP/IP, InfiniBand, or Sockets Direct Protocol, creates composite virtual volumes from multiple remote servers using stackable translators. By default, files are stored whole, but striping of files across multiple remote volumes is also supported. The final volume may then be mounted by the client host using its own native protocol via the FUSE mechanism, using the NFSv3 protocol via a built-in server translator, or accessed via the gfapi client library. Native-protocol mounts may then be re-exported, e.g. via the kernel NFSv4 server, Samba, or the object-based OpenStack Storage (Swift) protocol using the "UFO" (Unified File and Object) translator.
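The workflow described above can be sketched with the standard gluster CLI; these are illustrative fragments that assume a running cluster, and the hostnames, brick paths, and volume name are placeholders:

```shell
# On the servers: export a local directory from each of two bricks
# as a single distributed volume (server1/server2 are placeholders).
gluster volume create demo-vol server1:/data/brick1 server2:/data/brick1
gluster volume start demo-vol

# On a client: mount via the native FUSE-based protocol...
mount -t glusterfs server1:/demo-vol /mnt/gluster

# ...or via the built-in NFSv3 server translator.
mount -t nfs -o vers=3 server1:/demo-vol /mnt/gluster-nfs
```

With no brick-layout keyword given, the volume defaults to pure distribution: each file is stored whole on exactly one brick.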

Most of the functionality of GlusterFS is implemented as translators, including file-based mirroring and replication, file-based striping, file-based load balancing, volume failover, scheduling and disk caching, storage quotas, and volume snapshots with user serviceability (since GlusterFS version 3.6).
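Several of these translators are selected when a volume is created or tuned afterwards. A hedged sketch using the gluster CLI (volume and brick names are placeholders; the commands assume a running cluster):

```shell
# File-based replication: keep two copies of every file.
gluster volume create rep-vol replica 2 server1:/data/brick1 server2:/data/brick1
gluster volume start rep-vol

# Storage quotas: cap the volume root at 10 GB.
gluster volume quota rep-vol enable
gluster volume quota rep-vol limit-usage / 10GB

# Volume snapshots (GlusterFS 3.6 and later).
gluster snapshot create snap1 rep-vol
```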

The GlusterFS server is intentionally kept simple: it exports an existing directory as-is, leaving it up to client-side translators to structure the store. The clients themselves are stateless, do not communicate with each other, and are expected to have translator configurations consistent with each other. GlusterFS relies on an elastic hashing algorithm, rather than using either a centralized or distributed metadata model. With version 3.1 and later of GlusterFS, volumes can be added, deleted, or migrated dynamically, helping to avoid configuration coherency problems, and allowing GlusterFS to scale up to several petabytes on commodity hardware by avoiding bottlenecks that normally affect more tightly coupled distributed file systems.
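The placement idea behind elastic hashing can be shown with a toy sketch: the file name alone determines which brick holds the file, so no metadata server is consulted on lookup. GlusterFS itself uses its own hash mapped onto per-directory brick ranges stored in extended attributes; md5sum below is only a stand-in to illustrate metadata-free placement:

```shell
# Toy illustration: derive a brick index from the file name alone.
# (Not the actual GlusterFS hash; md5sum is a stand-in.)
brick_count=4
for name in photo.jpg notes.txt backup.tar; do
    h=$(printf '%s' "$name" | md5sum | cut -c1-8)   # first 32 bits of the digest
    bucket=$(( 0x$h % brick_count ))                # map into [0, brick_count)
    echo "$name -> brick $bucket"
done
```

Because every client computes the same function, all clients agree on a file's location without talking to each other or to a coordinator, which is what removes the metadata bottleneck.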

GlusterFS has been used as the foundation for academic research[7][8] and a survey article.[9]

Red Hat markets the software for three markets: "on-premises", public cloud and "private cloud".[10]

References


  1. ^ "Announcing Gluster release 3.11.1". 28 June 2017. 
  2. ^ "glusterfs-3.7.8 released". 10 February 2016. 
  3. ^ "Gluster 3.1: Understanding the GlusterFS License". Gluster Documentation. 
  4. ^ Timothy Prickett Morgan (4 October 2011). "Red Hat snatches storage Gluster file system for $136m". The Register. 
  5. ^ Timothy Prickett Morgan (27 June 2012). "Red Hat Storage Server NAS takes on Lustre, NetApp". The Register. 
  6. ^ "Red Hat Storage. New product names. Same great features." 20 March 2015. Archived from the original on 2 April 2015. 
  7. ^ Noronha, Ranjit; Panda, Dhabaleswar K (9-12 September 2008). IMCa: A High Performance Caching Front-End for GlusterFS on InfiniBand (PDF). 37th International Conference on Parallel Processing (ICPP '08). IEEE. doi:10.1109/ICPP.2008.84. 
  8. ^ Kwidama, Sevickson (2007-2008). Streaming and storing CineGrid data: A study on optimization methods (PDF). University of Amsterdam System and Network Engineering. 
  9. ^ Klaver, Jeroen; van der Jagt, Roel (14 July 2010). Distributed file system on the SURFnet network Report (PDF). University of Amsterdam System and Network Engineering. 
  10. ^ "Red Hat Storage Server". Web site. Red Hat. 

  This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.

