The Global File System

The Global File System (GFS) is a 64-bit shared-disk cluster file system for Linux. GFS cluster nodes physically share the same storage via Fibre Channel or shared SCSI devices. The file system appears local on each node, and GFS synchronizes file access across the cluster. GFS is fully symmetric: all nodes are equal, and there is no central server that could become a bottleneck or single point of failure. GFS uses read and write caching while maintaining full UNIX file system semantics, and it supports journaling, recovery from client failures, and other features.
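
Because the shared device appears as an ordinary local file system, a node joins the cluster simply by mounting it with the "gfs" file system type. The following minimal sketch illustrates this with the mount(2) system call; the device path, mount point, and empty option string are hypothetical placeholders that will differ with your pool configuration and lock protocol.

    /* Minimal sketch: mount a shared GFS volume on one cluster node.
     * The device and mount point below are hypothetical placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mount.h>

    int main(void)
    {
        const char *device = "/dev/pool/gfs01"; /* hypothetical shared pool device */
        const char *target = "/mnt/gfs";        /* hypothetical mount point */

        /* Roughly equivalent to: mount -t gfs /dev/pool/gfs01 /mnt/gfs */
        if (mount(device, target, "gfs", 0, "") != 0) {
            fprintf(stderr, "mount failed: %s\n", strerror(errno));
            return 1;
        }
        printf("GFS volume mounted at %s\n", target);
        return 0;
    }

Once mounted, every node sees the same files and directories, and GFS's locking keeps caches and metadata consistent across the cluster.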

Recent releases

  •  08 Apr 2005 22:39

    Release Notes: Numerous minor bugfixes and code cleanups.

  •  27 Feb 2005 06:36

    Release Notes: This version has been ported to recent Linux kernels. There are many speedups, enhancements, and bugfixes.

  •  18 Jun 2001 00:41

    Release Notes: New kernel patches for Linux 2.4.5, a new hard_panic patch, a new STOMITH method (apc_ms), initialization script updates, and a slew of updates to the man pages. The following bugs have also been addressed: an undefined rscsi_disks symbol, a memexp kernel thread issue, memexpd performance degradation over time, problems with HIMEM systems, pool and passemble fixes, GNBD fixes, user-space LFS enhancements, and gfs_jadd improvements.

  •  22 May 2001 20:08

    Release Notes: Support for Linux 2.4.4, addition of Lock Value Blocks (LVBs) for performance enhancement, updated flock and fcntl support, a rewrite of the Pool tools for enhanced functionality, performance improvements when GFS is used as a local rather than a cluster file system, a fix for an atime bug, improved df performance, and new STOMITH methods. There are incompatibilities in the GFS modules between 4.0.1 and 4.1; please read the release notes for upgrade instructions.

  •  22 Mar 2001 06:16

    Release Notes: The following bugs have been fixed: the sorting method in the directory code that caused stack problems on some architectures has been changed, multiple GFS MemExp file systems can now be mounted, lockups on umount in rare situations have been eliminated, flock now performs correctly under major contention loads, time stamp modifications on truncate() have been fixed, an obscure bug that caused nodes to try to STOMITH themselves has been eliminated, and a reservation bug occasionally encountered when expanding a file system has been fixed.
