General Information

Description: GlusterFS is a free and open source scalable network filesystem that runs on common off-the-shelf hardware.


  • Distro(s): Enterprise Linux 6

These examples all assume that there are:

  • 2 servers: server1 and server2
  • 1 client system
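Name resolution between all of the systems is assumed throughout. If DNS is not available, /etc/hosts entries such as the following work (the IP addresses here are placeholders; substitute your own):

  ## /etc/hosts on every server and client
  192.168.1.11  server1
  192.168.1.12  server2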

Filesystem Setup

  • It is recommended to dedicate a disk to GlusterFS storage on each server.
  • XFS is the preferred file system, as it can handle more files than others (such as EXT4).
  • Gluster recommends the following naming convention:
    • /data/glusterfs/<volume-name>/<brick#>/brick
  • Reasoning:
    • This allows multiple disks (bricks) to be mounted under a consistent Gluster volume directory structure.
    • By creating an additional “/brick” directory under the “<brick#>” directory, the brick will fail to start if the underlying XFS mount is not available, rather than silently writing to the root file system.
  1. Install XFS utilities
    1. yum install xfsprogs
  2. Create the file system with the Gluster-recommended inode size
    1. mkfs.xfs -i size=512 /dev/mapper/vgdata-lvdata1
  3. Create directory structure that data will be stored in
    1. mkdir -p /data/glusterfs/myvol/brick1
  4. /etc/fstab entry for mount point
    1. ## XFS Mount Used for GlusterFS
      /dev/mapper/vgdata-lvdata1 /data/glusterfs/myvol/brick1 xfs  defaults 1 2
  5. Mount data directory
    1. mount -a
  6. Create the top level “brick” directory - all volume data will live under this directory
    1. mkdir /data/glusterfs/myvol/brick1/brick
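Before installing Gluster, it is worth confirming that the XFS mount is active and the brick directory exists (paths match the example above):

  ## Verify the file system type of the mount and the brick directory
  df -hT /data/glusterfs/myvol/brick1
  ls -ld /data/glusterfs/myvol/brick1/brick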


Installation

Installation steps for the GlusterFS servers and clients.

On the Gluster Servers:

  1. Add the gluster repo
    1. wget -P /etc/yum.repos.d <gluster-repo-file-url>
  2. Install glusterfs-server
    1. yum install glusterfs-server
  3. Start gluster daemon
    1. service glusterd start
  4. Ensure gluster daemon is enabled on startup
    1. chkconfig glusterd on
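To confirm the server setup, the daemon status and boot configuration can be checked (standard EL6 commands; output varies by version):

  ## Confirm glusterd is running and enabled at boot
  service glusterd status
  chkconfig --list glusterd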

On the Gluster Clients:

  1. Install Gluster repo
    1. wget -P /etc/yum.repos.d <gluster-repo-file-url>
  2. Install glusterfs and fuse
    1. yum install glusterfs glusterfs-fuse
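The native client mounts volumes via FUSE, so a quick sanity check on the client can save troubleshooting later (a minimal sketch; package names match the install step above):

  ## Confirm the packages and that the fuse module loads
  rpm -q glusterfs glusterfs-fuse
  modprobe fuse && lsmod | grep fuse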


Configuration

Configuration steps to set up the Gluster servers and clients.

  1. Configure Trusted Pool
    1. From server1
      • gluster peer probe server2
      • Note: Once a trusted pool has been created, only existing pool members can probe new servers; a new server cannot probe its way into the pool.
  2. Create a GlusterFS Volume ⇒ How the volume is created determines whether data is distributed, striped, replicated, or a combination.
    • Note: A volume cannot be created at a mount point itself; each brick path must be a subdirectory within the mount point
    • Distributed (default): A plain “gluster volume create” with no layout arguments creates a distributed volume. This has no redundancy; files are placed round-robin across the bricks in the volume. The following alternates between server1 and server2 storage.
      • gluster volume create myvol server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick
    • Replicated: A copy of the data is kept on each brick. (The copy count is set by the replica number, which should equal the # of bricks.) The following keeps a copy on both server1 and server2.
      • gluster volume create myvol replica 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick
    • Distributed Replicated: Files distributed across replicated sets of bricks. (# bricks should be multiple of # of replicas and adjacent servers are the replicated sets) The following round robins between the “replicated sets” server1/server2 and server3/server4. Data will exist on two servers at all times.
      • gluster volume create myvol replica 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick server3:/data/glusterfs/myvol/brick1/brick server4:/data/glusterfs/myvol/brick1/brick
    • More volume types (striped and further combinations) are described in the GlusterFS documentation.
  3. Start the Gluster volume to make it usable by clients
    1. gluster volume start myvol
  4. Mount the volume from the client
    • Temporary mount
      • mount -t glusterfs server1:/myvol /mnt
    • Persistent mount in /etc/fstab
      • vim /etc/fstab
        ## Gluster mounts
        server1:/myvol /data glusterfs defaults,_netdev 0 0
      • Note: “_netdev” tells the init scripts to wait for networking before mounting at boot.
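With the pool, volume, and client mount in place, a quick end-to-end check can be run (names match the examples above):

  ## On a server: confirm the peer and volume state
  gluster peer status
  gluster volume info myvol
  ## On the client: confirm the mount works and is writable
  df -h /data
  touch /data/testfile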


Common Admin Commands

  • View peer status
    • gluster peer status
  • View gluster volume info
    • gluster volume info
  • Add server to the trusted storage pool
    • gluster peer probe <server-name>
  • Remove server from trusted storage pool
    • gluster peer detach <server-name>
  • Add a brick to the volume (add to trusted storage pool first)
    • gluster volume add-brick myvol server5:/data/glusterfs/myvol/brick1/brick
  • Remove a brick from the volume (and then remove from trusted storage pool)
    • gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick start
      gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick status
      gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick commit
    • Gluster will auto-rebalance data between the remaining bricks.
    • If any type of replica or stripe sets exist, bricks must be removed in multiples of the replica or stripe count (see the sketch after this list).
  • Replace a faulty brick
    • gluster volume replace-brick myvol server3:/data/glusterfs/myvol/brick1/brick server5:/data/glusterfs/myvol/brick1/brick commit force
  • Start Volume
    • gluster volume start myvol
  • Stop Volume
    • gluster volume stop myvol
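As an example of the multiple-of-replica rule above, growing the “replica 2” volume from the configuration section means adding bricks in pairs. A sketch, where server5 and server6 are hypothetical new nodes prepared with the same brick layout:

  ## Add the new servers to the trusted storage pool first
  gluster peer probe server5
  gluster peer probe server6
  ## Two bricks added at once = one new replica set
  gluster volume add-brick myvol server5:/data/glusterfs/myvol/brick1/brick server6:/data/glusterfs/myvol/brick1/brick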

Rebalance Data

Data can be rebalanced live and should be done after adding or removing bricks. There are two rebalance options (a combined example appears at the end of this section):

  • Fix Layout: Fixes layout changes (added bricks), so newly added files can be stored on new nodes.
    • gluster volume rebalance myvol fix-layout start
  • Fix Layout and Migrate Data: Fix layout changes and also migrate data to new nodes.
    • gluster volume rebalance myvol start
  • View rebalance status
    • gluster volume rebalance myvol status
  • Allow only clients from a specific network (use a comma separated list for multiple networks)
    • gluster volume set myvol auth.allow 10.1.2.*
    • Note: Gluster servers do not need to be added to this list. The servers will use an auto-generated username/password when they are added to the trusted storage pool.
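Putting these together, a typical sequence after growing a volume might look like the following (volume name matches the examples above; status output varies by version):

  ## Rebalance existing data onto the new bricks and watch progress
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status
  ## Restrict client access to two networks (comma separated list)
  gluster volume set myvol auth.allow 10.1.2.*,10.1.3.*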
