====== Glusterfs ======

**General Information**

Description: GlusterFS is a free and open source scalable network filesystem that runs on common, off-the-shelf hardware.

  * Official Site: [[http://www.gluster.org/]]
  * Gluster Terminology: [[http://gluster.readthedocs.org/en/latest/Quick-Start-Guide/Terminologies/]]
  * Gluster Architecture: [[http://gluster.readthedocs.org/en/latest/Quick-Start-Guide/Architecture/]]

**Checklist**
  * Distro: Enterprise Linux 6

----

===== Servers and Client =====

These examples all assume that there are:

  * 2 servers: server1 and server2
  * 1 client system
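
Peer probes and volume definitions reference the servers by hostname, so every server and client should resolve server1 and server2 consistently, e.g. via /etc/hosts (the addresses below are illustrative):

<code bash>
## /etc/hosts on every server and client (example addresses)
10.1.2.11  server1
10.1.2.12  server2
</code>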

----

===== Data Directory Config =====

==== Some important recommendations ====
  * It is recommended to dedicate a disk to GlusterFS storage on the server.
  * XFS is the preferred file system, as it can handle more files than alternatives such as EXT4.
  * Gluster recommends the following naming convention:
    * <code bash>
/data/glusterfs/<volume-name>/<brick#>/brick
</code>
  * Reasoning:
    * This allows multiple disks (bricks) to be mounted under a Gluster volume and directory structure.
    * By creating an additional "brick" directory under the "<brick#>" directory, the brick will fail to start if the underlying XFS mount is not available.
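
The convention can be sketched with plain shell. This is a minimal sketch: the volume name "myvol", the two-brick count, and the temporary base directory (used in place of /data so the commands run unprivileged) are all made up for illustration.

<code bash>
## Sketch of the recommended layout for a hypothetical volume "myvol"
## with two bricks. In production, BASE would be /data.
BASE=$(mktemp -d)
mkdir -p "$BASE"/glusterfs/myvol/brick1/brick \
         "$BASE"/glusterfs/myvol/brick2/brick
## Each brickN directory is an XFS mount point; the extra "brick"
## subdirectory only exists on the mounted filesystem, so the brick
## fails to start if the mount is missing.
find "$BASE/glusterfs" -type d -name brick | sort
</code>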

==== Servers: Data Dir Config Steps ====

  - Install XFS utilities
    - <code bash>yum install xfsprogs</code>
  - Create the file system with the GlusterFS-recommended inode size
    - <code bash>mkfs.xfs -i size=512 /dev/mapper/vgdata-lvdata1</code>
  - Create the directory structure that data will be stored in
    - <code bash>mkdir -p /data/glusterfs/myvol/brick1</code>
  - Add an /etc/fstab entry for the mount point
    - <code bash>## XFS Mount Used for GlusterFS
/dev/mapper/vgdata-lvdata1 /data/glusterfs/myvol/brick1 xfs  defaults 1 2</code>
  - Mount the data directory
    - <code bash>mount -a</code>
  - Create the top level "brick" directory - all data will go under here
    - <code bash>mkdir /data/glusterfs/myvol/brick1/brick</code>
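
Before handing the filesystem to Gluster, the mount and inode size can be confirmed (same device and mount point as the steps above):

<code bash>
## Confirm the mount and filesystem type
df -hT /data/glusterfs/myvol/brick1

## Confirm the XFS inode size (look for isize=512 in the output)
xfs_info /data/glusterfs/myvol/brick1
</code>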

----

====== Installation ======

Installation steps for the GlusterFS servers and clients.

===== Server Install =====

On the Gluster Servers:
  - Add the gluster repo
    - <code bash>wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo</code>
  - Install glusterfs-server
    - <code bash>yum install glusterfs-server</code>
  - Start the gluster daemon
    - <code bash>service glusterd start</code>
  - Ensure the gluster daemon is enabled on startup
    - <code bash>chkconfig glusterd on</code>

===== Client Install =====

On the Gluster Clients:
  - Add the gluster repo
    - <code bash>wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo</code>
  - Install glusterfs and fuse
    - <code bash>yum install glusterfs glusterfs-fuse</code>

----

====== Configuration ======

Configuration steps to set up gluster servers and clients.

===== Server Config =====

  - Configure the Trusted Pool
    - From server1
      * <code bash>gluster peer probe server2</code>
      * Note: Once a trusted pool has been created, only existing pool members can probe new servers.
      * New servers cannot probe a pool member.
  - Create a GlusterFS Volume => **How the volume is created determines whether data is distributed, striped, replicated, or a combination.**
    * Note: A volume cannot be created at a mount point; it must be a subdirectory within the mount point
    * **Distributed (default)**: A plain "gluster volume create" with no extra arguments creates a distributed volume. This has no redundancy; files are placed round robin between the bricks in the volume. The following alternates between server1 and server2 storage.
      * <code bash>gluster volume create myvol server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick</code>
    * **Replicated**: A copy of the data is kept on each brick. (The replica count determines the number of copies; for a plain replicated volume it should equal the number of bricks.) The following keeps a copy on server1 and server2.
      * <code bash>gluster volume create myvol replica 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick</code>
    * **Distributed Replicated**: Files distributed across replicated sets of bricks. (The number of bricks should be a multiple of the replica count; adjacent bricks in the list form the replica sets.) The following round robins between the replica sets server1/server2 and server3/server4, so data exists on two servers at all times.
      * <code bash>gluster volume create myvol replica 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick server3:/data/glusterfs/myvol/brick1/brick server4:/data/glusterfs/myvol/brick1/brick</code>
    * More types:
      * [[http://gluster.readthedocs.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes|Architecture]]
      * [[http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#formatting-and-mounting-bricks|Volume Types]]
  - Start the Gluster volume to make it usable by clients
    - <code bash>gluster volume start myvol</code>
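
After starting the volume, its health can be checked from any pool member; the commands below are the usual gluster CLI status queries (output varies by setup, so none is shown here):

<code bash>
gluster peer status
gluster volume info myvol
gluster volume status myvol
</code>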

===== Client Config =====

  * Temporary mount
    * <code bash>mount -t glusterfs server1:/myvol /mnt</code>
  * Persistent mount in /etc/fstab
    * <code bash>vim /etc/fstab

## Gluster mounts
server1:/myvol /data glusterfs defaults 0 0</code>
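
If server1 is down at mount time, the client cannot fetch the volume file even though other bricks may be healthy. A sketch of a more resilient mount, assuming the glusterfs-fuse mount helper's backupvolfile-server option and the standard _netdev fstab flag (which delays the mount until networking is up):

<code bash>
## One-off mount with a fallback volfile server
mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt

## /etc/fstab equivalent
server1:/myvol /data glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
</code>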

====== Operation ======

===== Server Ops =====

==== Viewing Status ====

  * View peer status
    * <code bash>gluster peer status</code>
  * View gluster volume info
    * <code bash>gluster volume info</code>

==== Storage Pools ====

  * Add a server to the trusted storage pool
    * <code bash>gluster peer probe <server-name></code>
  * Remove a server from the trusted storage pool
    * <code bash>gluster peer detach <server-name></code>

==== Volumes ====

  * Add a brick to the volume (add the server to the trusted storage pool first)
    * <code bash>gluster volume add-brick myvol server5:/data/glusterfs/myvol/brick1/brick</code>
  * Remove a brick from the volume (and then remove the server from the trusted storage pool)
    * <code bash>gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick start
gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick status
gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick commit</code>
    * Gluster will automatically rebalance data between the remaining bricks.
    * If any replica or stripe sets exist, you must remove a number of bricks equal to a multiple of the replica or stripe count.
  * Replace a faulty brick
    * <code bash>gluster volume replace-brick myvol server3:/data/glusterfs/myvol/brick1/brick server5:/data/glusterfs/myvol/brick1/brick commit force</code>
  * Start a volume
    * <code bash>gluster volume start myvol</code>
  * Stop a volume
    * <code bash>gluster volume stop myvol</code>

==== Balance Data ====

Data can be rebalanced live, and a rebalance should be done after adding or removing bricks. There are two types of rebalance:

  * Fix Layout: Fixes layout changes (added bricks) so that newly created files can be stored on the new bricks.
    * <code bash>gluster volume rebalance myvol fix-layout start</code>
  * Fix Layout and Migrate Data: Fixes layout changes and also migrates existing data to the new bricks.
    * <code bash>gluster volume rebalance myvol start</code>
  * View rebalance status
    * <code bash>gluster volume rebalance myvol status</code>

==== Security ====

  * Allow only clients from a specific network (comma separate multiple networks)
    * <code bash>gluster volume set myvol auth.allow 10.1.2.*</code>
    * Note: Gluster servers do not need to be added to this list. The servers use an auto-generated username/password when they are added to the trusted storage pool.
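
For example, allowing two networks (addresses illustrative), and clearing the restriction again with a volume option reset:

<code bash>
## Allow clients from two networks (comma separated, no spaces)
gluster volume set myvol auth.allow 10.1.2.*,10.1.3.*

## Revert auth.allow to its default (allow all)
gluster volume reset myvol auth.allow
</code>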

----
  • linux_wiki/glusterfs.txt
  • Last modified: 2019/05/25 23:50
  • (external edit)