**Checklist**
  * Distro(s): Enterprise Linux 6

----

  - Configure Trusted Pool
    - From server1
      * <code bash>gluster peer probe server2</code>
      * Note: Once a trusted pool has been created, only existing pool members can probe new servers.
        * New servers cannot probe a pool member.
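      * To verify pool membership after probing (a quick check, not part of the original steps), each server should list the other peers as connected:
        * <code bash>gluster peer status</code>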
  - Create a GlusterFS Volume => **How the volume is created determines whether data is distributed, striped, replicated, or a combination.**
    * Note: A volume cannot be created at a mount point; it must be a subdirectory within the mount point.
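    * For example, assuming the brick filesystem is mounted at /data/glusterfs/myvol/brick1 (matching the paths used below), create a subdirectory on each server to serve as the brick:
      * <code bash>mkdir -p /data/glusterfs/myvol/brick1/brick</code>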
    * **Distributed (default)**: A plain "gluster volume create" with no extra options creates a distributed volume. There is no redundancy; files are placed round-robin across the bricks in the volume. The following alternates between server1 and server2 storage.
      * <code bash>gluster volume create myvol server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick</code>
    * **Replicated**: A copy of the data is kept on each brick. (The number of copies is set by the replica count, which should equal the number of bricks.) The following keeps a copy on both server1 and server2.
      * <code bash>gluster volume create myvol replica 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick</code>
    * **Distributed Replicated**: Files are distributed across replicated sets of bricks. (The number of bricks should be a multiple of the replica count; adjacent servers form the replicated sets.) The following round robins between the "replicated sets" server1/server2 and server3/server4, so data exists on two servers at all times.
      * <code bash>gluster volume create myvol replica 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick server3:/data/glusterfs/myvol/brick1/brick server4:/data/glusterfs/myvol/brick1/brick</code>
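    * **Striped**: Mentioned above but not shown in the original; a sketch for older releases (striped volumes are deprecated in recent GlusterFS versions). Each file is split into chunks across the bricks.
      * <code bash>gluster volume create myvol stripe 2 transport tcp server1:/data/glusterfs/myvol/brick1/brick server2:/data/glusterfs/myvol/brick1/brick</code>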
    * More types:
      * [[http://gluster.readthedocs.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes|Architecture]]
      * [[http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#formatting-and-mounting-bricks|Volume Types]]
  - Start the Gluster volume to make it usable by clients
    * <code bash>gluster volume start myvol</code>
  * View gluster volume info
    * <code bash>gluster volume info</code>
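  * Mount the volume from a client (an illustrative extra step, assuming the glusterfs-fuse client is installed and the mount point /mnt/myvol exists):
    * <code bash>mount -t glusterfs server1:/myvol /mnt/myvol</code>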

==== Storage Pools ====
  * Remove server from trusted storage pool
    * <code bash>gluster peer detach <server-name></code>
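  * Note: A peer that still hosts bricks for a volume cannot be detached; remove its bricks first (see "Volumes" below).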

==== Volumes ====

  * Add a brick to the volume (add to trusted storage pool first)
    * <code bash>gluster volume add-brick myvol server5:/data/glusterfs/myvol/brick1/brick</code>
  * Remove a brick from the volume (and then remove from trusted storage pool)
    * <code bash>gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick start
gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick status
gluster volume remove-brick myvol server2:/data/glusterfs/myvol/brick1/brick commit</code>
    * Gluster will auto-rebalance data between the remaining bricks.
    * If any replica or stripe sets exist, you must remove a number of bricks equal to a multiple of the replica (or stripe) count.
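  * For example (a sketch based on the four-server distributed replicated volume above), remove a whole replica set, server3/server4, in one operation:
    * <code bash>gluster volume remove-brick myvol server3:/data/glusterfs/myvol/brick1/brick server4:/data/glusterfs/myvol/brick1/brick start</code>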
  * Replace a faulty brick
    * <code bash>gluster volume replace-brick myvol server3:/data/glusterfs/myvol/brick1/brick server5:/data/glusterfs/myvol/brick1/brick commit force</code>
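    * After replacing a brick in a replicated volume, self-heal copies the data onto the new brick; check progress with:
      * <code bash>gluster volume heal myvol info</code>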
  * Start Volume
    * <code bash>gluster volume start myvol</code>
  * Stop Volume
    * <code bash>gluster volume stop myvol</code>
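  * Check volume and brick status (a handy companion to start/stop, not in the original steps):
    * <code bash>gluster volume status myvol</code>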

==== Balance Data ====

Data can be re-balanced live, and this should be done after adding or removing bricks in a volume. There are two rebalance options:

  * Fix Layout: Fixes the layout after brick changes (e.g. added bricks), so newly created files can be stored on the new bricks.
    * <code bash>gluster volume rebalance myvol fix-layout start</code>
  * Fix Layout and Migrate Data: Fixes the layout and also migrates existing data to the new bricks.
    * <code bash>gluster volume rebalance myvol start</code>

  * View re-balance status
    * <code bash>gluster volume rebalance myvol status</code>
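  * Stop an in-progress re-balance if needed:
    * <code bash>gluster volume rebalance myvol stop</code>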

==== Security ====