Setup GlusterFS Storage With Heketi on CentOS 8 / CentOS 7

ComputingPost
Sep 27, 2022

In this guide, you’ll learn to install and configure GlusterFS Storage on CentOS 8 / CentOS 7 with Heketi. GlusterFS is a software-defined, scale-out storage solution designed to provide affordable and flexible storage for unstructured data. GlusterFS allows you to unify infrastructure and data storage while improving availability, performance, and data manageability.



GlusterFS Storage can be deployed in a private cloud, in a hosted datacenter, or in your on-premises datacenter. It runs entirely on commodity servers and storage hardware, resulting in a powerful, massively scalable, and highly available NAS environment.

Heketi

Heketi provides a RESTful management interface which can be used to manage the lifecycle of GlusterFS Storage volumes. This allows for easy integration of GlusterFS with cloud services like OpenShift, OpenStack Manila and Kubernetes for dynamic volume provisioning.

Heketi automatically determines the location of bricks across the cluster, making sure to place bricks and their replicas in different failure domains.

Environment Setup

Our setup of GlusterFS on CentOS 8 / CentOS 7 systems comprises the following:

  • CentOS 8 / CentOS 7 Linux servers
  • GlusterFS 6 software release
  • Three GlusterFS servers
  • Each server has three disks (10 GB each)
  • DNS resolution configured — you can use the /etc/hosts file if you don’t have a DNS server
  • User account with sudo or root access
  • Heketi will be installed on one of the GlusterFS nodes

Under the /etc/hosts file of each server, I have:

$ sudo vim /etc/hosts

10.10.1.168 gluster01
10.10.1.179 gluster02
10.10.1.64 gluster03
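Optionally, verify that every hostname resolves from each node before proceeding:

for h in gluster01 gluster02 gluster03; do ping -c 1 $h; done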

Step 1: Update all servers

Ensure all servers that will be part of the GlusterFS storage cluster are updated.

sudo yum -y update

Since there may be kernel updates, I recommend you reboot your system.

sudo reboot

Step 2: Configure NTP time synchronization

You need to synchronize time across all GlusterFS Storage servers using the Network Time Protocol (NTP) or Chrony daemon. Refer to our guide below.

Setup Chrony Time synchronization on CentOS
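If you just need the short version, here is a minimal sketch using chrony (the guide above covers the details):

sudo yum -y install chrony
sudo systemctl enable --now chronyd
chronyc sources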

Step 3: Add GlusterFS repository

Download the GlusterFS repository on all servers. We’ll install GlusterFS 6 in this setup, the latest stable release at the time of writing.

CentOS 8:

sudo yum -y install wget

sudo wget -O /etc/yum.repos.d/glusterfs-rhel8.repo https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/CentOS/glusterfs-rhel8.repo

CentOS 7:

sudo yum -y install centos-release-gluster6

Once you’ve added the repository, update your YUM index.

sudo yum makecache
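You can confirm the GlusterFS repository is now visible to YUM (the repository ID differs between CentOS 7 and CentOS 8):

yum repolist | grep -i gluster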

Step 4: Install GlusterFS on CentOS 8 / CentOS 7

Installation of GlusterFS on CentOS 8 differs from CentOS 7 installation.

Install GlusterFS on CentOS 8

Enable PowerTools repository

sudo dnf -y install dnf-utils

sudo yum-config-manager --enable PowerTools
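On CentOS 8.3 and later the repository ID was renamed to lowercase, so if the command above reports an unknown repository, try:

sudo yum-config-manager --enable powertools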

sudo dnf -y install glusterfs-server

Install GlusterFS on CentOS 7

Run the following command on all nodes to install the latest GlusterFS on CentOS 7.

sudo yum -y install glusterfs-server

Confirm installed package version.

$ rpm -qi glusterfs-server
Name         : glusterfs-server
Version      : 6.5
Release      : 2.el8
Architecture : x86_64
Install Date : Tue 29 Oct 2019 06:58:16 PM EAT
Group        : Unspecified
Size         : 6560178
License      : GPLv2 or LGPLv3+
Signature    : RSA/SHA256, Wed 28 Aug 2019 03:39:40 PM EAT, Key ID 43607f0dc2f8238c
Source RPM   : glusterfs-6.5-2.el8.src.rpm
Build Date   : Wed 28 Aug 2019 03:27:19 PM EAT
Build Host   : buildhw-09.phx2.fedoraproject.org
Relocations  : (not relocatable)
Packager     : Fedora Project
Vendor       : Fedora Project
URL          : http://docs.gluster.org/
Bug URL      : https://bugz.fedoraproject.org/glusterfs
Summary      : Distributed file-system server

You can also use the gluster command to check the version.

$ gluster --version
glusterfs 6.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

$ glusterfsd --version

Step 5: Start GlusterFS Service on CentOS 8 / CentOS 7

After installation of GlusterFS Service on CentOS 8 / CentOS 7, start and enable the service.

sudo systemctl enable --now glusterd.service

Load the kernel modules required by Heketi.

for i in dm_snapshot dm_mirror dm_thin_pool; do
  sudo modprobe $i
done
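To make sure these modules are also loaded after a reboot, you can persist them in a modules-load.d file (a sketch; the file name heketi.conf is arbitrary):

echo -e "dm_snapshot\ndm_mirror\ndm_thin_pool" | sudo tee /etc/modules-load.d/heketi.conf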

If you have an active firewalld service, allow ports used by GlusterFS.

sudo firewall-cmd --add-service=glusterfs --permanent 

sudo firewall-cmd --reload
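If your firewalld version doesn’t ship a glusterfs service definition, you can open the standard Gluster ports directly instead — 24007-24008/tcp for the management daemons and a brick port range starting at 49152 (one port per brick; the range below is a sketch, size it to your brick count):

sudo firewall-cmd --add-port=24007-24008/tcp --add-port=49152-49251/tcp --permanent
sudo firewall-cmd --reload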

Check service status on all nodes.

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:08 EAT; 3min 1s ago
     Docs: man:glusterd(8)
 Main PID: 32027 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.9M
   CGroup: /system.slice/glusterd.service
           └─32027 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:08 gluster01.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:08 gluster01.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:13 EAT; 3min 51s ago
     Docs: man:glusterd(8)
 Main PID: 3706 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.8M
   CGroup: /system.slice/glusterd.service
           └─3706 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:13 gluster02.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:13 gluster02.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 19:10:15 EAT; 4min 24s ago
     Docs: man:glusterd(8)
 Main PID: 3716 (glusterd)
    Tasks: 9 (limit: 11512)
   Memory: 3.8M
   CGroup: /system.slice/glusterd.service
           └─3716 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 29 19:10:15 gluster03.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 29 19:10:15 gluster03.novalocal systemd[1]: Started GlusterFS, a clustered file-system server.

From gluster01, probe the other nodes in the cluster:

[root@gluster01 ~]# gluster peer probe gluster02
peer probe: success.

[root@gluster01 ~]# gluster peer probe gluster03
peer probe: success.

[root@gluster01 ~]# gluster peer status
Number of Peers: 2

Hostname: gluster02
Uuid: ebfdf84f-3d66-4f98-93df-a6442b5466ed
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 98547ab1-9565-4f71-928c-8e4e13eb61c3
State: Peer in Cluster (Connected)

Step 6: Install Heketi on one of the nodes

I’ll use the gluster01 node to run the Heketi service. Download the latest archives of the Heketi server and client from the GitHub releases page.

curl -s https://api.github.com/repos/heketi/heketi/releases/latest \
  | grep browser_download_url \
  | grep linux.amd64 \
  | cut -d '"' -f 4 \
  | wget -qi -

Extract the downloaded Heketi archives.

for i in heketi*.tar.gz; do tar xvf "$i"; done

Copy the heketi & heketi-cli binaries.

sudo cp heketi/{heketi,heketi-cli} /usr/local/bin

Confirm they are available in your PATH

$ heketi --version
Heketi v10.4.0-release-10 (using go: go1.15.14)

$ heketi-cli --version
heketi-cli v10.4.0-release-10

Step 7: Configure Heketi Server

  • Add a heketi system user.

sudo groupadd --system heketi
sudo useradd -s /sbin/nologin --system -g heketi heketi

  • Create the Heketi configuration and data paths.

sudo mkdir -p /var/lib/heketi /etc/heketi /var/log/heketi

  • Copy the Heketi configuration file to the /etc/heketi directory.

sudo cp heketi/heketi.json /etc/heketi

  • Edit the Heketi configuration file.

sudo vim /etc/heketi/heketi.json

Set service port:

"port": "8080"

Set the admin and user secrets.

"_jwt": "Private keys for access",
"jwt": {
  "_admin": "Admin has access to all APIs",
  "admin": {
    "key": "ivd7dfORN7QNeKVO"
  },
  "_user": "User only has access to /volumes endpoint",
  "user": {
    "key": "gZPgdZ8NtBNj6jfp"
  }
},

Configure the glusterfs executor:

"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab",
  ......
},

If you use a user other than root, ensure it has passwordless sudo privilege escalation.
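For example, to grant this to a hypothetical cloud-user account (a sketch; adapt it to your own user and security policy):

echo "cloud-user ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/heketi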

Confirm the database path is set properly:

"_db_comment": "Database file name",
"db": "/var/lib/heketi/heketi.db",

Below is my complete modified configuration file. Note that "executor" is set to mock here, which only simulates commands; for a real deployment set it to ssh so Heketi actually provisions storage on the nodes over SSH, using the sshexec settings.

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_enable_tls_comment": "Enable TLS in Heketi Server",
  "enable_tls": false,

  "_cert_file_comment": "Path to a valid certificate file",
  "cert_file": "",

  "_key_file_comment": "Path to a valid private key file",
  "key_file": "",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "ivd7dfORN7QNeKVO"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "gZPgdZ8NtBNj6jfp"
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false,

  "_profiling": "Enable go/pprof profiling on the /debug/pprof endpoints.",
  "profiling": false,

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "mock",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "cloud-user",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_refresh_time_monitor_gluster_nodes": "Refresh time in seconds to monitor Gluster nodes",
    "refresh_time_monitor_gluster_nodes": 120,

    "_start_time_monitor_gluster_nodes": "Start time in seconds to monitor Gluster nodes when the heketi comes up",
    "start_time_monitor_gluster_nodes": 10,

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug",

    "_auto_create_block_hosting_volume": "Creates Block Hosting volumes automatically if not found or existing volume exhausted",
    "auto_create_block_hosting_volume": true,

    "_block_hosting_volume_size": "New block hosting volume will be created in size mentioned. This is considered only if auto-create is enabled.",
    "block_hosting_volume_size": 500,

    "_block_hosting_volume_options": "New block hosting volume will be created with the following set of options. Removing the group gluster-block option is NOT recommended. Additional options can be added next to it separated by a comma.",
    "block_hosting_volume_options": "group gluster-block",

    "_pre_request_volume_options": "Volume options that will be applied for all volumes created. Can be overridden by volume options in volume create request.",
    "pre_request_volume_options": "",

    "_post_request_volume_options": "Volume options that will be applied for all volumes created. To be used to override volume options in volume create request.",
    "post_request_volume_options": ""
  }
}
  • Generate Heketi SSH keys.

sudo ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
sudo chown heketi:heketi /etc/heketi/heketi_key*

  • Copy the generated public key to all GlusterFS nodes.

for i in gluster01 gluster02 gluster03; do
  ssh-copy-id -i /etc/heketi/heketi_key.pub root@$i
done

Alternatively, you can cat the contents of /etc/heketi/heketi_key.pub and append it to each server’s ~/.ssh/authorized_keys file.

Confirm you can access the GlusterFS nodes with the Heketi private key:

$ ssh -i /etc/heketi/heketi_key root@gluster02
The authenticity of host 'gluster02 (10.10.1.179)' can't be established.
ECDSA key fingerprint is SHA256:GXNdsSxmp2O104rPB4RmYsH73nTa5U10cw3LG22sANc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'gluster02,10.10.1.179' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Tue Oct 29 20:11:32 2019 from 10.10.1.168
[root@gluster02 ~]#
  • Create a systemd unit file for Heketi.

$ sudo vim /etc/systemd/system/heketi.service

[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.env
User=heketi
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

Also download the sample environment file for Heketi.

sudo wget -O /etc/heketi/heketi.env https://raw.githubusercontent.com/heketi/heketi/master/extras/systemd/heketi.env
  • Set all directory permissions.

sudo chown -R heketi:heketi /var/lib/heketi /var/log/heketi /etc/heketi

  • Start the Heketi service.

Set SELinux to permissive mode:

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

Then reload systemd and start the Heketi service.

sudo systemctl daemon-reload

sudo systemctl enable --now heketi

Confirm the service is running.

$ systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 20:29:23 EAT; 4s ago
 Main PID: 2166 (heketi)
    Tasks: 5 (limit: 11512)
   Memory: 8.7M
   CGroup: /system.slice/heketi.service
           └─2166 /usr/local/bin/heketi --config=/etc/heketi/heketi.json

Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Heketi v9.0.0
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Loaded mock executor
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Volumes per cluster limit is set to default value of 1000
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: Auto Create Block Hosting Volume set to true
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume size 500 GB
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume Options: group gluster-block
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 GlusterFS Application Loaded
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started background pending operations cleaner
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started Node Health Cache Monitor
Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Listening on port 8080
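You can also query the REST API directly; Heketi exposes a simple /hello endpoint that works as a quick health check:

$ curl http://localhost:8080/hello
Hello from Heketi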

Step 8: Create Heketi Topology file

I’ve created an Ansible playbook for generating and updating the topology file. Editing the JSON file manually can be tedious, and the playbook makes scaling easy.

Install Ansible locally — refer to the official Ansible installation documentation.

For CentOS:

sudo yum -y install epel-release

sudo yum -y install ansible

For Ubuntu:

sudo apt update

sudo apt install software-properties-common

sudo apt-add-repository --yes --update ppa:ansible/ansible

sudo apt install ansible

Once Ansible is installed, create the project folder structure:

mkdir -p ~/projects/ansible/roles/heketi/{tasks,templates,defaults}

Create Heketi Topology Jinja2 template

$ vim ~/projects/ansible/roles/heketi/templates/topology.json.j2

{
  "clusters": [
    {
      "nodes": [
        {% if gluster_servers is defined and gluster_servers is iterable %}
        {% for item in gluster_servers %}
        {
          "node": {
            "hostnames": {
              "manage": [
                "{{ item.servername }}"
              ],
              "storage": [
                "{{ item.serverip }}"
              ]
            },
            "zone": {{ item.zone }}
          },
          "devices": [
            "{{ item.disks | join('","') }}"
          ]
        }{% if not loop.last %},{% endif %}
        {% endfor %}
        {% endif %}
      ]
    }
  ]
}

Define variables — set the values to match your environment.

$ vim ~/projects/ansible/roles/heketi/defaults/main.yml

---
# GlusterFS nodes
gluster_servers:
  - servername: gluster01
    serverip: 10.10.1.168
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
  - servername: gluster02
    serverip: 10.10.1.179
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde
  - servername: gluster03
    serverip: 10.10.1.64
    zone: 1
    disks:
      - /dev/vdc
      - /dev/vdd
      - /dev/vde

Create the Ansible task:

$ vim ~/projects/ansible/roles/heketi/tasks/main.yml

---
- name: Copy heketi topology file
  template:
    src: topology.json.j2
    dest: /etc/heketi/topology.json

- name: Set proper file ownership
  file:
    path: /etc/heketi/topology.json
    owner: heketi
    group: heketi

Create the playbook and inventory file:

$ vim ~/projects/ansible/heketi.yml

---
- name: Generate Heketi topology file and copy to Heketi Server
  hosts: gluster01
  become: yes
  become_method: sudo
  roles:
    - heketi

$ vim ~/projects/ansible/hosts

gluster01

This is how everything should look:

$ cd ~/projects/ansible/
$ tree
.
├── heketi.yml
├── hosts
└── roles
    └── heketi
        ├── defaults
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── topology.json.j2

5 directories, 5 files

Run the playbook:

$ cd ~/projects/ansible
$ ansible-playbook -i hosts --user myuser --ask-pass --ask-become-pass heketi.yml

# For key-based SSH and passwordless sudo / root, use:
$ ansible-playbook -i hosts --user myuser heketi.yml


Confirm the contents of the generated topology file.

$ cat /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster01"
              ],
              "storage": [
                "10.10.1.168"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster02"
              ],
              "storage": [
                "10.10.1.179"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster03"
              ],
              "storage": [
                "10.10.1.64"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdc","/dev/vdd","/dev/vde"
          ]
        }
      ]
    }
  ]
}
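Optionally, validate the JSON syntax before loading the file, for example with Python’s built-in json.tool:

python3 -m json.tool /etc/heketi/topology.json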

Step 9: Load Heketi Topology file

If all looks good, load the topology file.

# heketi-cli topology load --user admin --secret heketi_admin_secret --json=/etc/heketi/topology.json

In my setup, I’ll run:

# heketi-cli topology load --user admin --secret ivd7dfORN7QNeKVO --json=/etc/heketi/topology.json
Creating cluster ... ID: dda582cc3bd943421d57f4e78585a5a9
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node gluster01 ... ID: 0c349dcaec068d7a78334deaef5cbb9a
        Adding device /dev/vdc ... OK
        Adding device /dev/vdd ... OK
        Adding device /dev/vde ... OK
    Creating node gluster02 ... ID: 48d7274f325f3d59a3a6df80771d5aed
        Adding device /dev/vdc ... OK
        Adding device /dev/vdd ... OK
        Adding device /dev/vde ... OK
    Creating node gluster03 ... ID: 4d6a24b992d5fe53ed78011e0ab76ead
        Adding device /dev/vdc ... OK
        Adding device /dev/vdd ... OK
        Adding device /dev/vde ... OK


Step 10: Confirm GlusterFS / Heketi Setup

Add the Heketi access credentials to your ~/.bashrc file.

$ vim ~/.bashrc

export HEKETI_CLI_SERVER=http://heketiserverip:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY="AdminPass"

Replace heketiserverip with the IP address of your Heketi server and AdminPass with the admin key you set in heketi.json.

Source the file.

source ~/.bashrc

After loading the topology file, run the command below to list your clusters.

# heketi-cli cluster list
Clusters:
Id:dda582cc3bd943421d57f4e78585a5a9 [file][block]

List the nodes available in the cluster:

# heketi-cli node list
Id:0c349dcaec068d7a78334deaef5cbb9a Cluster:dda582cc3bd943421d57f4e78585a5a9
Id:48d7274f325f3d59a3a6df80771d5aed Cluster:dda582cc3bd943421d57f4e78585a5a9
Id:4d6a24b992d5fe53ed78011e0ab76ead Cluster:dda582cc3bd943421d57f4e78585a5a9

Execute the following command to check the details of a particular node:

# heketi-cli node info ID

In my case:

# heketi-cli node info 0c349dcaec068d7a78334deaef5cbb9a
Node Id: 0c349dcaec068d7a78334deaef5cbb9a
State: online
Cluster Id: dda582cc3bd943421d57f4e78585a5a9
Zone: 1
Management Hostname: gluster01
Storage Hostname: 10.10.1.168
Devices:
Id:0f26bd867f2bd8bc126ff3193b3611dc   Name:/dev/vdd   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):10   Bricks:0
Id:29c34e25bb30db68d70e5fd3afd795ec   Name:/dev/vdc   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):10   Bricks:0
Id:feb55e58d07421c422a088576b42e5ff   Name:/dev/vde   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):10   Bricks:0

Let’s now create a Gluster volume to verify that Heketi and GlusterFS are working.

# heketi-cli volume create --size=1
Name: vol_7e071706e1c22052e5121c29966c3803
Size: 1
Volume Id: 7e071706e1c22052e5121c29966c3803
Cluster Id: dda582cc3bd943421d57f4e78585a5a9
Mount: 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803
Mount Options: backup-volfile-servers=10.10.1.179,10.10.1.64
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

# heketi-cli volume list
Id:7e071706e1c22052e5121c29966c3803 Cluster:dda582cc3bd943421d57f4e78585a5a9 Name:vol_7e071706e1c22052e5121c29966c3803
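A client with the glusterfs-fuse package installed can mount the new volume using the Mount address shown in the output above. A minimal sketch, assuming /mnt/gvol as an arbitrary mount point:

sudo mkdir -p /mnt/gvol
sudo mount -t glusterfs 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803 /mnt/gvol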

To view the topology, run:

heketi-cli topology info

The gluster command can also be used to check the servers in the cluster.

gluster pool list

We now have a working GlusterFS and Heketi setup. Our next guides will cover how to configure dynamic provisioning of Persistent Volumes for Kubernetes and OpenShift using Heketi and GlusterFS.
