GPFS stands for General Parallel File System, a cluster file system developed by IBM that is now known as IBM Storage Scale. It permits simultaneous read and write access to a file system, or a set of file systems, from multiple nodes at the same time.
In the last article, we showed you how to install and configure the GPFS file system on a RHEL system. Today we will show you how to create a GPFS cluster file system on RHEL, including NSD stanzafile creation, NSD disk creation, GPFS file system creation, and mounting it.
If you are new to GPFS, I recommend reading the GPFS series articles listed below:
1) Disk Addition
Get the LUN IDs from the storage team and scan the SCSI disks (on both nodes).
# for host in `ls /sys/class/scsi_host`; do echo "Scanning $host...Done"; echo "- - -" > /sys/class/scsi_host/$host/scan; done
After the scan, check whether the given LUNs are discovered at the OS level.
# lsscsi --scsi --size | grep -i [Last_Five_Digit_of_LUN]
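If the storage team supplies full LUN IDs, a small loop can derive the five-digit suffixes to grep for. This is a minimal sketch; the LUN ID values below are made up purely for illustration:

```shell
# Hypothetical LUN IDs as handed over by the storage team (made-up values)
luns="60000970000197800573533030334542 60000970000197800573533030334543"

for lun in $luns; do
    # Take the last five characters of each LUN ID
    suffix=$(printf '%s' "$lun" | tail -c 5)
    echo "LUN suffix: $suffix"
    # On a live system you would then run:
    #   lsscsi --scsi --size | grep -i "$suffix"
done
```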
2) Creating NSD StanzaFile
The mmcrnsd command is used to create cluster-wide names for Network Shared Disks (NSDs) used by GPFS. To do so, you must define and prepare each physical disk that you want to use with GPFS as a Network Shared Disk (NSD). The NSD stanzafile contains the properties of the disks to be created. This file can be updated as needed with the mmcrnsd command and can be supplied as input to the mmcrfs, mmadddisk, or mmrpldisk command.
The names generated by the mmcrnsd command are necessary because disks attached to multiple nodes might have different device names on each node. The NSD names identify each disk uniquely. This command should be run for all disks used in GPFS file systems.
A unique NSD volume ID is written to the disk to indicate that it has been processed by the mmcrnsd command. All of the NSD commands (mmcrnsd, mmlsnsd, and mmdelnsd) use this unique NSD volume ID to identify and process NSDs. After the NSDs are created, the GPFS cluster data is updated, and the disks are available for use by GPFS.
NSD stanzafile Syntax:
Here are the common parameters used in an NSD stanzafile.
%nsd: device=DiskName
  nsd=NsdName
  servers=ServerList
  usage={dataOnly | metadataOnly | dataAndMetadata | descOnly | localCache}
  failureGroup=FailureGroup
  pool=StoragePool
  thinDiskType=no
Where:
- device=DiskName – The block device name that you want to define as an NSD.
- nsd=NsdName – Specifies the name of the NSD to be created. This name must not already be in use as another GPFS disk name, and it must not begin with the reserved string ‘gpfs’.
- servers=ServerList – Specifies the NSD servers, separated by commas. Define the NSD servers in primary and secondary order. Up to eight NSD servers are supported in this list.
- usage – Specifies the type of data to be stored on the disk. ‘dataAndMetadata’ is the default for disks in the system pool, which means that the disk contains both data and metadata.
- failureGroup – Identifies the failure group to which the disk belongs. The default is -1, which means that the disk has no point of failure in common with any other disk.
- pool=StoragePool – Specifies the name of the storage pool to which the NSD is assigned. The default value for pool is ‘system’.
- thinDiskType – Specifies the space reclaim disk type.
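Putting the parameters above together, here is a minimal sketch that generates a stanzafile for several disks in one go. The device names, NSD names, and server names are hypothetical and should be adjusted to your environment:

```shell
# Sketch: generate an NSD stanzafile for a list of disks.
# Device, NSD, and server names below are hypothetical examples.
stanzafile=$(mktemp)
servers="2ggpfsnode01-gpfs,2ggpfsnode02-gpfs"

i=1
for dev in /dev/sdd /dev/sde; do
    cat >> "$stanzafile" <<EOF
%nsd: device=$dev
  nsd=2gdatansd0$i
  servers=$servers
  usage=dataAndMetadata
  pool=system
EOF
    i=$((i + 1))
done

cat "$stanzafile"
```

The resulting file can then be passed to mmcrnsd with the '-F' option, just like the hand-written stanzafile shown later in this article.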
Check whether the disk reservation policy is set to 'NO reservation' using the sg_persist command. If not, make the required changes at the VMware or storage level.
# sg_persist -r /dev/sdd
  EMC       SYMMETRIX         5100
  Peripheral device type: disk
  PR generation=0x1, there is NO reservation held
Below is a sample NSD stanzafile created to add a data disk, to illustrate this article. Let's use '2gdata.nsd' as the filename; it will be placed under '/usr/lpp/mmfs'.
# cat /usr/lpp/mmfs/2gdata.nsd
%nsd: device=/dev/sdd
nsd=2gdatansd01
servers=2ggpfsnode01-gpfs,2ggpfsnode02-gpfs
usage=dataAndMetadata
pool=system
Make a Note: It is not mandatory to include all of the parameters; add only the required ones. The remaining parameters will use their default values, if any.

3) Creating Network Shared Disk (NSD)
The mmcrnsd command is used to create cluster-wide names for the NSDs used by GPFS.
# mmcrnsd -F /usr/lpp/mmfs/2gdata.nsd
mmcrnsd: Processing disk sdd
mmcrnsd: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
After NSD creation, the ‘2gdata.nsd’ file is rewritten and looks like the one below.
# cat /usr/lpp/mmfs/2gdata.nsd
# /dev/sdd:2ggpfsnode01-gpfs:2ggpfsnode02-gpfs:dataAndMetadata::2gdatansd01:system
2gdatansd01:::dataAndMetadata:-1::system
Run the mmlsnsd command to check the NSD status. It will show the disk name you created, and the file system column will show '(free disk)' because we have not created a file system yet.
# mmlsnsd

 File system   Disk name     NSD servers
-------------------------------------------------------------------------------
 (free disk)   2gdatansd01   2ggpfsnode01-gpfs,2ggpfsnode02-gpfs
 (free disk)   tiebreak1     (directly attached)
 (free disk)   tiebreak2     (directly attached)
 (free disk)   tiebreak3     (directly attached)
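If you want to script against this output, for example to count NSDs that are not yet assigned to a file system, a grep one-liner works. The sketch below inlines sample output for illustration; on a live cluster you would capture the real command output with mmlsnsd_output=$(mmlsnsd) instead:

```shell
# mmlsnsd output inlined for illustration; on a live cluster capture
# the real output instead:  mmlsnsd_output=$(mmlsnsd)
mmlsnsd_output='(free disk) 2gdatansd01 2ggpfsnode01-gpfs,2ggpfsnode02-gpfs
(free disk) tiebreak1 (directly attached)
(free disk) tiebreak2 (directly attached)
(free disk) tiebreak3 (directly attached)'

# Count the lines whose file system column reads "(free disk)"
free_count=$(printf '%s\n' "$mmlsnsd_output" | grep -c '^(free disk)')
echo "NSDs not yet assigned to a file system: $free_count"
```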
4) Creating GPFS Filesystem
Use the mmcrfs command to create a GPFS file system. Starting with version 5.0, GPFS by default creates a file system with a 4 MiB block size and an 8 KiB subblock size for good performance, but the block size can be changed to suit the application workload by using the mmcrfs command with the '-B' option.
Make a Note: The block size can also be configured during NSD creation.
Syntax:
mmcrfs [Mount_Point_Name] [FS_Name] -F [Path_to_StanzaFile]
Execute the following command to create a GPFS file system.
# mmcrfs /GPFS/2gdata 2gdatalv -F /usr/lpp/mmfs/2gdata.nsd

The following disks of 2gdatalv will be formatted on node 2ggpfsnode01:
    2gdatansd01: size 10241 MB
Formatting file system ...
Disks up to size 106.74 GB can be added to storage pool system.
Creating Inode File
Creating Allocation Maps
Creating Log Files
Creating Inode Allocation Map
Creating Block Allocation Map
Formatting Allocation Map for storage pool system
Completed creation of file system /dev/2gdatalv.
mmcrfs: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
Use the mmlsfs command to list the attributes of a file system. To check the FS information of '2gdatalv', run the command below; it will show you a lot of information.
# mmlsfs 2gdatalv
Use the appropriate switches to get only the required information about the '2gdatalv' file system. To show only the inode size, block size, and subblock size (along with a few other attributes), run:
# mmlsfs 2gdatalv -i -B -f -V -d -T

flag                value                    description
------------------- ------------------------ --------------------------
 -i                 4096                     Inode size in bytes
 -B                 4194304                  Block size
 -f                 8192                     Minimum fragment (subblock) size in bytes
 -V                 30.00 (5.1.8.0)          File system version
 -d                 2gdatansd01              Disks in file system
 -T                 /GPFS/2gdata             Default mount point
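To pick a single value out of this table in a script, for example the block size, awk can filter on the flag column. This sketch inlines a sample of the output above, since mmlsfs itself needs a live cluster:

```shell
# mmlsfs flag table inlined for illustration; on a live cluster you
# would pipe the real output, e.g.:  mmlsfs 2gdatalv -B | awk ...
mmlsfs_output=' -i  4096      Inode size in bytes
 -B  4194304   Block size
 -f  8192      Minimum fragment (subblock) size in bytes'

# Print the value column of the row whose flag is -B
block_size=$(printf '%s\n' "$mmlsfs_output" | awk '$1 == "-B" { print $2 }')
echo "Block size: $block_size bytes"
```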
5) Mounting GPFS File System
The mmmount command mounts the specified GPFS file system on one or more nodes in the cluster. If no nodes are specified, the file system is mounted only on the node from which the command was issued. Use the '-a' switch to mount the GPFS file system on all systems in the cluster simultaneously.
# mmmount 2gdatalv -a

Thu May 11 11:38:09 +04 2023: mmmount: Mounting file systems ...
2ggpfsnode01-gpfs:
2ggpfsnode02-gpfs:
If you want to mount multiple file systems simultaneously, run:
# mmmount all -a
Finally, check the mounted file system using the df command as shown below:
# df -hT | (IFS= read -r header; echo "$header"; grep -i gpfs) | column -t

Filesystem  Type  Size  Used  Avail  Use%  Mounted on
2gdatalv    gpfs  10G   1.4G  8.7G   14%   /GPFS/2gdata
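A similar filter can pull just the fields you care about from the df output, for example the device and its usage percentage. A sketch with the sample output inlined; in practice you would pipe `df -hT` directly:

```shell
# df -hT output inlined for illustration; in practice pipe the real
# command:  df -hT | awk ...
df_output='Filesystem  Type  Size  Used  Avail  Use%  Mounted on
2gdatalv    gpfs  10G   1.4G  8.7G   14%   /GPFS/2gdata'

# Print device name and usage percentage for gpfs-type file systems
printf '%s\n' "$df_output" | awk '$2 == "gpfs" { print $1, $6 }'
```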
Bonus Tips:
After creating the GPFS file system, if you run the ‘mmlsnsd’ command, you can see the file system information as shown below.
# mmlsnsd

 File system   Disk name     NSD servers
-----------------------------------------------------------------------
 2gdatalv      2gdatansd01   2ggpfsnode01-gpfs,2ggpfsnode02-gpfs
 (free disk)   tiebreaker1   (directly attached)
 (free disk)   tiebreaker2   (directly attached)
 (free disk)   tiebreaker3   (directly attached)
Similarly, you can find the file system information in the ‘mmlsconfig’ command output as shown below:
# mmlsconfig

Configuration data for cluster 2gtest-cluster.2ggpfsnode01-gpfs:
----------------------------------------------------------------
clusterName 2gtest-cluster.2ggpfsnode01-gpfs
clusterId 6339012640885012929
autoload yes
dmapiFileHandleSize 32
minReleaseLevel 5.1.8.0
tscCmdAllowRemoteConnections no
ccrEnabled yes
cipherList AUTHONLY
sdrNotifyAuthEnabled yes
tiebreakerDisks tiebreaker1;tiebreaker2;tiebreaker3
autoBuildGPL yes
pagepool 512M
adminMode central

File systems in cluster 2gtest-cluster.2ggpfsnode01-gpfs:
---------------------------------------------------------
/dev/2gdatalv
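A single parameter can be pulled out of the mmlsconfig output in the same way, for example the pagepool size. A sketch with an inlined excerpt; on a live cluster you would pipe the real `mmlsconfig` output:

```shell
# mmlsconfig excerpt inlined for illustration; in practice pipe the
# real command:  mmlsconfig | awk ...
mmlsconfig_output='autoload yes
pagepool 512M
adminMode central'

# Print the value of the pagepool parameter
pagepool=$(printf '%s\n' "$mmlsconfig_output" | awk '$1 == "pagepool" { print $2 }')
echo "pagepool: $pagepool"
```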
Final Thoughts
I hope you have learned how to create a GPFS cluster file system on a RHEL system.
In this article, we covered GPFS cluster file system creation, NSD creation, and mounting a GPFS file system on RHEL.
If you have any questions or feedback, feel free to comment below.