Assumption
- FreeNAS IP: 192.168.1.254
Storage (FreeNAS) Settings
I set up the FreeNAS as below (it's just a sample, not a step-by-step guide).
Screenshot | Description |
---|---|
![]() | The wizard mode makes your life simple! |
![]() | Select language, keyboard map and timezone |
![]() | Select Pool Name and Purpose |
![]() | Skip |
![]() | Create a share for iSCSI |
![]() | Skip |
![]() | Click Confirm |
![]() | The result will look similar to the image on the left. |
Host settings
Install iSCSI packages
Install the package below on each host.
yum install iscsi-initiator-utils -y
Initializing iSCSI volume
Connect iSCSI target
Follow these steps on each host.
# iscsiadm -m discovery -t st -p 192.168.1.254
192.168.1.254:3260,2 iqn.2011-03.net.abcd.istgt:target01
# iscsiadm -m node -T iqn.2011-03.net.abcd.istgt:target01 -p 192.168.1.254 -l
Logging in to [iface: default, target: iqn.2011-03.net.abcd.istgt:target01, portal: 192.168.1.254,3260] (multiple)
Login to [iface: default, target: iqn.2011-03.net.abcd.istgt:target01, portal: 192.168.1.254,3260] successful.
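To confirm the login worked, it doesn't hurt to list the active session and check that the new disk has shown up. This is just a quick sanity check I'm adding here (the disk appears as /dev/sdb in the output further below):
iscsiadm -m session
cat /proc/partitions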
Initialize iSCSI target volume
Do this only once, not on every host.
# cat /proc/partitions
major minor  #blocks  name
   8        0  292421632 sda
   8        1     512000 sda1
   8        2  291908608 sda2
 253        0   52428800 dm-0
 253        1   16506880 dm-1
 253        2  222969856 dm-2
   8       16 1073741824 sdb
# parted /dev/sdb mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Information: You may need to update /etc/fstab.
# parted /dev/sdb print
Model: FreeBSD iSCSI Disk (scsi)
Disk /dev/sdb: 3006GB
Sector size (logical/physical): 512B/16384B
Partition Table: gpt
Number  Start  End  Size  File system  Name  Flags
# parted /dev/sdb mkpart primary 1 16384
Information: You may need to update /etc/fstab.
# parted /dev/sdb set 1 lvm on
Information: You may need to update /etc/fstab.
# parted /dev/sdb print
Model: FreeBSD iSCSI Disk (scsi)
Disk /dev/sdb: 3006GB
Sector size (logical/physical): 512B/16384B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  16.4GB  16.4GB               primary  lvm
# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
# vgcreate clvmvol /dev/sdb1
  Volume group "clvmvol" successfully created
# vgs
  VG               #PV #LV #SN Attr   VSize   VFree
  clvmvol            1   0   0 wz--n-  15.25g 15.25g
  vg_sysengdevkvmh   1   3   0 wz--n- 278.38g      0
To check whether the volume group clvmvol is visible on the other hosts, run the commands below:
[root@syseng-dev-kvmhost02 iscsi]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_sysengdevkvmh
  PV Size               930.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238210
  Free PE               0
  Allocated PE          238210
  PV UUID               MlR1ef-yDT0-TVsr-1Rgc-Pq2D-3STf-64HIro
[root@syseng-dev-kvmhost02 iscsi]# pvscan
  PV /dev/sdb1   VG clvmvol            lvm2 [15.25 GiB / 15.25 GiB free]
  PV /dev/sda2   VG vg_sysengdevkvmh   lvm2 [930.51 GiB / 0    free]
  Total: 2 [945.76 GiB] / in use: 2 [945.76 GiB] / in no VG: 0 [0   ]
[root@syseng-dev-kvmhost02 iscsi]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "clvmvol" using metadata type lvm2
  Found volume group "vg_sysengdevkvmh" using metadata type lvm2
[root@syseng-dev-kvmhost02 iscsi]# vgs
  VG               #PV #LV #SN Attr   VSize   VFree
  clvmvol            1   0   0 wz--n-  15.25g 15.25g
  vg_sysengdevkvmh   1   3   0 wz--n- 930.51g      0
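If one of the hosts does not see the new partition or the volume group yet, it may simply not have re-read the partition table after the partition was created on another node. As an extra step of my own (not part of the original write-up), rescanning the iSCSI session and the disk usually fixes that:
iscsiadm -m session --rescan
partprobe /dev/sdb
pvscan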
Changing LVM settings
You must set the locking_type value to 3.
/etc/lvm/lvm.conf
<SNIP>
    locking_type = 3

    # Set to 0 to fail when a lock request cannot be satisfied immediately.
    wait_for_locks = 1

    # If using external locking (type 2) and initialisation fails,
    # with this set to 1 an attempt will be made to use the built-in
    # clustered locking.
    # If you are using a customised locking_library you should set this to 0.
<SNIP>
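If the lvm2-cluster package is installed (we install it in the clustering section below), the lvmconf helper can make this change for you instead of editing the file by hand. This is an alternative I'm adding, not the method used in the original steps:
lvmconf --enable-cluster
grep locking_type /etc/lvm/lvm.conf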
Check VG again
After changing the LVM setting, you cannot see your volume groups anymore, because cluster locking has not been activated yet.
# vgs
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
Set up clustering
/etc/hosts
Before doing this, you should make the /etc/hosts file consistent on every host, because every member of the cluster must be able to resolve the others by name.
For this purpose, I set /etc/hosts as below.
172.16.1.3 syseng-dev-kvmhost01.localdomain syseng-dev-kvmhost01
172.16.1.4 syseng-dev-kvmhost02.localdomain syseng-dev-kvmhost02
172.16.1.5 syseng-dev-kvmhost03.localdomain syseng-dev-kvmhost03
Each host has three IPs: one each for the management network (172.16.1.0/24), the public network (10.40.205.0/24) and the SAN network (192.168.1.0/24).
I used the management network addresses for the cluster.
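To double-check that every node resolves the other members by name, a quick loop like the one below can be run on each host (my own sanity check, not part of the original notes):
for h in syseng-dev-kvmhost01 syseng-dev-kvmhost02 syseng-dev-kvmhost03; do
    getent hosts $h
done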
Install packages
yum install lvm2-cluster cman
Cluster settings
If you are planning to use fencing in the cluster environment, please follow the guide DRAC6 KVMHost Cluster Fencing.
/etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster name="clvmkvm" config_version="2">
    <!--
      post_join_delay: number of seconds the daemon will wait before fencing any victims after a node joins the domain
      post_fail_delay: number of seconds the daemon will wait before fencing any victims after a domain member fails
      clean_start    : prevent any startup fencing the daemon might do.
                       It indicates that the daemon should assume all nodes are in a clean state to start.
    -->
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="syseng-dev-kvmhost01" nodeid="1">
        </clusternode>
        <clusternode name="syseng-dev-kvmhost02" nodeid="2">
        </clusternode>
        <clusternode name="syseng-dev-kvmhost03" nodeid="3">
        </clusternode>
    </clusternodes>
    <!-- cman two nodes specification -->
    <!-- <cman expected_votes="1" two_node="1"/> -->
    <fencedevices/>
</cluster>
I didn't use any fencing in the above configuration, but you should use fencing when you run a cluster in a production environment.
When you have only two servers, you should enable the <cman expected_votes="1" two_node="1"/> line (it is commented out in the configuration above); it is not needed with three or more nodes.
If you want to know more about this, please refer to the Red Hat Enterprise Linux clustering documentation.
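Before distributing cluster.conf to the other nodes, it can be worth validating it first. The check below is my addition and assumes the ccs_config_validate tool shipped with cman is available:
ccs_config_validate
scp /etc/cluster/cluster.conf syseng-dev-kvmhost02:/etc/cluster/
scp /etc/cluster/cluster.conf syseng-dev-kvmhost03:/etc/cluster/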
Firewall settings
Not working - need more time
/etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49216 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 16509 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p udp -m addrtype --dst-type MULTICAST -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 21064 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 11111 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 16851 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 8084 -j ACCEPT
-A INPUT -p igmp -j ACCEPT
COMMIT
Important rules
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p udp -m addrtype --dst-type MULTICAST -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 21064 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 11111 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 16851 -j ACCEPT
-A INPUT -s 172.16.1.0/24 -d 172.16.1.0/24 -p tcp -m state --state NEW -m tcp --dport 8084 -j ACCEPT
-A INPUT -p igmp -j ACCEPT
service iptables restart
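To confirm the rules were actually loaded after the restart (a quick check of my own, not in the original notes):
iptables -nvL INPUT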
Another way to add rules
You can also add the above rules with the commands below:
iptables -A INPUT -m state --state NEW -m multiport -p udp -s 172.16.1.0/24 --dports 5404,5405 -j ACCEPT
iptables -A INPUT -m addrtype --dst-type MULTICAST -m state --state NEW -m multiport -p udp -s 172.16.1.0/24 --dports 5404,5405 -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp -s 172.16.1.0/24 --dport 21064 -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp -s 172.16.1.0/24 --dport 16851 -j ACCEPT
iptables-save
Start & check cluster
# service cman restart
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
Check cluster status
[root@syseng-dev-kvmhost02 cluster]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M    240   2015-01-07 13:59:32  syseng-dev-kvmhost01
   2   M    240   2015-01-07 13:59:32  syseng-dev-kvmhost02
   3   M    244   2015-01-07 13:59:41  syseng-dev-kvmhost03
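If you also want to confirm that the cluster is quorate, cman_tool can report the overall status as well (an extra check I'm adding here, not shown in the original post):
cman_tool status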
CLVM
Start daemon
[root@syseng-dev-kvmhost01 sysconfig]# service clvmd start
Starting clvmd:
Activating VG(s):   3 logical volume(s) in volume group "clvmvol" now active
  clvmd not running on node syseng-dev-kvmhost03
  clvmd not running on node syseng-dev-kvmhost02
  3 logical volume(s) in volume group "vg_sysengdevkvmh" now active
  clvmd not running on node syseng-dev-kvmhost03
  clvmd not running on node syseng-dev-kvmhost02
                                                           [  OK  ]
[root@syseng-dev-kvmhost03 cluster]# service clvmd start
Starting clvmd:
Activating VG(s):   3 logical volume(s) in volume group "clvmvol" now active
  clvmd not running on node syseng-dev-kvmhost02
  3 logical volume(s) in volume group "vg_sysengdevkvmh" now active
  clvmd not running on node syseng-dev-kvmhost02
                                                           [  OK  ]
[root@syseng-dev-kvmhost02 cluster]# service clvmd start
Starting clvmd:
Activating VG(s):   3 logical volume(s) in volume group "clvmvol" now active
  3 logical volume(s) in volume group "vg_sysengdevkvmh" now active
                                                           [  OK  ]
Make the volume group cluster-aware
[root@syseng-dev-kvmhost02 ~]# vgchange -c y clvmvol
  Volume group "clvmvol" successfully changed
[root@syseng-dev-kvmhost02 ~]# vgs
  VG               #PV #LV #SN Attr   VSize   VFree
  clvmvol            1   3   0 wz--nc   2.73t  2.73t
  vg_sysengdevkvmh   1   3   0 wz--n- 278.38g      0
Now we see the 'c' flag in the VG attributes.
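From this point you can create logical volumes in clvmvol as usual, and clvmd will activate them on every node. The example below is my own; the LV name and size are made up purely for illustration:
lvcreate -L 10G -n testlv clvmvol
lvs clvmvol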
Start services on boot
chkconfig cman on
chkconfig clvmd on
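Because the clustered volume group sits on an iSCSI disk, I would also make sure the iSCSI services start at boot so the disk is available before clvmd comes up (my assumption, not part of the original list):
chkconfig iscsid on
chkconfig iscsi on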