Wednesday 29 May 2024

Creating NFS service group.

 First we need to enable the NFS services on the server, but the autostart of these NFS services across reboots of the VCS nodes has to be disabled, since VCS will bring them up and down itself.






svccfg -s nfs/server setprop "application/auto_enable=false"
svccfg -s nfs/mapid setprop "application/auto_enable=false" 
svccfg -s nfs/nlockmgr setprop "application/auto_enable=false"
svccfg -s nfs/status setprop "application/auto_enable=false"
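To confirm the change took effect, you can read the property back (the abbreviated FMRIs below assume the stock Solaris NFS services):

svcprop -p application/auto_enable nfs/server
svcprop -p application/auto_enable nfs/nlockmgr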
Below are the resources we need to configure to set up the NFS service group. I have listed them phase-wise.

Phase1
1. NFSRestart
2. Share

Phase2
3. DiskGroup
4. Mount

Phase3
5. IP
6. NIC

A bottom-to-top approach is the best method for creating VCS service groups.
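Before adding the group and its resources, the VCS configuration has to be opened read-write (we close and save it again at the end with haconf -dump -makero):

#haconf -makerw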
#hagrp -add nfssg
#hagrp -modify nfssg SystemList sys1 0 sys2 1
#hagrp -modify nfssg AutoStartList sys1

IP Creation

#hares -add mnicb MultiNICB nfssg
#hares -modify mnicb Critical 0
#hares -modify mnicb Device e1000g0
#hares -modify mnicb Enabled 1

#hares -add ipmnicb IPMultiNICB nfssg
#hares -modify ipmnicb Critical 0
#hares -modify ipmnicb Address 192.168.1.100
#hares -modify ipmnicb BaseResName mnicb
#hares -modify ipmnicb NetMask 255.255.255.0
#hares -modify ipmnicb Enabled 1

###################

Now we need a mount point to be shared. This mount point will come from a disk group, as we are using VxVM. So create a DiskGroup resource and name it nfsdg. (Don't confuse it with the service group name nfssg.)

#hares -add nfsdg DiskGroup nfssg
#hares -modify nfsdg Critical 0
#hares -modify nfsdg DiskGroup dg1
#hares -modify nfsdg Enabled 1
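Note that the disk group and volume must already exist in VxVM before the resources can bring them online. A rough sketch of creating them, assuming a spare disk c1t1d0 and a 2g volume size (both illustrative):

#vxdg init dg1 dg1disk01=c1t1d0
#vxassist -g dg1 make vol1 2g
#mkfs -F vxfs /dev/vx/rdsk/dg1/vol1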



#hares -add nfsmount Mount nfssg
#hares -modify nfsmount Critical 0
#hares -modify nfsmount BlockDevice /dev/vx/dsk/dg1/vol1
#hares -modify nfsmount MountPoint /test
#hares -modify nfsmount FSType vxfs
#hares -modify nfsmount FsckOpt %-y
#hares -modify nfsmount Enabled 1

(Note: the VxVM block device path takes the form /dev/vx/dsk/<diskgroup>/<volume>, and the Mount agent also needs FSType and FsckOpt.)

###################


#hares -add nfsshare Share nfssg
#hares -modify nfsshare Critical 0
#hares -modify nfsshare PathName /test
#hares -modify nfsshare Options %-y
#hares -modify nfsshare Enabled 1



The most important resource is the NFSRestart resource; it restarts the NFS services whenever VCS calls it, which normally happens when the service group is brought online or offline. As it is the most important, we will give it the highest priority and make it the top resource in the dependency hierarchy. First add it:

#hares -add nfsrestart NFSRestart nfssg
#hares -modify nfsrestart Critical 0
#hares -modify nfsrestart Enabled 1



As we know NFSRestart is the most important resource, so make it the grandfather, i.e. keep it at the top of the dependency tree: NFSRestart -> Share -> Mount -> DiskGroup, and the other tree is IP -> NIC. That's it. DONE. We build 2 dependency trees, not 1, because putting everything into a single tree would violate the VCS limit of a maximum of 5 levels in a resource dependency tree.

#hares -link nfsrestart nfsshare
#hares -link nfsshare nfsmount
#hares -link nfsmount nfsdg

#hares -link ipmnicb mnicb
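To double-check the group membership and the links before saving the configuration (standard VCS query commands):

#hagrp -resources nfssg
#hares -dep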

###################DONE ...!!!###################

#haconf -dump -makero

BRING THE SERVICE GROUP ONLINE :-

#hagrp -online nfssg -sys sys1
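Then confirm that the group and its resources actually came up:

#hagrp -state nfssg
#hares -state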

Clarity of facts :-
1. Here we are working on the NFS server, not on the client. We are providing high availability to the "nfs share".
2. On the client, simply mount it with the "mount" command. If you want to provide HA for this mount point as well, a simple "Mount" type resource will work, with the block device set to "192.168.1.100:/test".
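
For reference, a minimal client-side mount on a Solaris client (assuming the mount point /mnt already exists; on Linux the equivalent flag is -t nfs):

#mount -F nfs 192.168.1.100:/test /mnt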


Sunday 16 May 2021

REDHAT TALKING TO ISCSI PROTOCOL

 Managing ISCSI in LINUX (REDHAT 7.1)


iSCSI is a block protocol for storage networking and runs the very common SCSI storage protocol across a network connection which is usually Ethernet. iSCSI, like Fibre Channel, can be used to create a Storage Area Network (SAN). iSCSI traffic can be run over a shared network or a dedicated storage network.

Accessing iSCSI storage in a Linux environment works in phases:
  • Set up the initiator node
  • Discover the targets
  • Log in to the target
  • Access the block storage, create a filesystem, mount and use it
  • Log out of the target.

  1. Set up the initiator node.

Install all the utilities required to administer the iSCSI devices in the environment. These packages usually come built in if you chose the "full server installation" while setting up/building the server.

# yum install iscsi-initiator-utils
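
On RHEL 7 it is also worth making sure the iscsid service is enabled and noting the initiator IQN the host will present (the file path is the standard one):

# systemctl enable iscsid
# systemctl start iscsid
# cat /etc/iscsi/initiatorname.iscsi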

    2.  Discover the target

# iscsiadm -m discovery|discoverydb|node
                    -t sendtargets|SLP|iSNS|fw
                    -p <portal (IP:port)>

# iscsiadm -m discovery -t st -p <IP:port>  (Syntax)
# iscsiadm -m discovery -t st -p 192.168.0.12:3260


-m  ==> Mode
-t  ==> Type of discovery
-p  ==> Portal (target IP address and port)

Type 

sendtargets  :- Native iSCSI discovery; the target portal reports the targets it makes available to the initiator.

SLP          :- Service Location Protocol, used by targets to announce their availability.

iSNS         :- Internet Storage Name Service, a registry that records the targets available on the network.

fw           :- Firmware discovery; reads target/boot information held in the NIC or system firmware (mainly used for iSCSI boot, rarely seen in ordinary environments).


# iscsiadm -m discovery -t st -p 192.168.0.12:3260
192.168.0.12:3260,1 iqn.2006-04.example:3260     (sample output)


    3. Login to the target


# iscsiadm -m node -T <Target Name> -p <Portal:port> -l 

# iscsiadm -m node -T iqn.2006-04.example:3260 -l 


    4. Access the block storage and mount the filesystems


   # grep -i "attached scsi" /var/log/messages

   # mkfs.ext4 /dev/<diskName>
    
   # mount /dev/disk_name /mount_point
   
   # echo "/dev/disk_name    /mount_point    ext4    _netdev    0    0" >>/etc/fstab
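
Because /dev/sdX names for iSCSI LUNs can change across reboots, mounting by UUID is safer; a small sketch (the partition name is illustrative):

   # blkid /dev/sdb1
   # echo "UUID=<uuid-from-blkid>    /mount_point    ext4    _netdev    0    0" >>/etc/fstab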

    

    5.    Logout of iscsi target


    # iscsiadm -m node -T iqn.2006-04.example:3260 -p 192.168.0.12:3260 -u 


Note :- Logging out is essential because "The sendtargets command used to retrieve --targetname and --portal values overwrites the contents of the /var/lib/iscsi/nodes database. This database will then be repopulated using the settings in /etc/iscsi/iscsid.conf. However, this will not occur if a session is currently logged in and in use."



Troubleshooting ISCSI Devices


If the iSCSI devices are not found, perform a rescan:

    # iscsiadm -m node --rescan

If a LUN is resized at the NetApp (array) level, the change is reflected on the host side after a rescan, and the filesystem can then be resized online without downtime:

    # echo 1 > /sys/block/sdX/device/rescan
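
After the rescan, the filesystem itself still has to be grown; for ext4 this can be done online (if the LUN is partitioned, grow the partition first -- device name is illustrative):

    # resize2fs /dev/sdX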









Friday 19 June 2015

Elastic Virtual Switch




 Packets coming from two different machines on different networks (for example the 200.x, 192.x, or 10.x series) are encapsulated, and all the packets are transferred between hosts using the concept of tunneling and encapsulation (VXLAN).


digest -v -a md5 /export/share/mys11.uar


on server S11-server1 (192.168.0.112)

#pkg install evs
#pkg install rad-evs-controller (RAD :- remote administration daemon)
#svcadm refresh rad:local
#svcadm disable rad:local
#svcadm enable rad:local

configure password less authentication for the evsuser

#ssh-keygen -t rsa
#public key would be generated... 
#cat /root/.ssh/id_rsa.pub
#cat /root/.ssh/id_rsa.pub > /var/user/evsuser/.ssh/authorized_keys
#cd /var/tmp/
#cat s11desktop.pub >>/var/user/evsuser/.ssh/authorized_keys
#evsadm show-controlprop
#evsadm set-controlprop -p l2-type=vxlan
#evsadm set-controlprop -p vxlan-range=200-300

****** vxlan-addr parameter is Tunneling network address*******

#evsadm set-controlprop -p vxlan-addr=192.168.0.0/24
#evsadm create-evs App_Evs
#evsadm show-evs
#evsadm show-evsprop
#evsadm add-ipnet -p subnet=192.168.3.0/24 App_Evs/ipnet1
#evsadm show-ipnet
#evsadm help
#evsadm add-vport App_Evs/vport0
#evsadm add-vport App_Evs/vport1
#evsadm show-vport
#evsadm

on S11-desktop (192.168.0.111)

#pkg install evs
#which evsadm 
#grep evsuser /etc/passwd
#grep evsuser /etc/shadow
#evsadm
#scp /root/.ssh/id_rsa.pub oracle@s11-server1:/var/tmp/s11desktop.pub
#evsadm set-prop -p controller=ssh://evsuser@s11-server1
#evsadm


Finally, go to any zone and change its network configuration to use the EVS vport instead of a plain net/anet value.
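
A minimal sketch of what that zone change looks like (zone name is illustrative; the evs/vport anet properties are as I recall them on Solaris 11.2, so verify against zonecfg(1M)):

#zonecfg -z myzone
zonecfg:myzone> add anet
zonecfg:myzone:anet> set evs=App_Evs
zonecfg:myzone:anet> set vport=vport0
zonecfg:myzone:anet> end
zonecfg:myzone> commit
zonecfg:myzone> exit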



                

Thursday 18 June 2015

Network High Availability in SOLARIS 11

Network High Availability


1. Trunking
2. DLMP (Dynamic Link Multipathing) only 11.2
3. IPMP (Internet protocol Multipathing)

Trunking + DLMP = Aggregation

Aggregation is at LINK Layer
IPMP is IP Layer/Network Layer.

Aggregation:-

Aggregation can basically be done in two modes:
         a. Trunking
         b. DLMP

Trunking:-

#dladm create-aggr -l net1 -l net2 aggr0
#ipadm create-ip aggr0
#ipadm create-addr -T static -a 192.168.0.161/24 aggr0/v4
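
A quick way to verify the trunk and its member ports:

#dladm show-aggr
#dladm show-link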

If net1 and net2 are 1 Gb each, the trunk gives roughly 2 Gb of combined bandwidth. The dependency is that both NIC ports must be connected to the same switch; if that switch fails, the trunking fails. It can still work across switches if the switches are clustered/stacked in a cascaded manner.

In other words, trunking gives NIC availability only until the switch itself fails.

DLMP:-

#dladm create-aggr -m dlmp -l net1 -l net2 aggr1
#ipadm create-ip aggr1
#ipadm create-addr -T static -a 192.168.0.162/24 aggr1/v4

If you want to modify a trunk aggregation to DLMP:

#dladm modify-aggr -m dlmp aggr1
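
As a sanity check, dladm show-aggr should now report the aggregation mode as dlmp:

#dladm show-aggr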

IPMP

Steps
1. Create IPMP Group
2. Put Network ports
3. Assign IP to group

#ipadm create-ip net1
#ipadm create-ip net2
#ipadm create-ipmp -i net1 -i net2 ipmp0
#ipadm create-addr -T static -a 192.168.0.168/24 ipmp0/db1
#ipadm show-addr

By default the failure detection (health check) will be link-based.

Monitor the IPMP status and gain INFO

#ipmpstat -i 
#ipmpstat -g 
#ipmpstat -p (to check probe activity)
#ipmpstat -t (to check the targets)

To configure probe-based failure detection, assign a test IP address directly to each of the underlying NICs.

"test" in the address object names below is just a naming convention marking them as test addresses:

#ipadm create-addr -T static -a 192.168.0.155/24 net1/test
#ipadm create-addr -T static -a 192.168.0.156/24 net2/test

for deleting the IPMP.

#ipadm delete-addr ipmp0/db1
#ipadm delete-addr net1/test
#ipadm delete-addr net2/test
#ipadm remove-ipmp -i net1 ipmp0
#ipadm remove-ipmp -i net2 ipmp0
#ipadm delete-ip net1
#ipadm delete-ip net2
#ipadm delete-ipmp ipmp0
#ipadm show-addr


Integrated Load Balancer

1. Health check
2. Server group
3. Rule

#pkg install ilb
#svcs ilb
#svcadm enable ilb (this will initially put it into maintenance until forwarding is enabled and the service is cleared, as below)
#ipadm set-prop -p forwarding=on ipv4
#svcadm clear ilb
#ilbadm create-hc -h hc-test=PING,hc-timeout=3,hc-count=3,hc-interval=10 hc1
#ilbadm create-sg -s server=192.168.1.101,192.168.1.102 sg1
#ilbadm create-rule -e -p -i vip=192.168.0.200,port=80,protocol=tcp -m lbalg=rr,type=HALF-NAT -h hc-name=hc1 -o servergroup=sg1 rule1
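
To review what has been configured, the ilbadm show subcommands:

#ilbadm show-healthcheck
#ilbadm show-servergroup
#ilbadm show-rule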






Wednesday 17 June 2015

Networking in Oracle Solaris 11

Default networking service in Solaris 11 is

network/physical:default

# svcprop network/physical:default|grep -i active_ncp
gives us the active network configuration profile (e.g. DefaultFixed), which tells us how networking is being managed on this installation

dladm command is used in PHYSICAL and DATA LINK layers
ipadm command is used in NETWORK layer



commands to find the network-ports

         #dladm show-phys

By default the physical network cards are renamed to the generic names net0, net1, net2, net3, ...

         # dladm show-phys -m 

this would show you the MAC address

        # dladm show-link

lists the datalinks configured on the system (including the ones that have been plumbed)

Plumbing the network card

#ipadm create-ip net1

Assigning IP Address

#ipadm show-addr (lists the currently configured IPs)
#ipadm create-addr -T static -a 192.168.0.150/24 net1
        -T type of address (static, dhcp, or addrconf)
        -a address
        -t makes the address temporary (not persistent across reboots)

To add a tag for easy identification of which virtual IP is assigned for which purpose, name the address object, as below.

#ipadm create-addr -T static -a 192.168.0.151/24 net1/apache1

/etc/ipadm/ipadm.conf (this is the file which holds the persistent information of all the IPs)

Command to change the hostname

#svcs identity:node
#svcprop identity:node | grep nodename
#svccfg -s identity:node setprop config/nodename=<desired hostname>
#svcadm refresh identity:node

To change the DNS client entries..

#svcs dns/client
#svcprop dns/client | grep nameserver
#svccfg -s dns/client setprop config/nameserver=<desired DNS server address>
#svcadm refresh dns/client

To see the IP properties

#ipadm show-ifprop net1

for IPMP

#ipadm set-ifprop -p standby=on -m ip net1

To change the MTU value

#ipadm set-ifprop -p mtu=1400 -m ipv4 net1

To enable forwarding for all the NIC's

#ipadm set-prop -p forwarding=on ipv4 

To enable jumbo frames (MTU value 9000), which is possible only at the link layer:

#dladm show-linkprop -p mtu net1
#dladm set-linkprop -p mtu=9000 net1

Note: To do the above operation you have to unplumb the IP interface first (ipadm delete-ip net1) and recreate it afterwards

To create Virtual NIC 

#dladm create-vnic -l net1 vnic0
#dladm set-linkprop -p maxbw=100m vnic0
#dladm show-vnic (Gives the details of all VNICS)
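
To actually use a VNIC, plumb it and assign an address just like a physical link; a minimal sketch (the address is illustrative):

#ipadm create-ip vnic0
#ipadm create-addr -T static -a 192.168.0.170/24 vnic0/v4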

Firewall rules can be applied to VNICs, which was not possible in Solaris 10.




Network Virtualization

dladm show-vnic
dladm create-etherstub stub0 
dladm show-etherstub

dladm create-vnic -l stub0 vnic0
dladm create-vnic -l stub0 vnic1
dladm create-vnic -l stub0 vnic2

dladm show-vnic

Tuesday 16 June 2015

AI Configuration for solaris 11

AUTOMATED INSTALLATION 
jumpstart concept has been removed from SOLARIS 11
Basically 5 steps:--

Create DHCP server
Create Service
configure client
create manifest
Create profile

Update netmasks file
AI  Manifests
1. Default (/export/ai/install/auto_install/manifest/default.xml)
Sample contents: name="default", the default disk partitioning, and the IPS origin http://pkg.oracle.com/solaris/release
2. Custom
#cp /export/ai/install/auto_install/manifest/default.xml /var/tmp/
#cd /var/tmp
#mv default.xml mymanifest.xml
#vi mymanifest.xml
(inside the file: set name="mymanifest", set auto_reboot="true", and point the IPS origin at http://s11-server1.mydomain.com)
:wq!

3. Criteria Manifest

Components involved: DHCP server, Install server, IPS repository

DHCP configuration
#installadm set-server -i 192.168.0.130 -c 5 -m
if installadm command is not found then install it from IPS 
#pkg install installadm
-c  number of clients that can be installed concurrently
-i  initial (starting) IP address
-m  DHCP is managed by the AI server

INSTALL SERVER
#installadm create-service -n basic_ai -s /var/tmp/ai_X86.iso -d /export/ai/install
AI SERVICE = ARCHITECTURE + RELEASE

#installadm create-client -e 00:4F:F8:00:00:00 -n basic_ai
-e  "MAC address" in this case 
-n "name of the service"
for booting from OK prompt
ok > boot net:dhcp - install
#installadm create-manifest -n basic_ai -f /var/tmp/mymanifest.xml -c mac=<mac address>

#installadm list -c
#installadm delete-service default_i386
#sysconfig create-profile -o /var/tmp
# installadm create-profile -p client1 -f /var/tmp/sc-profile.xml -c mac="MAC Address" -n basic_ai
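
To cross-check what the service now holds (manifests, profiles and clients) -- the flags below are the ones I recall for installadm list, so verify on your release:

#installadm list -n basic_ai -m -p -c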

Friday 11 July 2014

Playing around with NIC card settings.

Setting NIC speed and duplex

Solaris is often unable to correctly auto-negotiate duplex settings with a link partner (e.g. switch), especially when the switch is set to 100Mbit full-duplex. You can force the NIC into 100Mbit full-duplex by disabling auto-negotiation and 100Mbit half-duplex capability.

Example with hme0:

1. Make the changes to the running system.
# ndd -set /dev/hme adv_100hdx_cap 0
# ndd -set /dev/hme adv_100fdx_cap 1
# ndd -set /dev/hme adv_autoneg_cap 0
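
You can verify the result with the driver's read-only parameters (for hme, as far as I recall, link_speed reports 1 for 100Mbit and link_mode reports 1 for full-duplex):
# ndd -get /dev/hme link_speed
# ndd -get /dev/hme link_mode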

2. Make kernel parameter changes to preserve the speed and duplex settings after a reboot.
# vi /etc/system
Add the following lines (without a leading "#" -- in /etc/system a "#" marks a comment, and the settings would be ignored):
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1

Note: the /etc/system change affects all hme interfaces if multiple NICs are present (e.g. hme0, hme1).