Friday 19 June 2015

Elastic Virtual Switch

Packets coming from machines on different networks (for example the 200.x, 192.x, or 10.x series) are encapsulated and carried between hosts using tunneling: the original frame is wrapped inside an outer packet on the transport network.
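EVS uses VXLAN for this tunneling (set below with l2-type=vxlan). Conceptually, VXLAN wraps the original inner frame behind an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI); a minimal sketch of building that header in Python (the frame bytes are made up for illustration):

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header (RFC 7348) to an inner Ethernet frame.

    Layout: 8 flag bits (0x08 = VNI is valid), 24 reserved bits,
    24-bit VNI, 8 reserved bits.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24                      # I-flag set, rest reserved
    header = struct.pack("!II", flags, vni << 8)
    return header + inner_frame

# A made-up inner frame; in reality this is the VM's full L2 frame.
frame = b"\xaa" * 14 + b"payload"
packet = vxlan_encapsulate(frame, vni=200)  # VNI from the 200-300 range below
assert packet[:8] == b"\x08\x00\x00\x00\x00\x00\xc8\x00"
```

On the wire this VXLAN payload itself rides inside UDP/IP on the underlay network, which is why the vxlan-addr underlay address is configured below.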


digest -v -a md5 /export/share/mys11.uar


On server S11-server1 (192.168.0.112):

#pkg install evs
#pkg install rad-evs-controller (RAD: Remote Administration Daemon)
#svcadm refresh rad:local
#svcadm disable rad:local
#svcadm enable rad:local

Configure passwordless SSH authentication for the evsuser:

#ssh-keygen -t rsa
(the public key is generated in /root/.ssh/)
#cat /root/.ssh/id_rsa.pub
#cat /root/.ssh/id_rsa.pub > /var/user/evsuser/.ssh/authorized_keys
#cd /var/tmp/
#cat s11desktop.pub >> /var/user/evsuser/.ssh/authorized_keys
#evsadm show-controlprop
#evsadm set-controlprop -p l2-type=vxlan
#evsadm set-controlprop -p vxlan-range=200-300

The vxlan-addr property is the tunneling (underlay) network address:

#evsadm set-controlprop -p vxlan-addr=192.168.0.0/24
#evsadm create-evs App_Evs
#evsadm show-evs
#evsadm show-evsprop
#evsadm add-ipnet -p subnet=192.168.3.0/24 App_Evs/ipnet1
#evsadm show-ipnet
#evsadm help
#evsadm add-vport App_Evs/vport0
#evsadm add-vport App_Evs/vport1
#evsadm show-vport
#evsadm

On S11-desktop (192.168.0.111):

#pkg install evs
#which evsadm 
#grep evsuser /etc/passwd
#grep evsuser /etc/shadow
#evsadm
#scp /root/.ssh/id_rsa.pub oracle@s11-server1:/var/tmp/s11desktop.pub
#evsadm set-prop -p controller=ssh://evsuser@s11-server1
#evsadm


Finally, go to any zone's configuration and change its network setting to use one of the EVS vports.



                

Thursday 18 June 2015

Network High Availability in SOLARIS 11



1. Trunking
2. DLMP (Dynamic Link Multipathing), available only from Solaris 11.2
3. IPMP (Internet Protocol Multipathing)

Trunking and DLMP are the two modes of link aggregation.

Aggregation works at the link layer; IPMP works at the IP (network) layer.

Aggregation:-

Basically it can be done in two modes:
         a. Trunking
         b. DLMP

Trunking:-

#dladm create-aggr -l net1 -l net2 aggr0
#ipadm create-ip aggr0
#ipadm create-addr -T static -a 192.168.0.161/24 aggr0/v4

If net1 and net2 each run at 1 Gb/s, the trunk gives 2 Gb/s of aggregate bandwidth. The dependency is that both NIC ports must be connected to the same switch: if that switch fails, the trunk fails. It can span switches only when they are clustered (stacked in a cascaded manner).

Trunking therefore provides NIC-level availability only; the switch remains a single point of failure.
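The reason a trunk gives aggregate rather than per-flow bandwidth is that traffic is spread by hashing each flow onto one member link, so packets of a single flow stay ordered on one NIC. A rough Python sketch of that placement (the function and names are illustrative, not Solaris's actual hash policy):

```python
import zlib

def pick_link(links, src_ip, dst_ip, src_port, dst_port):
    """Hash a flow's 4-tuple onto one aggregation member link.

    All packets of one flow land on the same link (preserving ordering);
    different flows spread across links, yielding aggregate bandwidth.
    """
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return links[zlib.crc32(key) % len(links)]

links = ["net1", "net2"]
# The same flow always maps to the same member link:
assert pick_link(links, "10.0.0.1", "10.0.0.2", 40000, 80) == \
       pick_link(links, "10.0.0.1", "10.0.0.2", 40000, 80)
```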

DLMP:-

#dladm create-aggr -m dlmp -l net1 -l net2 aggr1
#ipadm create-ip aggr1
#ipadm create-addr -T static -a 192.168.0.162/24 aggr1/v4

To convert a trunk aggregation to DLMP:

#dladm modify-aggr -m dlmp aggr1

IPMP

Steps
1. Create IPMP Group
2. Put Network ports
3. Assign IP to group

#ipadm create-ip net1
#ipadm create-ip net2
#ipadm create-ipmp -i net1 -i net2 ipmp0
#ipadm create-addr -T static -a 192.168.0.168/24 ipmp0/db1
#ipadm show-addr

By default, failure detection is link-based (it monitors only the physical link state).

Monitor the IPMP status and gain INFO

#ipmpstat -i 
#ipmpstat -g 
#ipmpstat -p (to check probe activity)
#ipmpstat -t (to check the targets)

To configure probe-based failure detection, just assign a test IP address to each underlying NIC; the keyword test in the address object name marks it:

#ipadm create-addr -T static -a 192.168.0.155/24 net1/test
#ipadm create-addr -T static -a 192.168.0.156/24 net2/test
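With probe-based detection, probes are sent from each test address to on-link targets, and an interface is declared failed after a run of lost probes. A simplified sketch of that classification logic (the threshold is illustrative, not Solaris's exact tunable):

```python
def probe_state(results, fail_threshold=5):
    """Classify an interface from its most recent probe results.

    `results` is a list of booleans (True = probe answered), newest last.
    The interface is 'failed' once `fail_threshold` consecutive probes are
    lost, mirroring in spirit how in.mpathd detects probe-based failure.
    """
    recent = results[-fail_threshold:]
    if len(recent) == fail_threshold and not any(recent):
        return "failed"
    return "ok"

assert probe_state([True, False, False, False, False, False]) == "failed"
assert probe_state([False, False, True, False, False]) == "ok"
```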

To delete the IPMP configuration:

#ipadm delete-addr ipmp0/db1
#ipadm delete-addr net1/test
#ipadm delete-addr net2/test
#ipadm remove-ipmp -i net1 ipmp0
#ipadm remove-ipmp -i net2 ipmp0
#ipadm delete-ip net1
#ipadm delete-ip net2
#ipadm delete-ipmp ipmp0
#ipadm show-addr


Integrated Load Balancer

1. Health check
2. Server group
3. Rule

#pkg install ilb
#svcs ilb
#svcadm enable ilb (it initially drops to maintenance; cleared after enabling forwarding)
#ipadm set-prop -p forwarding=on ipv4
#svcadm clear ilb
#ilbadm create-healthcheck -h hc-test=PING,hc-timeout=3,hc-count=3,hc-interval=10 hc1
#ilbadm create-servergroup -s server=192.168.1.101,192.168.1.102 sg1
#ilbadm create-rule -e -p -i vip=192.168.0.200,port=80,protocol=tcp -m lbalg=rr,type=HALF-NAT -h hc-name=hc1 -o servergroup=sg1 rule1
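Under the hood, the rr algorithm simply cycles through the healthy members of the server group, skipping any that fail their health check. A minimal sketch (the server IPs reuse the ones above; the class itself is illustrative, not ILB's implementation):

```python
class RoundRobinGroup:
    """Cycle through servers in order, skipping unhealthy ones."""

    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)     # health checks prune this set
        self.idx = 0

    def mark_down(self, server):
        self.healthy.discard(server)

    def next_server(self):
        for _ in range(len(self.servers)):
            server = self.servers[self.idx % len(self.servers)]
            self.idx += 1
            if server in self.healthy:
                return server
        return None                     # whole group unhealthy

sg1 = RoundRobinGroup(["192.168.1.101", "192.168.1.102"])
assert sg1.next_server() == "192.168.1.101"
assert sg1.next_server() == "192.168.1.102"
sg1.mark_down("192.168.1.102")          # e.g. hc1 reported it down
assert sg1.next_server() == "192.168.1.101"
```

HALF-NAT in the rule above means only the destination address is rewritten to the chosen server, so the server still sees the client's real source IP.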






Wednesday 17 June 2015

Networking in Oracle Solaris 11

Default networking service in Solaris 11 is

network/physical:default

# svcprop network/physical:default | grep -i active_ncp
This shows DefaultFixed, which defines the type of network configuration.

The dladm command operates at the physical and data-link layers.
The ipadm command operates at the network layer.



commands to find the network-ports

         #dladm show-phys

By default, the network cards are renamed using the convention net0, net1, net2, net3, ...

         # dladm show-phys -m 

this would show you the MAC address

        # dladm show-link

lists the datalinks (the configured network ports)

Plumbing the network card

#ipadm create-ip net1

Assigning IP Address

#ipadm show-addr (lists the current IP's configured)
#ipadm create-addr -T static -a 192.168.0.150/24 net1
        -T type of network
        -a Address
        -t to assign temporary IP address

To tag an address so it is easy to identify which virtual IP serves which purpose, name the address object:

#ipadm create-addr -T static -a 192.168.0.151/24 net1/apache1

/etc/ipadm/ipadm.conf (the file that holds the persistent configuration of all the IPs)

Command to change the hostname

#svcs identity/node
#svcprop identity/node|grep nodename
#svccfg -s identity/node setprop config/nodename=<desired hostname>
#svcadm refresh identity/node

To change the DNS client entries..

#svcs dns/client
#svcprop dns/client |grep nameserver
#svccfg -s dns/client setprop config/nameserver=<desired DNS server address>
#svcadm refresh dns/client

To see the IP properties

#ipadm show-ifprop net1

for IPMP

#ipadm set-ifprop -p standby=on -m ip net1

To change the MTU value

#ipadm set-ifprop -p mtu=1400 -m ipv4 net1

To enable forwarding for all the NIC's

#ipadm set-prop -p forwarding=on ipv4 

To set the MTU to jumbo frames (MTU value 9000), which is possible only at the link layer:

#dladm show-linkprop -p mtu net1
#dladm set-linkprop -p mtu=9000 net1

Note: to do the above operation you have to unplumb the interface first.

To create Virtual NIC 

#dladm create-vnic -l net1 vnic0
#dladm set-linkprop -p maxbw=100m vnic0
#dladm show-vnic (Gives the details of all VNICS)
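The maxbw property caps the VNIC's bandwidth (100m = 100 Mbit/s above). The classic way to enforce such a cap is a token bucket, sketched here purely conceptually (this is not dladm's actual implementation):

```python
class TokenBucket:
    """Admit traffic up to `rate` bytes/sec, with bursts up to `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# 100 Mbit/s is 12_500_000 bytes/s; burst of one full-size frame.
tb = TokenBucket(rate=12_500_000, burst=1_500)
assert tb.allow(1_500, now=0.0)       # burst fits
assert not tb.allow(1_500, now=0.0)   # bucket drained at the same instant
assert tb.allow(1_500, now=0.001)     # refilled after 1 ms
```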

Firewall rules can be applied to VNICs, which was not possible in Solaris 10.




Network Virtualization

dladm show-vnic
dladm create-etherstub stub0 
dladm show-etherstub

dladm create-vnic -l stub0 vnic0
dladm create-vnic -l stub0 vnic1
dladm create-vnic -l stub0 vnic2

dladm show-vnic
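An etherstub acts as an internal virtual switch: the VNICs attached to it can talk to each other without any physical NIC. A toy sketch of what such a switch does, i.e. learn source MACs and forward by destination MAC (all names and MAC strings are illustrative):

```python
class EtherStub:
    """Toy software switch: learn source MACs, forward by destination MAC."""

    def __init__(self):
        self.fdb = {}                        # forwarding table: MAC -> VNIC

    def receive(self, in_port, src_mac, dst_mac):
        self.fdb[src_mac] = in_port          # learn where the source lives
        # Known destination: forward to its VNIC; unknown: flood to all.
        return [self.fdb[dst_mac]] if dst_mac in self.fdb else ["*flood*"]

stub0 = EtherStub()
assert stub0.receive("vnic0", "aa:aa", "bb:bb") == ["*flood*"]  # bb unknown yet
assert stub0.receive("vnic1", "bb:bb", "aa:aa") == ["vnic0"]    # aa was learned
```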

Tuesday 16 June 2015

AI Configuration for solaris 11

AUTOMATED INSTALLATION 
The JumpStart concept has been removed from Solaris 11 and replaced by the Automated Installer (AI).
Basically five steps:

Create DHCP server
Create Service
configure client
create manifest
Create profile

Update netmasks file
AI  Manifests
1. Default (/export/ai/install/auto_install/manifest/default.xml)
Sample contents (abridged): the manifest name ("default"), the disk partitioning, and the IPS origin http://pkg.oracle.com/solaris/release.
2. Custom
#cp /export/ai/install/auto_install/manifest/default.xml /var/tmp/
#cd /var/tmp
#mv default.xml mymanifest.xml
#vi mymanifest.xml
name=mymanifest
auto_reboot="true"
http://s11-server1.mydomain.com
:wq!

3. Criteria Manifest
DHCP server               Install Server     IPS

DHCP configuration
#installadm set-server -i 192.168.0.130 -c 5 -m
if installadm command is not found then install it from IPS 
#pkg install installadm
-c  number of IP addresses to allocate in the DHCP range
-i  starting IP address of that range
-m  let installadm manage the DHCP server

INSTALL SERVER
#installadm create-service -n basic_ai -s /var/tmp/ai_X86.iso -d /export/ai/install
AI SERVICE = ARCHITECTURE + RELEASE

#installadm create-client -e 00:4F:F8:00:00:00 -n basic_ai
-e  "MAC address" in this case 
-n "name of the service"
for booting from OK prompt
ok > boot net:dhcp - install
#installadm create-manifest -f /var/tmp/mymanifest.xml -c mac=<mac address> -n basic_ai

#installadm list -c
#installadm delete-service default_i386
#sysconfig create-profile -o /var/tmp
# installadm create-profile -p client1 -f /var/tmp/sc-profile.xml -c mac="MAC Address" -n basic_ai