Step by Step Deleting Node In Oracle RAC (12c Release 1) Environment

 


Steps for Deleting an Instance From the Cluster database :-

Invoke dbca from node 1 (racpb1) and use the Instance Management > Delete Instance option to remove the instance from the node being deleted :

[oracle@racpb1 ~]$ . .bash_profile 
[oracle@racpb1 ~]$ 
[oracle@racpb1 ~]$ dbca
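The wizard steps themselves are not captured above. As a sketch only, dbca also offers a silent mode for instance deletion; the snippet below merely assembles the invocation (database, instance, and node names are this article's values) without executing it:

```shell
# Hypothetical silent-mode equivalent of the DBCA wizard steps, using the
# 12c "dbca -silent -deleteInstance" syntax; review before running.
DB_NAME=orcl11g
INST_NAME=orcl11g3
NODE=racpb3
CMD="dbca -silent -deleteInstance -gdbName ${DB_NAME} -instanceName ${INST_NAME} -nodeList ${NODE} -sysDBAUserName sys"
echo "$CMD"   # dbca prompts for the SYS password when run without -sysDBAPassword
```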

Check the instance status after the deletion :

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Confirm the instance has been removed from the OCR :

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
Database is administrator managed
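The check above can also be scripted by parsing the srvctl output; a minimal sketch (the saved file name is hypothetical):

```shell
# Pull the "Database instances:" list out of saved `srvctl config database`
# output so the post-deletion state can be asserted from a script.
instances_of() {
    awk -F': ' '/^Database instances:/ { print $2 }' "$1"
}

# Example against a captured snippet:
cat > /tmp/orcl11g_config.txt <<'EOF'
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
EOF
instances_of /tmp/orcl11g_config.txt   # prints orcl11g1,orcl11g2
```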

Remove Oracle RAC Database home :-

Disable and stop the listener on the node being deleted (racpb3) :

[oracle@racpb3 ~]$ srvctl status listener -l LISTENER
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racpb3,racpb2,racpb1
[oracle@racpb3 ~]$ srvctl disable listener -l LISTENER -n racpb3
[oracle@racpb3 ~]$ srvctl stop listener -l LISTENER -n racpb3

Update the inventory on the node being deleted (racpb3) :

[oracle@racpb3 ~]$ export ORACLE_SID=orcl11g3
[oracle@racpb3 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racpb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 "CLUSTER_NODES={racpb3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5869 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Deinstall ORACLE_HOME :

Specify the -local flag so that only the local node's software is removed.

[oracle@racpb3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/12.1.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.1.0/grid
The following nodes are part of this cluster: racpb3,racpb2,racpb1
Checking for sufficient temp space availability on node(s) : 'racpb3'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-28_11-36-29-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2018-12-28_11-36-31-PM.log
Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []: orcl11g
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check7641.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.1.0/grid
The following nodes are part of this cluster: racpb3,racpb2,racpb1
The cluster node(s) on which the Oracle home deinstallation will be performed are:racpb3
Oracle Home selected for deinstall is: /u01/app/oracle/product/12.1.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-28_11-37-08-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-28_11-37-08-PM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean7641.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2018-12-28_11-27-37PM/response/deinstall_2018-12-28_11-36-19-PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CLUSTER_NODES to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-28_11-27-37PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/12.1.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/12.1.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/12.1.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-28_11-27-37PM' on node 'racpb3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/12.1.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############

Update the inventory on the remaining nodes (run from racpb1) :

[oracle@racpb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 "CLUSTER_NODES={racpb1,racpb2}" 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Remove GRID_HOME :-

Check the pinned status of nodes :

[oracle@racpb1 ~]$ olsnodes -s -t
racpb1 Active Unpinned
racpb2 Active Unpinned
racpb3 Active Unpinned

If a node is pinned, run crsctl unpin css -n <node_name> as root to unpin it before proceeding.
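The pinned check can itself be scripted; a sketch that filters captured olsnodes output (in a live run, pipe olsnodes -s -t straight in):

```shell
# Print only pinned nodes from `olsnodes -s -t` style output
# (columns: node, status, pinned-state).
pinned_nodes() {
    awk '$3 == "Pinned" { print $1 }'
}

OLS_OUT='racpb1 Active Unpinned
racpb2 Active Pinned
racpb3 Active Unpinned'
printf '%s\n' "$OLS_OUT" | pinned_nodes   # prints racpb2
```

Any node printed here would then need crsctl unpin css -n <node_name> run against it as root.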

Deconfigure the Clusterware stack on the node being deleted (racpb3), as root :

[root@racpb3 ~]# /u01/app/12.1.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.12.0/255.255.255.0/eth0, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb1
VIP Name: racvr1
VIP IPv4 Address: 192.168.12.130
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb2
VIP Name: racvr2
VIP IPv4 Address: 192.168.12.131
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb3
VIP Name: racvr3
VIP IPv4 Address: 192.168.12.132
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes: 
ONS is individually disabled on nodes: 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racpb3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racpb3'
CRS-2677: Stop of 'ora.DATA.dg' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racpb3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racpb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racpb3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racpb3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.storage' on 'racpb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.storage' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.ctssd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racpb3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racpb3'
CRS-2677: Stop of 'ora.cssd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racpb3'
CRS-2677: Stop of 'ora.gipcd' on 'racpb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racpb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/12/29 00:13:32 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2018/12/29 00:14:03 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2018/12/29 00:14:05 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

Delete the node from the clusterware configuration, running as root from one of the remaining nodes :

[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl delete node -n racpb3
CRS-4661: Node racpb3 successfully deleted.

Check Clusterware status :

[oracle@racpb1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host 
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racpb1 
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racpb1 
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb2 
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racpb1 
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racpb1 
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE racpb2 
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racpb1 
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racpb2 
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racpb1 
ora.orcl11g.db ora....se.type 0/2 0/1 ONLINE ONLINE racpb1 
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racpb1 
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE racpb1 
ora.racpb1.ons application 0/3 0/0 ONLINE ONLINE racpb1 
ora.racpb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb1 
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racpb2 
ora....B2.lsnr application 0/5 0/0 ONLINE ONLINE racpb2 
ora.racpb2.ons application 0/3 0/0 ONLINE ONLINE racpb2 
ora.racpb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1 
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1
[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racpb1 ~]$ olsnodes -s -t
racpb1 Active Unpinned
racpb2 Active Unpinned
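A scripted version of this check, asserting that the deleted node is gone from the olsnodes listing, might look like this sketch over captured output:

```shell
# Succeed only if the node no longer appears in the captured listing
# (-w matches the whole node name, so racpb1 will not match racpb11).
node_removed() {
    ! printf '%s\n' "$2" | grep -qw "$1"
}

OLS_OUT='racpb1 Active Unpinned
racpb2 Active Unpinned'
node_removed racpb3 "$OLS_OUT" && echo "racpb3 removed"   # prints racpb3 removed
```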

Update the inventory on the deleted node (racpb3), this time against the Grid home (grid below is presumably a shell alias that sets the Grid Infrastructure environment) :

[oracle@racpb3 ~]$ grid
[oracle@racpb3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racpb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb3}" CRS=TRUE -local 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5980 MB Passed
The inventory pointer is located at /etc/oraInst.loc


Deinstall GRID_HOME :

[oracle@racpb3 ~]$ cd /u01/app/12.1.0/grid/deinstall  
[oracle@racpb3 deinstall]$ ./deinstall -local  

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc_2018-12-28_08-35-48PM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-28_08-35-48-PM.log
Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all. 
[ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2018-12-28_08-35-48-PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
ASM was not detected in the Oracle Home
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_2018-12-28_08-35-48-PM.log
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The following nodes are part of this cluster: null
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-35-46-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-35-46-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-08_08-36-48-PM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2018-12-28_08-36-48-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-28_08-36-48-PM.log
De-configuring Oracle Restart enabled listener(s): ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
De-configuring listener: ASMNET1LSNR_ASM
Stopping listener: ASMNET1LSNR_ASM
Deleting listener: ASMNET1LSNR_ASM
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: MGMTLSNR
Stopping listener: MGMTLSNR
Deleting listener: MGMTLSNR
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER
Stopping listener: LISTENER
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN3
Stopping listener: LISTENER_SCAN3
Deleting listener: LISTENER_SCAN3
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN2
Stopping listener: LISTENER_SCAN2
Deleting listener: LISTENER_SCAN2
Listener deleted successfully.
Listener de-configured successfully
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Deleting listener: LISTENER_SCAN1
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully. 
Network Configuration clean config END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Following Oracle Restart enabled listener(s) were de-configured successfully: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Restart is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2018-12-28_08-33-16PM/response/deinstall2018-12-28_08-33-16PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deinstall2018-12-28_08-33-16PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-33-16PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-15_28-33-16PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/12.1.0/grid' on the local node : Succeeded <<<<

Delete directory '/u01/app/oraInventory' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-29_00-52-55AM' on node 'racpb3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/12.1.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############

Update the inventory on the remaining nodes (run from racpb1, against the Grid home) :

[oracle@racpb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb1,racpb2}" CRS=TRUE 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5997 MB Passed
The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.

Verify the integrity of the cluster after the node has been removed :

[oracle@racpb1 ~]$ cluvfy stage -post nodedel -n racpb3

Performing post-checks for node removal

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Node removal check passed

Post-check for node removal was successful.

 

Catch Me On :- Hariprasath Rajaram
Telegram : https://t.me/joinchat/I_f4DkeGfZsxfzXxHD6gTg
LinkedIn : https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook : https://www.facebook.com/HariPrasathdba
FB Group : https://www.facebook.com/groups/894402327369506/
FB Page : https://www.facebook.com/dbahariprasath/?
Twitter : https://twitter.com/hariprasathdba

Step by Step Adding Node In Oracle RAC (12c Release 1) Environment

To add a node to an existing RAC environment, we first need a running Oracle RAC setup. Follow the link Steps for Oracle RAC 12cR1 Installation for a two-node RAC installation.

Existing /etc/hosts file for Two-Node RAC Setup :-

#Public
192.168.12.128 racpb1.localdomain.com racpb1
192.168.12.129 racpb2.localdomain.com racpb2

#Private
192.168.79.128 racpv1.localdomain.com racpv1
192.168.79.129 racpv2.localdomain.com racpv2

#Virtual
192.168.12.130 racvr1.localdomain.com racvr1
192.168.12.131 racvr2.localdomain.com racvr2

#Scan
#192.168.12.140 racsn.localdomain.com racsn
#192.168.12.150 racsn.localdomain.com racsn
#192.168.12.160 racsn.localdomain.com racsn

Add the new node's entries to /etc/hosts on all nodes :

#Public
192.168.12.128 racpb1.localdomain.com racpb1
192.168.12.129 racpb2.localdomain.com racpb2
192.168.12.127 racpb3.localdomain.com racpb3


#Private
192.168.79.128 racpv1.localdomain.com racpv1
192.168.79.129 racpv2.localdomain.com racpv2
192.168.79.127 racpv3.localdomain.com racpv3


#Virtual
192.168.12.130 racvr1.localdomain.com racvr1
192.168.12.131 racvr2.localdomain.com racvr2
192.168.12.132 racvr3.localdomain.com racvr3

#Scan
#192.168.12.140 racsn.localdomain.com racsn
#192.168.12.150 racsn.localdomain.com racsn
#192.168.12.160 racsn.localdomain.com racsn
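Before continuing, it is worth checking that each new entry maps the hostname to the intended IP on every node; check_hosts_entry below is a hypothetical helper:

```shell
# check_hosts_entry FILE HOSTNAME EXPECTED_IP
# Succeeds if FILE maps HOSTNAME to EXPECTED_IP, hosts(5) format.
check_hosts_entry() {
    awk -v h="$2" -v ip="$3" \
        '$1 == ip { for (i = 2; i <= NF; i++) if ($i == h) found = 1 }
         END { exit !found }' "$1"
}

# Example against the new entries (point it at /etc/hosts on a real node):
cat > /tmp/hosts.test <<'EOF'
192.168.12.127 racpb3.localdomain.com racpb3
192.168.79.127 racpv3.localdomain.com racpv3
192.168.12.132 racvr3.localdomain.com racvr3
EOF
check_hosts_entry /tmp/hosts.test racpb3 192.168.12.127 && echo OK   # prints OK
```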

Create the groups and user on the new node with the same group and user IDs as on the existing nodes :

groups : oinstall (primary group), dba (secondary group)

#groupadd -g 54321 oinstall
#groupadd -g 54322 dba
#useradd -u 54323 -g oinstall -G dba oracle
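A quick way to confirm the IDs landed correctly on the new node is to compare the numeric values; check_uid and check_gid below are hypothetical helpers, and the IDs must match those on the existing nodes:

```shell
# Verify the user and groups carry the intended numeric IDs.
check_uid() { [ "$(id -u "$1" 2>/dev/null)" = "$2" ]; }
check_gid() { [ "$(getent group "$1" 2>/dev/null | cut -d: -f3)" = "$2" ]; }

check_uid oracle 54323 && check_gid oinstall 54321 && check_gid dba 54322 \
    && echo "IDs match existing nodes" \
    || echo "ID mismatch - fix before installing"
```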

ASM library Installation and Configuration :

[root@racpb3 Desktop]# rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm --nodeps --force
[root@racpb3 Desktop]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm --nodeps --force

Configure and check ASM disks (as root) :

[root@racpb3 Panasonic DBA]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@racpb3 Panasonic DBA]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size 
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@racpb3 Panasonic DBA]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA"
[root@racpb3 Panasonic DBA]# oracleasm listdisks
DATA

Configure SSH for oracle user on all nodes :

Copy the sshUserSetup.sh script to the new node (racpb3) and execute it.

[root@racpb1 deinstall]# cd /u01/app/12.1.0/grid/deinstall

[root@racpb1 deinstall]# scp sshUserSetup.sh oracle@racpb3:/home/oracle
oracle@racpb3's password: 
sshUserSetup.sh 100% 32KB 31.6KB/s 00:00

Run sshUserSetup.sh

[oracle@racpb3 ~]$ sh sshUserSetup.sh -hosts "racpb3" -user oracle
The output of this script is also logged into /tmp/sshUserSetup_2018-12-27-03-51-12.log
Hosts are racpb3
user is oracle
Platform:- Linux 
Checking if the remote hosts are reachable
PING racpb3.localdomain.com (192.168.12.127) 56(84) bytes of data.
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=4 ttl=64 time=0.046 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=5 ttl=64 time=0.075 ms

--- racpb3.localdomain.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.032/0.046/0.075/0.016 ms
Remote host reachability check succeeded.
The following hosts are reachable: racpb3.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racpb3
numhosts 1
The script will setup SSH connectivity from the host racpb3.localdomain.com to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host racpb3.localdomain.com
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
Please specify if you want to specify a passphrase for the private key this script will create for the local host. Passphrase is used to encrypt the private key and makes SSH much more secure. Type 'yes' or 'no' and then press enter. In case you press 'yes', you would need to enter the passphrase whenever the script executes ssh or scp. 
The estimated number of times the user would be prompted for a passphrase is 2. In addition, if the private-public files are also newly created, the user would have to specify the passphrase on one additional occasion. 
Enter 'yes' or 'no'.
yes

The user chose yes
The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, press 'no' and enter. If you press 'no', the script will not attempt to create any new public/private key pairs. If you press 'yes', the script will remove the old private/public key files existing and create new ones prompting the user to enter the passphrase. If you enter 'yes', any previous SSH user setups would be reset. If you press 'change', the script will associate a new passphrase with the old keys.
Press 'yes', 'no' or 'change'
yes
The user chose yes
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Generating public/private rsa key pair.
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
2b:88:2f:d5:38:5d:51:6a:2d:1e:a6:e0:51:a2:7e:c7 oracle@racpb3.localdomain.com
The key's randomart image is:
+--[ RSA 1024]----+
| . . .. |
| . o .o |
| . o *.. |
| . . + =.o |
| . o+E.S |
| o+oo . |
| ..... . |
| .. . |
| .. |
+-----------------+
Creating .ssh directory and setting permissions on remote host racpb3
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racpb3. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racpb3.
Warning: Permanently added 'racpb3,192.168.12.127' (RSA) to the list of known hosts.
oracle@racpb3's password: 
Done with creating .ssh directory and setting permissions on remote host racpb3.
Copying local host public key to the remote host racpb3
The user may be prompted for a password or passphrase here since the script would be using SCP for host racpb3.
oracle@racpb3's password: 
Done copying local host public key to the remote host racpb3
The script will run SSH on the remote machine racpb3. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Agent admitted failure to sign using the key.
oracle@racpb3's password: 
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racpb3:--
Running /usr/bin/ssh -x -l oracle racpb3 date to verify SSH connectivity has been setup from local host to racpb3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine racpb3. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Agent admitted failure to sign using the key.
oracle@racpb3's password: 
Thu Dec 27 03:52:59 IST 2018
-----------------------------------------------------------------------
SSH verification complete.

Copy the authorized_keys file to all nodes in the cluster environment :

[oracle@racpb3 .ssh]$ scp authorized_keys oracle@racpb1:/home/oracle/
oracle@racpb1's password:
authorized_keys 100% 478 0.5KB/s 00:00

[oracle@racpb1 ~]$ cat authorized_keys >> .ssh/authorized_keys
[oracle@racpb1 ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqucDimj845F+iE2cWRtVf4qBP/YYqtMcUgpuORdlWuEsRN3wygAlrLszJ9h3gzlIfORUYGLT01A4lj0ZmQtxxfNjKW74feK25ieYkeQUsADLNPvmsdXwpNSCZ4IerLpp74sm0mzFdAZC8o2hAPhvJwiCU85naxTDo/NSNGDMOf6eCRAE8fSb4rICrC+FNdC+TlagyhM+K1Jxt2MmFpKgauzjCpQcGqkCo6DsD59nppf7fAXUUovL7Ykh1AVufYdEhFGFS6lffhV90qrsHEmOKVodek8p16I9lemeJRNaXdM1QT4UcmBLlC+qWF6WMmh9PYMmq3+3cUca74G1U6gF+w== oracle@racpb1.localdomain.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA43diGL6I8oEnOa+WQc0gvIj0KkaNYIT06UwqvWhyfibwCUATBdj0aSQiSIGmiy95+wDiyfWJDKFAR60Bb8ZG5UzgP/XPhoZKcJKYxVMtX2zppeVQjoyXR2mwyElcT5xLR/PNhUMnDHbWPPp9kK6flyMGrpYjxbwh55FzC6MQ/jw19u9VVLDsNtt4q8Zv/LZF7jwwPAn4YXT2WFVnY6Td709C05RD7GVRA35wsVCXiAoQbl5EsQ6/4Hdz9IKEcDSDcD6EnGhaLARnSy2ose1CL/Zk/5/iyMldhKxA8m26ZuVu7G1bZqKIbnUfUWnyx48opSbANLn2fTzPaIIO2Cwd1w== oracle@racpb2.localdomain.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0MWOu3g/Pfw729Fn7ruHif5eJxQDTb6km1SbeUfIZTRPrpA62e9fu6TVDrmVupAqlrswKJU2HueSPk7uidgS2zbLC9BsrBx2O/P/GBO+MgIYVjpzWd0uCJ9yjCAD0ciWosdBjafxVNsO/hZ08Wqc49BqJ9fZV8IbOD9xnYQOJls= oracle@racpb3.localdomain.com

[oracle@racpb1 .ssh]$ scp authorized_keys racpb2:/home/oracle/.ssh/
authorized_keys 100% 1300 1.3KB/s 00:00

[oracle@racpb1 .ssh]$ scp authorized_keys racpb3:/home/oracle/.ssh/

authorized_keys 100% 1300 1.3KB/s 00:00
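
The manual scp steps above can also be scripted. A minimal sketch, assuming the node list below matches your environment (the `DRY_RUN` guard is an addition for safety, not part of the original procedure):

```shell
#!/bin/sh
# Push the merged authorized_keys from the current node to the rest of
# the cluster. With DRY_RUN=1 (the default here) the commands are only
# printed; set DRY_RUN=0 to actually copy.
NODES="racpb2 racpb3"      # assumption: adjust to your node names
DRY_RUN=${DRY_RUN:-1}

for node in $NODES; do
  cmd="scp -p $HOME/.ssh/authorized_keys oracle@${node}:/home/oracle/.ssh/"
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $cmd"
  else
    $cmd
  fi
done
```

The `-p` flag preserves the 644 permissions set earlier so sshd does not reject the file.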

Check Time Synchronization :

Running the date command over SSH confirms both that passwordless SSH is configured correctly (no password prompt should appear) and that the node clocks are in sync. Run the commands below from all nodes.

Example for 1st node :

[oracle@racpb1 .ssh]$ ssh racpb1 date
Thu Dec 27 04:08:16 IST 2018
[oracle@racpb1 .ssh]$ ssh racpb2 date
Thu Dec 27 04:08:19 IST 2018
[oracle@racpb1 .ssh]$ ssh racpb3 date
Thu Dec 27 04:08:23 IST 2018
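
The three manual checks can be wrapped in a loop. A sketch, assuming the node names below; `BatchMode=yes` is an addition that makes ssh fail instead of prompting for a password, which is exactly the failure we want to surface:

```shell
#!/bin/sh
# Verify passwordless SSH to every cluster node. BatchMode=yes forces a
# failure rather than an interactive password prompt; ConnectTimeout
# keeps the loop from hanging on an unreachable host.
for node in racpb1 racpb2 racpb3; do
  if d=$(ssh -o BatchMode=yes -o ConnectTimeout=3 "$node" date 2>/dev/null); then
    echo "$node OK: $d"
  else
    echo "$node FAILED: passwordless SSH is not working"
  fi
done
```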

Verify peer compatibility using the Cluster Verification Utility (cluvfy) :-

[oracle@racpb1 bin]$ ./cluvfy comp peer -n racpb3 -refnode racpb1 -r 11gr2

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 7.8085GB (8187808.0KB) 7.8085GB (8187808.0KB) matched
Physical memory <null>

Compatibility check: Available memory [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 7.80856GB (8187808.0KB) 7.8085GB (8187808.0KB) matched
Available memory <null>

Compatibility check: Swap space [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 5.8594GB (6143996.0KB) 5.8594GB (6143996.0KB) matched
Swap space <null>

Compatibility check: Free disk space for "/u01/app/12.1.0/grid" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 35.5566GB (3.728384E7KB) 28.9248GB (3.0329856E7KB) matched
Free disk space <null>

Compatibility check: Free disk space for "/tmp" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 6.9678GB (7306240.0KB) 8.1494GB (8545280.0KB) matched
Free disk space <null>

Compatibility check: User existence for "oracle" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 oracle(54321) oracle(54321) matched
User existence for "oracle" check passed

Compatibility check: Group existence for "oinstall" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 oinstall(54321) oinstall(54321) matched
Group existence for "oinstall" check passed

Compatibility check: Group existence for "dba" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 dba(54322) dba(54322) matched
Group existence for "dba" check passed

Compatibility check: Group membership for "oracle" in "oinstall (Primary)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 yes yes matched
Group membership for "oracle" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "oracle" in "dba" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 yes yes matched
Group membership for "oracle" in "dba" check passed

Compatibility check: Run level [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 5 5 matched
Run level check passed

Compatibility check: System architecture [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 x86_64 x86_64 matched
System architecture check passed

Compatibility check: Kernel version [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 2.6.39-400.17.1.el6uek.x86_64 2.6.39-400.17.1.el6uek.x86_64 matched
Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 256 256 matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 32000 32000 matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 100 100 matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 142 142 matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4294967295 4294967295 matched
Kernel param "shmmax" check passed

Compatibility check: Kernel param "shmmni" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4096 4096 matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 0 0 matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 6815744 6815744 matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 9000 65500 9000 65500 matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4194304 4194304 matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4194304 4194304 matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 262144 262144 matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 1048576 1048576 matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 1048576 1048576 matched
Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "binutils" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2-5.36.el6 matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "compat-libcap1" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 compat-libcap1-1.10-1 compat-libcap1-1.10-1 matched
Package existence for "compat-libcap1" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 compat-libstdc++-33-3.2.3-69.el6 (x86_64) compat-libstdc++-33-3.2.3-69.el6 (x86_64) matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686) libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686) matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libstdc++-4.4.7-3.el6 (x86_64) libstdc++-4.4.7-3.el6 (x86_64) matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libstdc++-devel-4.4.7-3.el6 (x86_64) libstdc++-devel-4.4.7-3.el6 (x86_64) matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 sysstat-9.0.4-20.el6 sysstat-9.0.4-20.el6 matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "gcc" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 gcc-4.4.7-3.el6 gcc-4.4.7-3.el6 matched
Package existence for "gcc" check passed

Compatibility check: Package existence for "gcc-c++" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.7-3.el6 matched
Package existence for "gcc-c++" check passed

Compatibility check: Package existence for "ksh" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 ksh-20100621-19.el6 ksh-20100621-19.el6 matched
Package existence for "ksh" check passed

Compatibility check: Package existence for "make" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 make-3.81-20.el6 make-3.81-20.el6 matched
Package existence for "make" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686) glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686) matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 glibc-devel-2.12-1.107.el6 (x86_64) glibc-devel-2.12-1.107.el6 (x86_64) matched
Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libaio-0.3.107-10.el6 (x86_64) libaio-0.3.107-10.el6 (x86_64) matched
Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libaio-devel-0.3.107-10.el6 (x86_64) libaio-devel-0.3.107-10.el6 (x86_64) matched
Package existence for "libaio-devel (x86_64)" check passed

Verification of peer compatibility was successful.
Checks passed for the following node(s):
racpb3

Verify new node pre-check :

[oracle@racpb1 bin]$ ./cluvfy stage -pre nodeadd -n racpb3 -fixup -verbose > /home/oracle/cluvfy_pre_nodeadd.txt

The node-addition pre-check above must pass before nodes can be added to the existing two-node RAC environment. The cluvfy_pre_nodeadd output file is attached here.
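
A quick way to scan the saved report for problems before proceeding. This is a sketch; the keyword pattern is an assumption about cluvfy's wording, not an exhaustive match:

```shell
#!/bin/sh
# check_cluvfy REPORT -- print any lines that look like failures,
# or a pass message if none are found.
check_cluvfy() {
  if grep -iE 'failed|not met|error' "$1"; then
    echo "review the failures above before running addnode.sh"
  else
    echo "no failures found in $1"
  fi
}

report=/home/oracle/cluvfy_pre_nodeadd.txt
[ -r "$report" ] && check_cluvfy "$report"
```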

From racpb1 node,

For GRID_HOME :

[oracle@racpb1 ~]$ . .bash_profile
[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@racpb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@racpb1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racpb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racvr3}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7957 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[racpb3]
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[racpb3]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.

As root user, execute orainstRoot.sh and root.sh on racpb3 :

[root@racpb3 ]# sh orainstRoot.sh 
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@racpb3 grid]# sh root.sh
Check /u01/app/12.1.0/grid/install/root_racpb3.localdomain.com_2018-12-27_21-52-22.log for the output of root script

The root script output log is attached here.

Check Clusterware status :-

[root@racpb3 bin]# ./crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racpb3 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host 
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racpb1 
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racpb1 
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb3 
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb2 
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racpb1 
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racpb2 
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE racpb2 
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racpb1 
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racpb2 
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racpb1 
ora.orcl11g.db ora....se.type 0/2 0/1 ONLINE ONLINE 
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racpb1 
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE racpb1 
ora.racpb1.ons application 0/3 0/0 ONLINE ONLINE racpb1 
ora.racpb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb1 
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racpb2 
ora....B2.lsnr application 0/5 0/0 ONLINE ONLINE racpb2 
ora.racpb2.ons application 0/3 0/0 ONLINE ONLINE racpb2 
ora.racpb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb2 
ora....SM3.asm application 0/5 0/0 ONLINE ONLINE racpb3 
ora....B3.lsnr application 0/5 0/0 ONLINE ONLINE racpb3 
ora.racpb3.ons application 0/3 0/0 ONLINE ONLINE racpb3 
ora.racpb3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb3 
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1 
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb3 
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb2


For ORACLE_HOME :

[oracle@racpb1 addnode]$ export ORACLE_SID=orcl11g
[oracle@racpb1 addnode]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@racpb1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racpb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racvr3}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7937 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed


Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-12-28_12-34-24AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'racpb2'. Refer to '/u01/app/oraInventory/logs/addNodeActions2018-12-28_12-34-24AM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes: 
/u01/app/oracle/product/12.1.0/db_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 CLUSTER_NODES=racpb1,racpb2,racpb3 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>. 
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/oracle/product/12.1.0/db_1 was unsuccessful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0/db_1/root.sh

Execute /u01/app/oracle/product/12.1.0/db_1/root.sh on the following nodes: 
[racpb3]

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.

Run the recommended 'UpdateNodeList' command below on the node(s) where it failed, as indicated in the output above :

[oracle@racpb3 db_1]$ /u01/app/oracle/product/12.1.0/db_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 CLUSTER_NODES=racpb1,racpb2,racpb3 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=3
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5994 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Execute root.sh on the new node (racpb3) as root user :

[root@racpb3 Desktop]# sh /u01/app/oracle/product/12.1.0/db_1/root.sh 
Check /u01/app/oracle/product/12.1.0/db_1/install/root_racpb3.localdomain.com_2018-12-28_00-57-10.log for the output of root script
[root@racpb3 Desktop]# tail -f /u01/app/oracle/product/12.1.0/db_1/install/root_racpb3.localdomain.com_2018-12-28_00-57-10.log
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Set the environment on the new node (racpb3) :-

grid()
{
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/12.1.0/grid; export ORACLE_HOME
export ORACLE_SID=+ASM3
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
}

11g()
{
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_SID=orcl11g3
export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:.
export LD_LIBRARY_PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:/usr/lib:/lib
export LIBPATH
TNS_ADMIN=${ORACLE_HOME}/network/admin
export TNS_ADMIN
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH
}
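
The two functions above are meant to be called by name from a shell session after sourcing the profile. A self-contained demo of the pattern follows; the function bodies are trimmed copies of the profile above, and the database function is renamed from `11g` because a name starting with a digit only works in bash, not plain sh:

```shell
#!/bin/sh
# Trimmed copies of the profile functions, just enough to show how one
# session switches between the Grid and database environments.
grid_env() {
  export ORACLE_HOME=/u01/app/12.1.0/grid
  export ORACLE_SID=+ASM3
}
db_env() {   # the profile names this "11g"; renamed here for plain sh
  export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
  export ORACLE_SID=orcl11g3
}

grid_env; echo "now: $ORACLE_SID in $ORACLE_HOME"
db_env;   echo "now: $ORACLE_SID in $ORACLE_HOME"
```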

Check the database status and instances :-

[oracle@racpb3 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Only two instances are registered with Clusterware so far. Add the new instance using DBCA.

Adding Instance to Cluster Database :

Invoke dbca from node 1 (racpb1) :

[oracle@racpb1 ~]$ . .bash_profile 
[oracle@racpb1 ~]$ 11g
[oracle@racpb1 ~]$ dbca
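
Where no GUI is available, DBCA also has a silent mode for adding an instance. Shown here as a dry run (the echo only prints the command); the exact flags are an assumption based on DBCA's silent-mode options, so verify them with `dbca -help` before running:

```shell
#!/bin/sh
# Dry run: print the silent-mode DBCA command instead of executing it.
# Remove the echo to run it for real; dbca prompts for the SYS password
# when -sysDBAPassword is omitted.
echo dbca -silent -addInstance \
     -nodeList racpb3 \
     -gdbName orcl11g \
     -instanceName orcl11g3 \
     -sysDBAUserName sys
```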

Check Database status and configuration :

[oracle@racpb3 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2
Instance orcl11g3 is running on node racpb3


[oracle@racpb3 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2,orcl11g3
Configured nodes: racpb1,racpb2,racpb3
Database is administrator managed


Catch Me On: Hariprasath Rajaram
Telegram: https://t.me/joinchat/I_f4DkeGfZsxfzXxHD6gTg
LinkedIn: https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook: https://www.facebook.com/HariPrasathdba
FB Group: https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba