Oracle 12c: 2-Node RAC to Single Instance Standby Database Setup

Steps for creating a single instance standby database from a RAC primary database:

1. Change the archive log mode:

Note that ALTER DATABASE ARCHIVELOG must be issued with the database mounted but not open; a RAC-aware sketch of the full sequence follows the listing below.
$ sqlplus / as sysdba

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode                  Archive Mode
Automatic archival                 Enabled
Archive destination                +DG01
Oldest online log sequence         299300
Next log sequence to archive       299305
Current log sequence               299305
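
Since the command above requires the database to be mounted (and, on RAC, only one instance mounted), the end-to-end sequence typically looks like this minimal sketch, assuming the database is registered with srvctl as prod (adjust names to your environment):

$ srvctl stop database -d prod        # stop all RAC instances cleanly
$ sqlplus / as sysdba                 # on one node only
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> exit
$ srvctl start database -d prod       # bring the remaining instances back up

From 11g Release 2 onward there is no need to set cluster_database=false for this step, as long as only one instance has the database mounted.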

2. Enable force logging mode:

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
NO

SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES

3. Configure the primary initialization parameters:

SQL> alter system set log_archive_config='DG_CONFIG=(prod,proddr)' SCOPE=both sid='*';

System altered.

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/prod/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod' SCOPE=both sid='*';

System altered.

SQL> alter system set log_archive_dest_2='SERVICE=proddr LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=proddr' SCOPE=both sid='*';

System altered.

SQL> alter system set fal_server=proddr SCOPE=both sid='*';

System altered.

SQL> alter system set fal_client=prod SCOPE=both sid='*';

System altered.

SQL> alter system set standby_file_management=auto SCOPE=both sid='*';

System altered.

SQL> alter system set REMOTE_LOGIN_PASSWORDFILE=exclusive scope=spfile;

System altered.
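
To confirm the settings before moving on, the Data Guard related parameters can be checked from SQL*Plus (a quick verification, not part of the original listing):

SQL> show parameter log_archive_config
SQL> show parameter log_archive_dest
SQL> show parameter fal
SQL> show parameter standby_file_management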

4. Standby Listener Configuration:

[oracle@proddr01 ]$ export ORACLE_SID=prod
[oracle@proddr01 ]$ export ORACLE_HOME=/oracle/app/oracle/product/12.1.0/dbhome_1
[oracle@proddr01 admin]$ cd $ORACLE_HOME/network/admin
[oracle@proddr01 admin]$ cat listener.ora

# listener.ora Network Configuration File: /oracle/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora

# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /oracle/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = prod)
    )
  )

LISTENER_PRODDR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = proddr01)(PORT = 1521))
    )
  )

ADR_BASE_LISTENER = /oracle/app/oracle

5. TNS connection string configuration:

The primary and standby tnsnames.ora entries should be present on both servers:

[oracle@proddr01 admin]$ cd $ORACLE_HOME/network/admin
[oracle@proddr01 admin]$ cat tnsnames.ora

PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = prod1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = prod1)
    )
  )

PRODDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = proddr01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = prod)
    )
  )
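
Note that the PROD alias pins connections to instance prod1 through its SID, which suits the one-time active duplicate. For ordinary client access a RAC database would more typically be addressed through the SCAN and a service name, along these lines (illustrative only; the host prod-scan is an assumed SCAN name, not from the original setup):

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prod-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prod)
    )
  )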

6. Create the required directories on the standby server:

[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/ctrl
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/data
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/logs
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/arch
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/admin/proddr/adump
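
The same directories can be created in one command with brace expansion (a small convenience, assuming a bash shell):

[oracle@proddr01 ~]$ mkdir -p /oracle/app/oracle/oradata/proddr/{ctrl,data,logs,arch}
[oracle@proddr01 ~]$ mkdir -p /oracle/app/oracle/admin/proddr/adump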

7. Start the standby listener:

[oracle@proddr01 admin]$ lsnrctl start LISTENER_PRODDR

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 28-JAN-2019 14:05:49

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Starting listener to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=proddr01.localdomain.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER_PRODDR
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 03-DEC-2018 14:09:08
Uptime 55 days 23 hr. 56 min. 40 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File /oracle/app/oracle/diag/tnslsnr/proddr01/listener_proddr/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=proddr01)(PORT=1521)))
Services Summary...
Service "proddr" has 1 instance(s).
Instance "proddr", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

8. Copy the password file and parameter file to the standby server:

  • After copying the pfile, keep only the db_name parameter in it:

[oracle@prod1 ~]$ cd $ORACLE_HOME/dbs
[oracle@prod1 dbs]$ scp initprod.ora orapwprod oracle@proddr01:/oracle/app/oracle/product/12.1.0/dbhome_1/dbs
oracle@proddr01's password:
initprod.ora  100% 1536     1.5KB/s   00:00
orapwprod     100% 1536     1.5KB/s   00:00
[oracle@proddr01 dbs]$ cat initprod.ora

db_name='prod'

9. Check connectivity between the primary and standby:

[oracle@proddr01 ~]$ tnsping prod      [on both servers]

[oracle@proddr01 ~]$ tnsping proddr    [on both servers]
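
Because the active duplicate authenticates SYS over SQL*Net using the copied password file, it is also worth confirming that a remote SYSDBA login succeeds from the standby host before invoking RMAN (a sanity check, not part of the original post; the proddr connection relies on the static listener registration and will report an idle instance until the next step):

[oracle@proddr01 ~]$ sqlplus sys/****@prod as sysdba
[oracle@proddr01 ~]$ sqlplus sys/****@proddr as sysdba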

10. Create the standby database:

Start the standby instance in NOMOUNT stage:

[oracle@proddr01 ]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Jan 29 01:12:25 2019

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.

Total System Global Area  217157632 bytes
Fixed Size                  2211928 bytes
Variable Size             159387560 bytes
Database Buffers           50331648 bytes
Redo Buffers                5226496 bytes

11. Connect RMAN to create the standby database.

cluster_database is set to FALSE in the duplicate because the standby runs as a single instance.

[oracle@proddr01 ]$ rman target sys/****@prod auxiliary sys/****@proddr
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Jan 27 16:15:10 2019 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: PROD (DBID=1459429229)
connected to auxiliary database: PROD (not mounted)

RMAN> run
{
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'prod','proddr'
set db_file_name_convert='+DG01/prod/datafile','/oradata1/proddr/data' 
set db_unique_name='proddr'
set cluster_database='false'
set log_file_name_convert='+DG01/prod/onlinelog','/oradata1/proddr/logs' 
set control_files='/oracle/app/oracle/oradata/proddr/ctrl/control.ctl'
set fal_client='proddr'
set fal_server='prod'
set audit_file_dest='/oracle/app/oracle/admin/proddr/adump'
set log_archive_config='dg_config=(proddr,prod)'
set log_archive_dest_1='location=/oradata1/proddr/arch'
set log_archive_dest_2='service=prod async valid_for=(online_logfiles,primary_role) db_unique_name=prod'
set sga_target='50G'
set sga_max_size='50G'
set undo_tablespace='UNDOTBS1'
nofilenamecheck;
}

using target database control file instead of recovery catalog
allocated channel: prmy1
channel prmy1: SID=42 device type=DISK
 
allocated channel: prmy2
channel prmy2: SID=36 device type=DISK
 
allocated channel: prmy3
channel prmy3: SID=45 device type=DISK

allocated channel: prmy4
channel prmy4: SID=45 device type=DISK
 
allocated channel: stby
channel stby: SID=20 device type=DISK
 
Starting Duplicate Db at 28-JAN-19
.
.
.
.
.
Finished Duplicate Db at 28-JAN-19
released channel: prmy1
released channel: prmy2
released channel: prmy3
released channel: prmy4
released channel: stby
RMAN>

12. Enable managed recovery on the standby:

[oracle@proddr01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jan 28 10:36:39 2019

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter database recover managed standby database disconnect from session;

Database altered.
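
Whether the managed recovery process (MRP) is running can be confirmed on the standby (a standard check using V$MANAGED_STANDBY):

SQL> select process, status, thread#, sequence#
     from v$managed_standby
     where process like 'MRP%';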

13. Verify standby synchronization:

SQL> SELECT ARCH.THREAD# "Thread",
            ARCH.SEQUENCE# "Last Sequence Received",
            APPL.SEQUENCE# "Last Sequence Applied",
            (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
       FROM (SELECT THREAD#, SEQUENCE#
               FROM V$ARCHIVED_LOG
              WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME)
                                                FROM V$ARCHIVED_LOG
                                               GROUP BY THREAD#)) ARCH,
            (SELECT THREAD#, SEQUENCE#
               FROM V$LOG_HISTORY
              WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME)
                                                FROM V$LOG_HISTORY
                                               GROUP BY THREAD#)) APPL
      WHERE ARCH.THREAD# = APPL.THREAD#
      ORDER BY 1;

Thread     Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- -----------
1          299314                 299314                0
2          149803                 149803                0
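
Apply and transport lag can also be read directly from V$DATAGUARD_STATS on the standby (an additional check, not in the original post):

SQL> select name, value, time_computed
     from v$dataguard_stats
     where name in ('apply lag','transport lag');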

 


Step by Step: Deleting a Node in an Oracle RAC (12c Release 1) Environment

Steps for deleting an instance from the cluster database:

Invoke dbca from node 1 (racpb1) and delete the instance running on the node being removed (a silent-mode sketch follows the command):

[oracle@racpb1 ~]$ . .bash_profile 
[oracle@racpb1 ~]$ 
[oracle@racpb1 ~]$ dbca
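
dbca opens its interactive GUI here; the instance deletion can equally be scripted in silent mode, roughly as follows (a sketch based on the documented dbca -deleteInstance syntax; the names assume instance orcl11g3 on node racpb3):

[oracle@racpb1 ~]$ dbca -silent -deleteInstance -nodeList racpb3 \
      -gdbName orcl11g.localdomain.com -instanceName orcl11g3 \
      -sysDBAUserName sys -sysDBAPassword ****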

Check which instances are now running:

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Verify that the instance has been removed from the OCR:

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
Database is administrator managed

Remove the Oracle RAC database home:

Disable and stop the listener on the node being deleted:

[oracle@racpb3 ~]$ srvctl status listener -l LISTENER
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racpb3,racpb2,racpb1
[oracle@racpb3 ~]$ srvctl disable listener -l LISTENER -n racpb3
[oracle@racpb3 ~]$ srvctl stop listener -l LISTENER -n racpb3

Update the inventory on the node being deleted (racpb3):

[oracle@racpb3 ~]$ export ORACLE_SID=orcl11g3
[oracle@racpb3 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racpb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 "CLUSTER_NODES={racpb3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5869 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Deinstall the ORACLE_HOME:

Specify the -local flag so that only the local node's software is removed.

[oracle@racpb3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/12.1.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.1.0/grid
The following nodes are part of this cluster: racpb3,racpb2,racpb1
Checking for sufficient temp space availability on node(s) : 'racpb3'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-28_11-36-29-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2018-12-28_11-36-31-PM.log
Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []: orcl11g
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check7641.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.1.0/grid
The following nodes are part of this cluster: racpb3,racpb2,racpb1
The cluster node(s) on which the Oracle home deinstallation will be performed are:racpb3
Oracle Home selected for deinstall is: /u01/app/oracle/product/12.1.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-28_11-37-08-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-28_11-37-08-PM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean7641.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2018-12-28_11-27-37PM/response/deinstall_2018-12-28_11-36-19-PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CLUSTER_NODES to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-28_11-27-37PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/12.1.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/12.1.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/12.1.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-28_11-27-37PM' on node 'racpb3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/12.1.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############

Update the inventory on the remaining nodes:

[oracle@racpb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 "CLUSTER_NODES={racpb1,racpb2}" 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Remove the GRID_HOME:

Check the pinned status of the nodes:

[oracle@racpb1 ~]$ olsnodes -s -t
racpb1 Active Unpinned
racpb2 Active Unpinned
racpb3 Active Unpinned

If a node is pinned, run crsctl unpin css as root to unpin it before deconfiguring the clusterware stack, as shown below.
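
For example (run as root from a node that remains in the cluster; racpb3 assumed to be the pinned node):

[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl unpin css -n racpb3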

Deconfigure the Clusterware stack on the node being deleted (racpb3):

[root@racpb3 ~]# /u01/app/12.1.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.12.0/255.255.255.0/eth0, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb1
VIP Name: racvr1
VIP IPv4 Address: 192.168.12.130
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb2
VIP Name: racvr2
VIP IPv4 Address: 192.168.12.131
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb3
VIP Name: racvr3
VIP IPv4 Address: 192.168.12.132
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes: 
ONS is individually disabled on nodes: 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racpb3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racpb3'
CRS-2677: Stop of 'ora.DATA.dg' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racpb3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racpb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racpb3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racpb3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.storage' on 'racpb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.storage' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.ctssd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racpb3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racpb3'
CRS-2677: Stop of 'ora.cssd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racpb3'
CRS-2677: Stop of 'ora.gipcd' on 'racpb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racpb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/12/29 00:13:32 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2018/12/29 00:14:03 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2018/12/29 00:14:05 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

Delete the node from the clusterware configuration (run as root from one of the remaining nodes):

[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl delete node -n racpb3
CRS-4661: Node racpb3 successfully deleted.

Check the Clusterware status:

[oracle@racpb1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host 
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racpb1 
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racpb1 
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb2 
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racpb1 
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racpb1 
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE racpb2 
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racpb1 
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racpb2 
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racpb1 
ora.orcl11g.db ora....se.type 0/2 0/1 ONLINE ONLINE racpb1 
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racpb1 
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE racpb1 
ora.racpb1.ons application 0/3 0/0 ONLINE ONLINE racpb1 
ora.racpb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb1 
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racpb2 
ora....B2.lsnr application 0/5 0/0 ONLINE ONLINE racpb2 
ora.racpb2.ons application 0/3 0/0 ONLINE ONLINE racpb2 
ora.racpb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1 
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1
[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racpb1 ~]$ olsnodes -s -t
racpb1 Active Unpinned
racpb2 Active Unpinned

Update the inventory on the deleted node for the Grid home:

[oracle@racpb3 ~]$ grid    # presumably an alias that sets the Grid Infrastructure environment
[oracle@racpb3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racpb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb3}" CRS=TRUE -local 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5980 MB Passed
The inventory pointer is located at /etc/oraInst.loc


Deinstall the GRID_HOME:

[oracle@racpb3 ~]$ cd /u01/app/12.1.0/grid/deinstall  
[oracle@racpb3 deinstall]$ ./deinstall -local  

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc_2018-12-28_08-35-48PM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-28_08-35-48-PM.log
Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all.
[ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2018-12-28_08-35-48-PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
ASM was not detected in the Oracle Home
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_2018-12-28_08-35-48-PM.log
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The following nodes are part of this cluster: null
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-35-46-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-35-46-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-28_08-36-48-PM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2018-12-28_08-36-48-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-28_08-36-48-PM.log
De-configuring Oracle Restart enabled listener(s): ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
De-configuring listener: ASMNET1LSNR_ASM
Stopping listener: ASMNET1LSNR_ASM
Deleting listener: ASMNET1LSNR_ASM
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: MGMTLSNR
Stopping listener: MGMTLSNR
Deleting listener: MGMTLSNR
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER
Stopping listener: LISTENER
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN3
Stopping listener: LISTENER_SCAN3
Deleting listener: LISTENER_SCAN3
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN2
Stopping listener: LISTENER_SCAN2
Deleting listener: LISTENER_SCAN2
Listener deleted successfully.
Listener de-configured successfully
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Deleting listener: LISTENER_SCAN1
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully. 
Network Configuration clean config END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Following Oracle Restart enabled listener(s) were de-configured successfully: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Restart is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2018-12-28_08-33-16PM/response/deinstall2018-12-28_08-33-16PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deinstall2018-12-28_08-33-16PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-33-16PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-28_08-33-16PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/12.1.0/grid' on the local node : Succeeded

Delete directory '/u01/app/oraInventory' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-29_00-52-55AM' on node 'racpb3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/12.1.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############

Update the inventory on the remaining nodes:

[oracle@racpb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb1,racpb2}" CRS=TRUE 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5997 MB Passed
The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.

Verify the integrity of the cluster after the node has been removed:

[oracle@racpb1 ~]$ cluvfy stage -post nodedel -n racpb3

Performing post-checks for node removal

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Node removal check passed

Post-check for node removal was successful.

 
