Step-by-Step One Node RAC: Applying PSU Patch on 12c Grid and DB Home

Description:-

We have already seen how to configure Oracle One Node RAC in 12cR1 and how to relocate the instance from one node to another. In this article, let us apply the July ’18 PSU patch (27967747) to the same environment.

For the Oracle One Node RAC configuration steps, please refer to the earlier article.

High-Level Steps for Applying the Patch:-

  • Current OPatch Version
  • Upgrade OPatch Utility
  • Prepare for Patching
  • Applying Patch
  • Patch Verification

Current OPatch Version:-

Step 1:- Check the current version of the OPatch tool in our environment

$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.

From the above output, the OPatch version is 12.1.0.1.3. But as per the patch README, the minimum OPatch utility version should be 12.2.0.1.12 or later to apply this patch. Oracle recommends that you use the latest released OPatch version for 12.2, which is available for download from My Oracle Support patch 6880880 by selecting the 12.2.0.1.0 release.

Upgrade OPatch Utility:-

Step 2:- Back up the existing OPatch folder

Back up the OPatch directory as the root user for GRID_HOME and as the oracle user for ORACLE_HOME (database) on both nodes of the cluster. If we try to take the backup as the oracle user in GRID_HOME, we will run into permission issues.

GRID_HOME:
$ su - root
$ cd /oradb/app/12.1.0.2/grid/
$ mv OPatch/ OPatch_bkp
$ unzip <PATH_TO_PATCH>/p6880880_122010_Linux-x86-64.zip -d .
$ chown -R oracle:oinstall OPatch
$ chmod -R 755 OPatch

ORACLE_HOME:
$ su - oracle
$ cd /oradb/app/oracle/product/12.1.0.2/db_1
$ mv OPatch/ OPatch_bkp
$ unzip <PATH_TO_PATCH>/p6880880_122010_Linux-x86-64.zip -d .
$ chmod -R 755 OPatch

Now, as the oracle user, verify the OPatch utility version on both homes.

GRID_HOME:-(Both Nodes)

$ export ORACLE_HOME=/oradb/app/12.1.0.2/grid
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.

ORACLE_HOME:-(Both Nodes)

$ export ORACLE_HOME=/oradb/app/oracle/product/12.1.0.2/db_1
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.

Prepare for Patching:-

Step 3:- Preparing Node 1 to apply the PSU Patch

Now, log in as the root user and set the environment variables, for example as shown below.
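
A minimal sketch, assuming the same Grid home path used throughout this article. The opatchauto -analyze dry run is optional: once the patch zip has been extracted (see Step 4), it performs the prerequisite checks without modifying either home.

$ su - root
$ export ORACLE_HOME=/oradb/app/12.1.0.2/grid
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ $ORACLE_HOME/OPatch/opatchauto apply <PATH_TO_PATCH>/27967747 -analyze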

Applying Patch:-

Step 4:- Navigate to the patch location and follow the steps below to apply the patch.

$ cd <PATH_TO_PATCH>
$ unzip p27967747_121020_Linux-x86-64.zip
$ cd 27967747
$ $ORACLE_HOME/OPatch/opatchauto apply ./

OPatchauto session is initiated at Wed Sep 26 02:39:52 2018

System initialization log file is /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2018-09-26_02-40-10AM.log.

Session log file is /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2018-09-26_02-41-29AM.log
The id for this session is WYWB

Executing OPatch prereq operations to verify patch applicability on home /oradb/app/12.1.0.2/grid

Executing OPatch prereq operations to verify patch applicability on home /oradb/app/oracle/product/12.1.0.2/db_1
Patch applicability verified successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Patch applicability verified successfully on home /oradb/app/12.1.0.2/grid

Verifying SQL patch applicability on home /oradb/app/oracle/product/12.1.0.2/db_1
SQL patch applicability verified successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Preparing to bring down database service on home /oradb/app/oracle/product/12.1.0.2/db_1

WARNING: The service ORCL.oracledbwr.com configured on orcl will not be switched as it is not configured to run on any other node(s).
Successfully prepared home /oradb/app/oracle/product/12.1.0.2/db_1 to bring down database service

Relocating RACOne home before patching on home /oradb/app/oracle/product/12.1.0.2/db_1
Relocated RACOne home before patching on home /oradb/app/oracle/product/12.1.0.2/db_1

Bringing down CRS service on home /oradb/app/12.1.0.2/grid
Prepatch operation log file location: /oradb/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_prodrac101_2018-09-26_02-49-04AM.log
CRS service brought down successfully on home /oradb/app/12.1.0.2/grid

Performing prepatch operation on home /oradb/app/oracle/product/12.1.0.2/db_1
Prepatch operation completed successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /oradb/app/oracle/product/12.1.0.2/db_1
Binary patch applied successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Performing postpatch operation on home /oradb/app/oracle/product/12.1.0.2/db_1
Postpatch operation completed successfully on home /oradb/app/oracle/product/12.1.0.2/db_1


Start applying binary patch on home /oradb/app/12.1.0.2/grid
Binary patch applied successfully on home /oradb/app/12.1.0.2/grid

Starting CRS service on home /oradb/app/12.1.0.2/grid
Postpatch operation log file location: /oradb/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_prodrac101_2018-09-26_03-42-46AM.log
CRS service started successfully on home /oradb/app/12.1.0.2/grid

Relocating back RACOne to home /oradb/app/oracle/product/12.1.0.2/db_1
Relocated back RACOne home successfully to home /oradb/app/oracle/product/12.1.0.2/db_1


Preparing home /oradb/app/oracle/product/12.1.0.2/db_1 after database service restarted
No step execution required.........


Trying to apply SQL patch on home /oradb/app/oracle/product/12.1.0.2/db_1
SQL patch applied successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:prodrac101
RAC Home:/oradb/app/oracle/product/12.1.0.2/db_1
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762277
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27547329
Log: /oradb/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_02-55-50AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762253
Log: /oradb/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_02-55-50AM_1.log


Host:prodrac101
CRS Home:/oradb/app/12.1.0.2/grid
Version:12.1.0.2.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/26983807
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27547329
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762253
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762277
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

OPatchauto session completed at Wed Sep 26 04:03:09 2018
Time taken to complete the session 83 minutes, 17 seconds

Patch Verification:-

Step 5:- Once the patch has been applied successfully, verify it in the database as shown below.

$ sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 27547329
Action : APPLY
Action Time : 26-SEP-2018 04:03:06
Description : DATABASE PATCH SET UPDATE 12.1.0.2.180717
Logfile : /oradb/app/oracle/cfgtoollogs/sqlpatch/27547329/22280349/27547329_apply_ORCL_2018Sep26_04_00_51.log
Status : SUCCESS

PL/SQL procedure successfully completed.
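
In addition to the database-level check, the patch inventory of each home can be verified at the OS level with opatch lspatches. A quick check, assuming the same home paths as earlier:

$ export ORACLE_HOME=/oradb/app/12.1.0.2/grid
$ $ORACLE_HOME/OPatch/opatch lspatches

$ export ORACLE_HOME=/oradb/app/oracle/product/12.1.0.2/db_1
$ $ORACLE_HOME/OPatch/opatch lspatches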

Similarly, follow the same steps to apply the patch on Node 2. Once both nodes are patched, the cluster-wide patch level can be cross-checked as shown below.
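
Assuming crsctl is run from the Grid home used in this article, both nodes should report the same cluster active patch level:

$ /oradb/app/12.1.0.2/grid/bin/crsctl query crs activeversion -f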

Catch Me On:- Hariprasath Rajaram

Telegram: https://t.me/joinchat/I_f4DhGF_Zifr9YZvvMkRg
LinkedIn: https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook: https://www.facebook.com/HariPrasathdba
FB Group: https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba


Oracle 12c RAC One Node Switchover

In the previous article we configured Oracle 12cR1 One Node RAC. Here, let us do some hands-on activities using the configured environment.

Description:-

As mentioned already, we have an Oracle 12cR1 One Node RAC database configured on nodes prodrac101 and prodrac102. Due to an OS maintenance activity, we need to stop the Oracle services on Node 1 and relocate them to Node 2, to minimize database downtime and ensure business continuity.

Let’s start the demo.

Below is the database configuration output.

$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /oradb/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL/PARAMETERFILE/spfile.278.985981865
Password file: +DBWR_DATA/ORCL/PASSWORD/pwdorcl.276.985981257
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCLPOOL
Disk Groups: DBWR_FRA,DBWR_DATA
Mount point paths:
Services: ORCL.oracledbwr.com
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: ORCL
Candidate servers:
OSDBA group: dba
OSOPER group: oper
Database instances:
Database is policy managed

Note down the server pool name under which the database is configured (ORCLPOOL in our case). Next, let us verify on which node the instance is running.

$ srvctl status database -d ORCL
Instance ORCL_1 is running on node prodrac101
Online relocation: INACTIVE

From the above output we can see that the instance is running on the first node. So, we will relocate the instance from Node 1 (prodrac101) to Node 2 (prodrac102).

Before we start the relocation process, make sure the server pools are configured properly. For example, below is the server pool configuration in our environment.

$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: ORCLPOOL
Importance: 0, Min: 0, Max: 1
Category: hub
Candidate server names:

From the database configuration, we already know that our instance runs under the “ORCLPOOL” server pool. In the above output we can see that the Max value of this server pool is 1, and we need to change it; otherwise, the relocation process will fail as below.

$ srvctl relocate database -d ORCL -n prodrac102 -w 5 -v
Online relocation failed, rolling back to original state
PRCD-1222 : Online relocation of database "ORCL" failed but database was restored to its original state
PRCR-1114 : Failed to relocate servers prodrac102 into server pool ora.ORCLPOOL
CRS-2598: Server pool 'ora.ORCLPOOL' is already at its maximum size of '1'

To avoid the above error, we need to increase the max value of the server pool as below (-l sets the minimum size, -u the maximum size, and -i the importance).

$ srvctl modify srvpool -g ORCLPOOL -l 1 -u 3 -i 999

Once the max value is increased, verify the configuration again.

$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: ORCLPOOL
Importance: 999, Min: 1, Max: 3
Category: hub
Candidate server names:

Now, we can start the relocation process.

$ srvctl relocate database -d ORCL -n prodrac102 -w 5 -v
Configuration updated to two instances
Instance ORCL_2 started
Services relocated
Waiting for up to 5 minutes for instance ORCL_1 to stop ...
Instance ORCL_1 stopped
Configuration updated to one instance

Now, verify the database configuration and check on which node the instance is running.

$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /oradb/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL/PARAMETERFILE/spfile.278.985981865
Password file: +DBWR_DATA/ORCL/PASSWORD/pwdorcl.276.985981257
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCLPOOL
Disk Groups: DBWR_FRA,DBWR_DATA
Mount point paths:
Services: ORCL.oracledbwr.com
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: ORCL
Candidate servers:
OSDBA group: dba
OSOPER group: oper
Database instances:
Database is policy managed
$ srvctl status database -d ORCL
Instance ORCL_2 is running on node prodrac102
Online relocation: INACTIVE

Now, we are sure that the instance has been relocated from Node 1 (prodrac101) to Node 2 (prodrac102).
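
Once the maintenance on Node 1 is complete, the instance can be relocated back with the same command, this time pointing -n at the original node:

$ srvctl relocate database -d ORCL -n prodrac101 -w 5 -v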


Catch Me On:- Hariprasath Rajaram

Telegram: https://t.me/joinchat/I_f4DhGF_Zifr9YZvvMkRg
LinkedIn: https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook: https://www.facebook.com/HariPrasathdba
FB Group: https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba