Oracle Trace File Analyzer Tips And Tricks


Trace File Analyzer (TFA) is a utility which can be installed on database nodes, either standalone servers or cluster nodes.

To collect diagnostics from, say, an 8-node cluster, we would have to review the alert logs, listener logs, and so on, on every node. That becomes a very lengthy procedure, and it is not easy to merge the information collected from all the nodes.

Hence Oracle introduced TFA (Trace File Analyzer), which makes this job much easier for DBAs. The TFA utility/bundle can be downloaded from My Oracle Support (MOS).



[root@test18c ~]# cd /u01/app/oracle/tfa/bin/
[root@test18c bin]# ./tfactl
tfactl> orachk
Using Orachk : /u01/app/oracle/tfa/test18c/tfa_home/ext/orachk/orachk

Running orachk
PATH : /u01/app/oracle/tfa/test18c/tfa_home/ext/orachk
VERSION : 18.3.0_20180808
COLLECTIONS DATA LOCATION : /u01/app/oracle/tfa/repository/suptools/test18c/orachk/root

List of running databases

1. test18c
2. None of above

Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1]. 1
. .
. .

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

. . . . . .
. . . . . . . . .
Oracle Stack Status 
Host Name  CRS Installed  ASM HOME  RDBMS Installed  CRS UP  ASM UP  RDBMS UP  DB Instance Name
test18c    No             No        Yes              No      No      Yes       test18c

Copying plug-ins

. .
. . . . . .

*** Checking Best Practice Recommendations ( PASS / WARNING / FAIL ) ***


Collections and audit checks log file is

Node name - test18c

Collecting - Database Parameters for test18c database
Collecting - Database Undocumented Parameters for test18c database
Collecting - RDBMS Feature Usage for test18c database
Collecting - CPU Information
Collecting - Disk I/O Scheduler on Linux
Collecting - DiskMount Information
Collecting - Kernel parameters
Collecting - Maximum number of semaphore sets on system
Collecting - Maximum number of semaphores on system
Collecting - Maximum number of semaphores per semaphore set
Collecting - Memory Information
Collecting - OS Packages
Collecting - Operating system release information and kernel version
Collecting - Patches for RDBMS Home
Collecting - Table of file system defaults
Collecting - number of semaphore operations per semop system call
Collecting - Disk Information
Collecting - Linux Operating system health check using
Collecting - Root user limits
Collecting - Verify no database server kernel out of memory errors

Data collections completed. Checking best practices on test18c.

CRITICAL => Bash is vulnerable to code injection (CVE-2014-6271)
WARNING => Linux swap configuration does not meet recommendation
INFO => Important Storage Minimum Requirements for Grid & Database Homes
WARNING => Non-AWR Space consumption is greater than or equal to 50% of total SYSAUX space. for test18c
INFO => Most recent ADR incidents for /u01/app/oracle/product/18.0.0/dbhome_1
INFO => Oracle GoldenGate failure prevention best practices
INFO => user_dump_dest has trace files older than 30 days for test18c
FAIL => Database parameter DB_BLOCK_CHECKSUM is not set to recommended value on test18c instance
FAIL => Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on test18c instance
WARNING => Database parameter DB_BLOCK_CHECKING on PRIMARY is NOT set to the recommended value. for test18c
INFO => Operational Best Practices
INFO => Database Consolidation Best Practices
INFO => Computer failure prevention best practices
INFO => Data corruption prevention best practices
INFO => Logical corruption prevention best practices
INFO => Database/Cluster/Site failure prevention best practices
INFO => Client failover operational best practices
WARNING => Duplicate objects were found in the SYS and SYSTEM schemas for test18c
WARNING => Oracle clusterware is not being used
WARNING => RAC Application Cluster is not being used for database high availability on test18c instance
WARNING => DISK_ASYNCH_IO is NOT set to recommended value for test18c
FAIL => Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for test18c
FAIL => Flashback on PRIMARY is not configured for test18c
INFO => Database failure prevention best practices
WARNING => fast_start_mttr_target has NOT been changed from default on test18c instance
WARNING => Database Archivelog Mode should be set to ARCHIVELOG for test18c
FAIL => Primary database is not protected with Data Guard (standby database) for real-time data protection and availability for test18c
FAIL => Active Data Guard is not configured for test18c
INFO => Parallel Execution Health-Checks and Diagnostics Reports for test18c
INFO => Oracle recovery manager(rman) best practices
WARNING => Linux Disk I/O Scheduler should be configured to Deadline
WARNING => Consider investigating changes to the schema objects such as DDLs or new object creation for test18c
WARNING => Consider investigating the frequency of SGA resize operations and take corrective action for test18c
Best Practice checking completed. Checking recommended patches on test18c
Collecting patch inventory on ORACLE_HOME /u01/app/oracle/product/18.0.0/dbhome_1
1 Recommended RDBMS patches for 180000 from /u01/app/oracle/product/18.0.0/dbhome_1 on test18c
Patch#    RDBMS  ASM  type   Patch-Description
28090523  yes         merge  DATABASE RELEASE UPDATE

RDBMS homes patches summary report

Total patches  Applied on RDBMS  Applied on ASM  ORACLE_HOME
1              1                 0               /u01/app/oracle/product/18.0.0/dbhome_1

Detailed report (html) - /u01/app/oracle/tfa/repository/suptools/test18c/orachk/root/orachk_test18c_test18c_111318_034400/orachk_test18c_test18c_111318_034400.html

UPLOAD [if required] - /u01/app/oracle/tfa/repository/suptools/test18c/orachk/root/



Starting from 12.2, this bundle is included with the RDBMS software, and it remains optional when we run the script; we can still skip it if it is not required.

We will see how TFA is initiated when we run the RDBMS installation scripts.

If we open the log file, we can see very detailed information, such as the trace directories scanned and the number of hosts covered.



tfactl> summary
LOGFILE LOCATION : /u01/app/oracle/tfa/repository/suptools/test18c/summary/root/20181113030332/log/summary_command_20181113030332_test18c_4674.log

Component Specific Summary collection :
- Collecting ACFS details ... Done.
- Collecting DATABASE details ... Done.
- Collecting PATCH details ... Done.
- Collecting LISTENER details ... Done.
- Collecting NETWORK details ... Done.
- Collecting OS details ... Done.
- Collecting TFA details ... Done.
- Collecting SUMMARY details ... Done.

Prepare Clusterwide Summary Overview ... 

Example of Collecting Diagnostic Data

We can see the options for each command using "tfactl <command> -help". In this example we will collect sample diagnostic data using TFA.
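As a sketch, a typical collection uses the diagcollect command; the flags and time window below are illustrative, so check "tfactl diagcollect -help" on your system for the exact syntax:

```shell
# Collect diagnostics from all components for roughly the last hour
# (time-window syntax may vary by TFA version).
tfactl diagcollect -all -since 1h
```

The resulting zip files land under the TFA repository directory and can be uploaded to an SR if required.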


How to check whether TFA is running or not?

This is simple: grep for the word "tfa" at the host level.
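A minimal host-level check (the process names shown will depend on your installation):

```shell
# Look for the TFA daemon among running processes.  The "[t]fa" bracket
# trick keeps the grep command itself out of the match list.
ps -ef | grep -i '[t]fa' || echo "TFA daemon is not running"
```

If TFA is up, the output typically shows the init.tfa wrapper and the TFAMain Java process.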




Basic TFACTL commands include:

tfactl start: Starts the Oracle Trace File Analyzer daemon on the local node.

tfactl stop: Stops the Oracle Trace File Analyzer daemon on the local node.

tfactl enable: Enables automatic restart of the Oracle Trace File Analyzer daemon after a failure or system reboot.

tfactl disable: Stops any running Oracle Trace File Analyzer daemon and disables automatic restart.

tfactl uninstall: Removes Oracle Trace File Analyzer from the local node.

tfactl syncnodes: Generates and copies Oracle Trace File Analyzer certificates from one Oracle Trace File Analyzer node to other nodes.

tfactl restrictprotocol: Restricts the use of certain protocols.

tfactl status: Checks the status of an Oracle Trace File Analyzer process. The output is the same as that of tfactl print status.

tfactl diagnosetfa: Use the tfactl diagnosetfa command to collect Oracle Trace File Analyzer diagnostic data from the local node to help diagnose issues with Oracle Trace File Analyzer.

tfactl host: Use the tfactl host command to add hosts to, or remove hosts from the Oracle Trace File Analyzer configuration.

tfactl set: Use the tfactl set command to enable or disable, or modify various Oracle Trace File Analyzer functions.

tfactl access: Use the tfactl access command to allow non-root users to have controlled access to Oracle Trace File Analyzer and to run diagnostic collections.
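A couple of these commands in practice; the set parameter name below is an assumption based on older TFA releases, so verify it with "tfactl set -help" on your version:

```shell
tfactl print status             # detailed daemon status, same as 'tfactl status'
tfactl set autodiagcollect=ON   # assumed key: enable automatic diagnostic collection
tfactl access lsusers           # list non-root users allowed to run tfactl
```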




Catch Me On:- Hariprasath Rajaram


Oracle Tuning-Analyze SQL with SQL Tuning Advisor


  • The SQL Tuning Advisor takes one or more SQL statements as input and invokes the Automatic Tuning Optimizer to perform SQL tuning on the statements.
  • The output of the SQL Tuning Advisor is a set of recommendations, along with a rationale for each recommendation and its expected benefit. A recommendation may relate to collection of statistics on objects, creation of new indexes, restructuring of the SQL statement, or creation of a SQL profile. You can choose to accept a recommendation to complete the tuning of the SQL statement.
  • You can also run the SQL Tuning Advisor selectively on a single SQL statement or a set of SQL statements that have been identified as problematic.
  • Find the problematic SQL_ID you would like to analyze from v$session; an AWR report also lists the top SQL_IDs.
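For example, a quick look at candidate SQL_IDs from currently active sessions (all columns are standard v$session fields):

```sql
-- Find candidate SQL_IDs from currently active sessions
SELECT sid, serial#, username, sql_id, last_call_et
FROM   v$session
WHERE  status = 'ACTIVE'
AND    sql_id IS NOT NULL;
```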

In order to access the SQL tuning advisor API, a user must be granted the ADVISOR privilege:

sqlplus / as sysdba
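The grant itself would look like the following; SCOTT here is a placeholder user name:

```sql
-- Allow a non-SYS user to run the SQL Tuning Advisor API
GRANT ADVISOR TO scott;
```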
Steps to tune the problematic SQL_ID using SQL TUNING ADVISOR :-

Create Tuning Task :

DECLARE
  my_task_name VARCHAR2(30);
BEGIN
  my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_id      => '43x11xxhxy1j7',
    time_limit  => 3600,
    task_name   => 'my_sql_tuning_task_1',
    description => 'Tune query using sqlid');
END;
/

Execute Tuning task :

EXEC DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'my_sql_tuning_task_1');

Monitor the task execution using the query below:
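A common way to check the task status is via DBA_ADVISOR_LOG, which tracks advisor tasks:

```sql
-- Check whether the tuning task has finished
SELECT task_name, status
FROM   dba_advisor_log
WHERE  task_name = 'my_sql_tuning_task_1';
```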


TASK_NAME                      STATUS
------------------------------ -----------
my_sql_tuning_task_1           COMPLETED

Check that the status of the task is COMPLETED; we can then get the recommendations of the advisor.

Report Tuning task :

SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task_1') from DUAL;



Tuning Task Name                  : my_sql_tuning_task_1

Tuning Task Owner                 : SYS

Scope                             : COMPREHENSIVE

Time Limit(seconds)               : 60

Completion Status                 : COMPLETED

Started at                        : 11/10/2018 19:47:27

Completed at                      : 11/10/2018 19:47:54
SQL_ID : 43x11xxhxy1j7

Number of SQL Profile Findings    : 1


1- SQL Profile Finding (see explain plans section below)


  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 99.94%)


  - Consider accepting the recommended SQL profile.

    execute dbms_sqltune.accept_sql_profile(task_name => 'my_sql_tuning_task_1',replace => TRUE);

To get detailed information :


Drop SQL Tuning task :

EXEC DBMS_SQLTUNE.drop_tuning_task (task_name => 'my_sql_tuning_task_1');
Another method for adding a new task using SQL TUNING ADVISOR :-

Check whether the PLAN_HASH_VALUE has changed for the specific statement, and get the SNAP_IDs to create a tuning task.
set lines 155
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
break on plan_hash_value on startup_time skip 1
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_lio
from dba_hist_sqlstat S, dba_hist_snapshot ss
where sql_id = nvl('&sql_id',sql_id)
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3;

Enter value for sql_id: 483wz173punyb

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ----- ------------ --------------
15694 1 10-NOV-18 AM 483wz173punyb 2391860790 1 4,586.818 33,924,912.0
15695 1 10-NOV-18 AM 483wz173punyb 2 1,488.867 0,064,449.0
15696 1 10-NOV-18 AM 483wz173punyb 2 1,053.459 8,780,977.0
Create a tuning task for the specific statement from AWR snapshots:-

Create, execute and report the task from the given AWR snapshot IDs.

Create Task,

SET SERVEROUTPUT ON
DECLARE
  l_sql_tune_task_id VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
    begin_snap  => 1868,
    end_snap    => 1894,
    sql_id      => '483wz173punyb',
    scope       => DBMS_SQLTUNE.scope_comprehensive,
    time_limit  => 300,
    task_name   => '483wz173punyb_tuning_task',
    description => 'Tuning task for statement 483wz173punyb in AWR.');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

Execute Task,

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '483wz173punyb_tuning_task');

Report task,

SET LONG 10000;
SELECT DBMS_SQLTUNE.report_tuning_task('483wz173punyb_tuning_task') AS recommendations FROM dual;

Interrupt Tuning task :

EXEC DBMS_SQLTUNE.interrupt_tuning_task (task_name => '483wz173punyb_tuning_task');

Resume Tuning task :

EXEC DBMS_SQLTUNE.resume_tuning_task (task_name => '483wz173punyb_tuning_task');

Cancel Tuning task :

EXEC DBMS_SQLTUNE.cancel_tuning_task (task_name => '483wz173punyb_tuning_task');

Reset Tuning task :

EXEC DBMS_SQLTUNE.reset_tuning_task (task_name => '483wz173punyb_tuning_task');

Catch Me On:- Hariprasath Rajaram
