Oracle Tuning
Version 21c

General Information
Tuning Note: The most senior people in our world are the tuning experts. Names like Jonathan Lewis, Christian Antognini, Cary Millsap, and Tanel Poder come quickly to mind. And yet I find, from my experience, that most of the big, ugly performance and availability problems I confront in my consulting practice have little or nothing to do with anything they write or anything I've put on this page: maybe I'm just unlucky.

What I find is that the issues I confront are almost always infrastructure: RAC-ignorant applications running on a RAC cluster with lots of remastering; network switches and firewalls improperly purchased, configured, and deployed, often with brain-dead virtualization; DNS and resolv.conf issues creating infinite loops; and storage layouts that look like they were designed by someone who still thinks RAID5 on the "C" drive is the way to go.

And don't get me started on AWR and ASH reports. Wonderful tools in the hands of someone so skilled they can look at DFS Lock Handle waits and say "Aha, it is a RAC cluster so I will look at the size of the sequence caches." There aren't a lot of us that do that. So by all means produce these reports and examine them. But if you can't tell the difference between I/O issues caused by I/O and I/O issues caused by CPU exhaustion, I fail to see their value for 95+% of practitioners of the dark art.

So if you need tuning help ... buy the books of the masters ... I'm not one of them ... that's what I do. Take their private classes more than once. Attend their sessions at user group conferences. And maybe something here will stimulate a few synapses in the right direction: No promises made.
 
Q&A 1 Question:

How Can I Get Help Tuning A SQL Statement?

Answer:

If you were referred to this page from an OTN or other forum most likely the reason is one of the following:
  1. You did not include your version number (SELECT * FROM v$version). 11g or 12c are not version numbers. They are marketing labels with no value in tuning.
  2. You did not include DDL for tables and indexes. You cannot tune in the absence of column data type and index information, and neither can we.
  3. You did not include an Explain Plan report generated using DBMS_XPLAN.DISPLAY (see the sketch after this list). Toad and other products' explain plan reports are worthless.
  4. You provided no information on RAC vs. stand-alone ... single tenant versus multi-tenant
  5. You provided no information on the infrastructure: servers, storage, network
  6. You provided no information on the number of simultaneous users or other activities running in the database or on the server
  7. You did not read the FAQ and post your listing in a formatted and readable form.
Try again and perhaps someone will be able to help you.
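A minimal sketch of gathering items 1 and 3 before posting; the table name and statement are placeholders for your own, and the DDL requested in item 2 can be pulled with DBMS_METADATA:

SELECT * FROM v$version;

EXPLAIN PLAN FOR
SELECT COUNT(*)   -- replace with your problem statement
FROM your_table;

SELECT * FROM TABLE(dbms_xplan.display);

SELECT dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE') FROM dual;
SELECT dbms_metadata.get_ddl('INDEX', 'YOUR_INDEX') FROM dual;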
Q&A 2 Question:

How does the latch process in the shared pool work?

Answer:
  1. Compute the hash value of the statement
  2. Use the hash value to determine the part of the shared pool to be latched (locked)
  3. Find the statement (if already in the Shared Pool)
  4. If not present hard-parse the statement (syntax and lexical check, privilege check, optimize)
  5. If it is present skip the syntax and lexical check. Perform the privilege check and optimize
  6. Release the latch
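A quick way to watch the soft-parse versus hard-parse split these steps produce is to compare your own session's parse statistics before and after running a statement; a minimal sketch using standard statistic names:

SELECT sn.name, ms.value
FROM v$mystat ms, v$statname sn
WHERE ms.statistic# = sn.statistic#
AND sn.name IN ('parse count (total)', 'parse count (hard)', 'session cursor cache hits');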
-- Use x$ksmlru to examine shared pool usage
-- Rows returned are flushed from the linked list,
-- Subsequent queries return the next set of rows, if any


conn / as sysdba

SELECT ksmlrsiz CONTIG_MEM, ksmlrnum OBJ_DISPLACED, ksmlrhon INBOUND_OBJ, ksmlrohv HASH, ksmlrses SADDR
FROM x$ksmlru;

CONTIG_MEM OBJ_DISPLACED INBOUND_OBJ                      HASH       SADDR
---------- ------------- -------------------------------- ---------- ----------------
      4224             8 DECLARE job BINARY_INTEGER :...  4104900106 00007FFA8EF8AC10
      4216             8 CREATE GLOBAL TEMPORARY TABL...  3046473697 00007FFA8D1AD560
      4200             8 SELECT source, (case...          1320826287 00007FFA8D2F7CB8
      4152             8 MGMT_DB_LL_METRICS               3448408159 00007FFA8EF8AC10
      4128             8 MGMT_DB_LL_METRICS               3448408159 00007FFA8EF8AC10
      4120             8 UTL_FILE                         2513398948 00007FFA8EF8AC10
      4096             8 select obj#,type#,ctime,mtim...  2698389488 00007FFA8EF85EE0
      4000             8 WRI$_OPTSTAT_HISTHEAD_HISTORY    3565337079 00007FFA8D2D6068

8 rows selected.

-- If a second query returns no rows that means no new objects have been loaded
-- into the shared pool since the previous query


SQL> /

CONTIG_MEM OBJ_DISPLACED INBOUND_OBJ                            HASH SADDR
---------- ------------- -------------------------------- ---------- ----------------
         0             0                                           0 00
         0             0                                           0 00
         0             0                                           0 00
         0             0                                           0 00
         0             0                                           0 00
         0             0                                           0 00
         0             0                                           0 00
         0             0                                           0 00


-- to view the full text use the hash, to view the session use the SADDR

SELECT ksmlrsiz CONTIG_MEM, ksmlrnum OBJ_DISPLACED, ksmlrohv HASH, ksmlrses ADDR, sql_fulltext
FROM x$ksmlru x, v$sqlarea v
WHERE x.ksmlrohv = v.hash_value;

-- If the shared pool is too large and too fragmented, the time spent holding latches may impact performance
-- If that is the case, flush the shared pool from time to time as required (see the sketch below)
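A sketch of the flush itself, with an optional DBMS_SHARED_POOL.KEEP call first so that objects you rely on are pinned and not aged out by the flush (DBMS_SHARED_POOL is owned by SYS and may need to be granted; SYS.STANDARD is just an example object):

exec dbms_shared_pool.keep('SYS.STANDARD', 'P');

ALTER SYSTEM FLUSH SHARED_POOL;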
 
Adaptive Cursor Sharing
For statements using bind variables, check whether their cursors are bind sensitive and/or bind aware
SELECT sql_id, is_bind_sensitive, is_bind_aware
FROM v$sql
WHERE executions > 10000;

SQL_ID        I I
------------- - -
49s332uhbnsma N N
d7bgf84kwj3s7 N N
3qk9bh5x55an0 N N
5dqz0hqtp9fru N N
5dqz0hqtp9fru N N
b9nbhsbx8tqz5 N N
1rpgk59t8pvs6 N N
gjaap3w3qbf8c N N
gjaap3w3qbf8c N N

9 rows selected.


-- cursor sharing execution statistics
SELECT sql_id, peeked, executions
FROM v$sql_cs_statistics
WHERE executions > 10;

no rows selected

-- also access the following dynamic performance views for additional information
v$sql_cs_histogram
v$sql_cs_selectivity
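A sketch of looking at both views for a single statement; substitute a SQL_ID returned by the queries above:

SELECT *
FROM v$sql_cs_histogram
WHERE sql_id = '&sql_id';

SELECT *
FROM v$sql_cs_selectivity
WHERE sql_id = '&sql_id';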
 
Alert Messages
Much can be learned by familiarizing yourself with normal alert message traffic so that you can recognize abnormal behavior.

SELECT user_data
FROM sys.alert_qt
WHERE enq_time > SYSDATE-1
ORDER BY enq_time;
 
Caching
One of the things you want to avoid, in any situation, is rereading data. If you have it in memory ... you want to be able to perform multiple operations on it right then and there.

This demo is based on code I received from Tim Hill.
conn uwclass/uwclass@pdbdev

CREATE OR REPLACE TYPE t_airplanes_row AUTHID CURRENT_USER AS OBJECT (
program_id     VARCHAR2(3),
line_number    NUMBER(10),
customer_id    VARCHAR2(4),
order_date     DATE,
delivered_date DATE);
/

CREATE OR REPLACE TYPE t_airplanes_tab AS TABLE OF t_airplanes_row;
/

CREATE OR REPLACE PACKAGE cache_demo AUTHID CURRENT_USER AS
 l_tab t_airplanes_tab;
 PROCEDURE populate_cache;
 FUNCTION get_airplanes RETURN t_airplanes_tab;
END cache_demo;
/

CREATE OR REPLACE PACKAGE BODY cache_demo AS
 PROCEDURE populate_cache AS
 BEGIN
   SELECT t_airplanes_row(program_id, line_number,
   customer_id, order_date, delivered_date)
   BULK COLLECT INTO l_tab
   FROM airplanes
   WHERE rownum < 50001;
 END;
 ------------------------------------------------
 FUNCTION get_airplanes RETURN t_airplanes_tab AS
 BEGIN
   RETURN l_tab;
 END;
BEGIN
  populate_cache;
END cache_demo;
/

SELECT e.program_id, SUM(e.line_number)
FROM TABLE(cache_demo.get_airplanes) e
GROUP BY e.program_id;

SELECT e.program_id, SUM(e.line_number)
FROM TABLE(cache_demo.get_airplanes) e
WHERE order_date BETWEEN TO_DATE('01-NOV-2012','DD-MON-YYYY') AND TO_DATE('31-DEC-2018','DD-MON-YYYY')
GROUP BY e.program_id;
Scalar Subquery Caching

Tom Kyte discusses this in the Sep/Oct 2011 issue of Oracle Magazine. This demo is based on Tom's excellent article.
conn uwclass/uwclass@pdbdev

SELECT d.deptno, d.dname, COUNT(e.empno)
FROM dept d, emp e
WHERE d.deptno = e.deptno(+)
GROUP BY d.deptno, d.dname
ORDER BY 1;

    DEPTNO DNAME          COUNT(E.EMPNO)
---------- -------------- --------------
        10 ACCOUNTING                  3
        20 RESEARCH                    5
        30 SALES                       6
        40 OPERATIONS                  0


EXPLAIN PLAN FOR
SELECT d.deptno, d.dname, COUNT(e.empno) CNT
FROM dept d, emp e
WHERE d.deptno = e.deptno(+)
GROUP BY d.deptno, d.dname
ORDER BY 1;

SELECT * FROM TABLE(dbms_xplan.display);

---------------------------------------------------------------------------------------
| Id | Operation                     | Name    | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT              |         |    9 |   180 |     7  (29)| 00:00:01 |
|  1 |  SORT GROUP BY                |         |    9 |   180 |     7  (29)| 00:00:01 |
|  2 |   MERGE JOIN OUTER            |         |   15 |   300 |     6  (17)| 00:00:01 |
|  3 |    TABLE ACCESS BY INDEX ROWID| DEPT    |    4 |    52 |     2   (0)| 00:00:01 |
|  4 |     INDEX FULL SCAN           | PK_DEPT |    4 |       |     1   (0)| 00:00:01 |
|* 5 |    SORT JOIN                  |         |   15 |   105 |     4  (25)| 00:00:01 |
|  6 |     TABLE ACCESS FULL         | EMP     |   15 |   105 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------


SELECT d.deptno, d.dname, (SELECT COUNT(*) FROM emp e WHERE e.deptno = d.deptno) CNT
FROM dept d
ORDER BY 1;

    DEPTNO DNAME                 CNT
---------- -------------- ----------
        10 ACCOUNTING              3
        20 RESEARCH                5
        30 SALES                   6
        40 OPERATIONS              0


EXPLAIN PLAN FOR
SELECT d.deptno, d.dname, (SELECT COUNT(*) FROM emp e WHERE e.deptno = d.deptno) CNT
FROM dept d
ORDER BY 1;

SELECT * FROM TABLE(dbms_xplan.display);

-------------------------------------------------------------------------------------
| Id | Operation                   | Name    | Rows | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |         |    4 |    52 |     2   (0)| 00:00:01 |
|  1 |  SORT AGGREGATE             |         |    1 |     3 |            |          |
|* 2 |   TABLE ACCESS FULL         | EMP     |    5 |    15 |     3   (0)| 00:00:01 |
|  3 |  TABLE ACCESS BY INDEX ROWID| DEPT    |    4 |    52 |     2   (0)| 00:00:01 |
|  4 |   INDEX FULL SCAN           | PK_DEPT |    4 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
 
Control File
Control File Waits

SELECT inst_id, total_waits, total_timeouts, time_waited
FROM gv$system_event
WHERE event = 'control file sequential read';
 
Disk I/O
I/O Rules of Thumb

db file sequential read: < 10 ms
db file scattered read: 10-30 ms (dependent on I/O size)
log file parallel write: < 5 ms (into cache)
db file parallel write: < 5 ms (into cache) but not as important if asynchronous I/O is enabled

A high ratio of blocks read per read request is indicative of full table scans.
-- phyrds is the number of physical reads
-- phyblkrd is the number of physical blocks read during the physical reads.


SELECT d.tablespace_name, f.file#, round(f.phyblkrd / f.phyrds, 3) RATIO
FROM gv$filestat f, dba_data_files d
WHERE f.file# = d.file_id;
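To compare actual averages against the rules of thumb above, the same events can be pulled from gv$system_event; a sketch (time_waited_micro is used so the result is in milliseconds):

SELECT inst_id, event, total_waits,
ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AVG_MS
FROM gv$system_event
WHERE event IN ('db file sequential read', 'db file scattered read',
'log file parallel write', 'db file parallel write')
ORDER BY 1, 2;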
 
Dynamic Sampling
When the database works with tables (not remote or external) that are missing statistics, the database, by default, dynamically gathers statistics.

/* Dynamic sampling can be implemented by means of ALTER SYSTEM, ALTER SESSION, or hinting.
The following demo uses ALTER SESSION but you will likely want to use the hint for production
code (see the sketch after the demo). Be sure you carefully read the docs on the sampling levels,
when they are appropriate, and what they mean. Random use of dynamic sampling does not make
things better. */

CREATE TABLE t AS SELECT * FROM dba_objects;

CREATE INDEX ix_t
ON t(object_name);

SELECT COUNT(*) FROM t;

SELECT blocks, num_rows, avg_row_len
FROM user_tables
WHERE table_name = 'T';

set autotrace traceonly explain

ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING=0;
SELECT * FROM t WHERE SUBSTR(object_name,1,4) =  'DBMS';

ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING=1;
SELECT * FROM t WHERE SUBSTR(object_name,1,4) =  'DBMS';

ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING=2;
SELECT * FROM t WHERE SUBSTR(object_name,1,4) =  'DBMS';

ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING=3;
SELECT * FROM t WHERE SUBSTR(object_name,1,4) =  'DBMS';
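For production code the equivalent per-statement form is the DYNAMIC_SAMPLING hint; a sketch against the demo table, using level 4 purely as an example:

ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING=2;  -- back to the default

SELECT /*+ dynamic_sampling(t 4) */ *
FROM t
WHERE SUBSTR(object_name,1,4) = 'DBMS';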
 
File Access
Unusually long access times may indicate I/O issues.

SELECT fmh.inst_id, fmh.begin_time, fmh.average_read_time, df.file_name, df.tablespace_name
FROM gv$filemetric_history fmh, dba_data_files df
WHERE fmh.average_read_time >= 50
AND fmh.file_id = df.file_id
ORDER BY 2,1;
Another File Access Query

SELECT tsname, MAX(end_time), instance_number, SUM(phyreads) READS,
ROUND(DECODE(SUM(snapdur_secs),0,0,SUM(PHYREADS)/max(SNAPDUR_SECS)),2) AV_READS_PER_S,
ROUND(DECODE(SUM(phyreads),0,0,SUM(READTIME_MS)/SUM(phyreads)),2) AverageRdTime_in_MS,
ROUND(DECODE(SUM(phyreads),0,0,SUM(PHYBLKRD)/SUM(phyreads)),2) BLOCKS_PER_READ,
SUM(phywrites) Writes,
ROUND(DECODE(SUM(snapdur_secs),0,0,SUM(phywrites)/MAX(snapdur_secs)),2) AV_WRITES_PER_S,
SUM(wait_count) Buffer_Waits,
ROUND(DECODE(SUM(wait_count),0,0,SUM(wait_time_ms)/SUM(wait_count)),2) AvBuffer_Waittime_MS,
SUM(phyreads) + SUM(phywrites) io_operations_rw
FROM (
  SELECT s.snap_id, TRUNC(h.begin_interval_time, 'MI') BEGIN_TIME,
  TRUNC(h.end_interval_time, 'MI') END_TIME,
  end_interval_time - h.begin_interval_time INT_DURATION,
  (EXTRACT(SECOND FROM (h.begin_interval_time - (LAG(h.begin_interval_time,1) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))))
 + EXTRACT(MINUTE FROM (h.begin_interval_time - (LAG(h.begin_interval_time,1) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time)))*60)
 + EXTRACT(HOUR from (h.begin_interval_time - (LAG(h.begin_interval_time,1) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time)))*3600)
 + EXTRACT(DAY from (h.begin_interval_time - (LAG(h.begin_interval_time,1) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time)))*86400)) snapdur_secs,
  DECODE(s.instance_number, (LAG(s.instance_number,1) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time)),'Y','N') INSTMATCH,
  DECODE(s.file#, (LAG(s.file#,1) OVER(ORDER BY h.instance_number, s.filename, h.begin_interval_time)),'Y','N') filematch, s.instance_number, s.file#,
s.filename, s.tsname, (s.phyrds-(LAG(s.phyrds,1,0) OVER (ORDER BY  h.instance_number, s.filename, h.begin_interval_time))) phyreads,
(s.phywrts - (LAG(s.phywrts,1,0) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))) phywrites, (s.singleblkrds - (LAG(s.singleblkrds,1,0) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))) singleblkrds, 10 * (s.readtim - (LAG(s.readtim, 1, 0) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))) readtime_ms,
10 * (s.writetim - (LAG(s.writetim, 1, 0) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))) writetime_ms, 10 * (s.singleblkrdtim - (LAG(s.singleblkrdtim,1,0) OVER (ORDER BY  h.instance_number, s.filename, h.begin_interval_time))) singleblkrdtim_ms, (s.phyblkrd - (LAG(s.phyblkrd,1,0) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))) PHYBLKRD,
(s.phyblkwrt - (LAG(s.phyblkwrt,1,0) OVER (ORDER BY h.instance_number, s.filename, h.begin_interval_time))) PHYBLKWRT, (s.wait_count - (LAG(s.wait_count, 1, 0) OVER (order by h.instance_number, s.filename, h.begin_interval_time))) WAIT_COUNT,
10 * (s.time - (LAG(s.time, 1, 0) OVER (order by h.instance_number, s.filename, h.begin_interval_time))) WAIT_TIME_MS
  FROM dba_hist_filestatxs s, dba_hist_snapshot h
  WHERE h.instance_number = s.instance_number
  AND s.dbid = h.dbid
  AND s.snap_id = h.snap_id
  AND h.begin_interval_time > SYSDATE - 0.05)
WHERE phyreads >= 0
AND instmatch = 'Y'
AND filematch = 'Y'
GROUP BY tsname, snap_id, instance_number
ORDER BY tsname, snap_id, instance_number;
 
Fully Qualified Names
In all cases, but especially when multiple objects in different schemas share the same object name, use fully qualified names in all SQL and PL/SQL.

-- SQL not utilizing fully qualified names
EXPLAIN PLAN FOR
SELECT srvr_id
FROM servers s
WHERE EXISTS (
  SELECT srvr_id
  FROM serv_inst i
  WHERE s.srvr_id = i.srvr_id);

-- SQL with fully qualified names. Write all SQL and PL/SQL this way
EXPLAIN PLAN FOR
SELECT srvr_id
FROM uwclass.servers s
WHERE EXISTS (
  SELECT srvr_id
  FROM uwclass.serv_inst i
  WHERE s.srvr_id = i.srvr_id);
 
Hinting
Full Hinting Demo

CREATE TABLE t1 AS
SELECT * FROM all_objects
WHERE ROWNUM = 1;

ALTER TABLE t1
ADD CONSTRAINT pk_t1
PRIMARY KEY(object_id)
USING INDEX;

CREATE TABLE t2 AS
SELECT * FROM all_objects;

ALTER TABLE t2
ADD CONSTRAINT pk_t2
PRIMARY KEY(object_id)
USING INDEX;

exec dbms_stats.gather_table_stats('UWCLASS', 'T1', CASCADE=>TRUE);
exec dbms_stats.gather_table_stats('UWCLASS', 'T2', CASCADE => TRUE);

ALTER SESSION SET tracefile_identifier='base plan';
ALTER SESSION SET EVENTS '10053 trace name context forever,level 1';
ALTER SESSION SET EVENTS '10046 trace name context forever,level 1';

SELECT COUNT(*)
FROM t2, t1
WHERE t1.object_id = t2.object_id
AND t2.owner='UWCLASS';

ALTER SESSION SET tracefile_identifier='hinted plan';

SELECT /*+ use_nl(t2 t1) */ COUNT(*)
FROM t2, t1
WHERE t1.object_id = t2.object_id
AND t2.owner='UWCLASS';

ALTER SESSION SET tracefile_identifier='fully hinted plan';

SELECT /*+ ordered use_nl(t2 t1) */ COUNT(*)
FROM t2, t1
WHERE t1.object_id = t2.object_id
AND t2.owner='UWCLASS';

ALTER SESSION SET EVENTS '10053 trace name context off';
ALTER SESSION SET EVENTS '10046 trace name context off';
 
Log Buffer and Files
If you see substantial waits for log buffer space consider enlarging the memory-based log buffer.

If you see substantial waits for log file sync examine the I/O performance of the online redo log files.
-- waits for space in the log file
SELECT inst_id, name, value
FROM gv$sysstat
WHERE name = 'redo log space requests';

SELECT inst_id, name, block_size, resize_state, current_size, target_size
FROM gv$buffer_pool;

SELECT inst_id, name, value
FROM gv$parameter
WHERE name LIKE '%log%buf%';

-- this will require the database be bounced to take effect
ALTER SYSTEM SET log_buffer = 10240000 SCOPE=spfile SID='*' CONTAINER=ALL;
 
Memory Optimization
Memory over time

Posted by Steve Howard at c.d.o.server on 4-Dec-2007; it is just as valid today, more than a decade later.
SELECT time, instance_number,
MAX(DECODE(name, 'free memory',   shared_pool_bytes, NULL)) free_memory,
MAX(DECODE(name, 'library cache', shared_pool_bytes, NULL)) library_cache,
MAX(DECODE(name, 'sql area',      shared_pool_bytes, NULL)) sql_area
FROM (
  SELECT TO_CHAR(begin_interval_time,'YYYY_MM_DD HH24:MI') time,
  dhs.instance_number, name, bytes - LAG(bytes, 1, NULL)
  OVER (ORDER BY dhss.instance_number,name,dhss.snap_id) AS shared_pool_bytes
  FROM dba_hist_sgastat dhss, dba_hist_snapshot dhs
  WHERE dhss.name IN ('free memory', 'library cache', 'sql area')
  AND pool = 'shared pool'
  AND dhss.snap_id = dhs.snap_id
  AND dhss.instance_number = dhs.instance_number
  ORDER BY dhs.snap_id,name)
GROUP BY time, instance_number
ORDER BY 1,2;
 
Migrating Table Rows
Migrated rows affect OLTP systems which use indexed reads. In the worst case, they can add an extra I/O to every read.

conn sys@pdbdev as sysdba

col value format 99999999

SELECT sn.name, ss.value, ss.con_id
FROM v$statname sn, v$sysstat ss
WHERE sn.statistic# = ss.statistic#
AND lower(sn.name) = 'table fetch continued row';

-- in a multi-tenant container database check each container separately
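If the counter keeps growing, the migrated rows can be listed and repaired by moving the table; a sketch, assuming the standard CHAINED_ROWS table has been created with $ORACLE_HOME/rdbms/admin/utlchain.sql and that UWCLASS.AIRPLANES is the affected table:

ANALYZE TABLE uwclass.airplanes LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) FROM chained_rows;

-- moving the table relocates migrated rows; every index on it is left unusable and must be rebuilt
ALTER TABLE uwclass.airplanes MOVE;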
 
NULL and the CBO
The CBO may make different decisions based on whether it is possible for a column to contain NULLs.

conn uwclass/uwclass@pdbdev

CREATE TABLE t (
rid  NUMBER(12),
col1 VARCHAR2(30),
col2 VARCHAR2(300));

ALTER TABLE t
ADD CONSTRAINT pk_t
PRIMARY KEY(rid)
USING INDEX
PCTFREE 0;

CREATE INDEX ix_t_col1
ON t(col1)
PCTFREE 0;

CREATE SEQUENCE seq_t_rid;

INSERT INTO t
SELECT seq_t_rid.NEXTVAL, dbms_crypto.randombytes(15), dbms_crypto.randombytes(150)
FROM dual
CONNECT BY LEVEL<=100000;

COMMIT;

SELECT *
FROM t
WHERE rownum < 11;

exec dbms_stats.gather_table_stats(USER, 'T', CASCADE=>TRUE);

EXPLAIN PLAN FOR
SELECT DISTINCT rid FROM t;

set linesize 121

SELECT * FROM TABLE(dbms_xplan.display);

EXPLAIN PLAN FOR
SELECT DISTINCT col1 FROM t;

SELECT * FROM TABLE(dbms_xplan.display);

ALTER TABLE t
MODIFY col1 NOT NULL;

EXPLAIN PLAN FOR
SELECT DISTINCT col1 FROM t;

SELECT * FROM TABLE(dbms_xplan.display);
 
NULL IS NOT NULL
The CBO uses constraints to make smarter decisions.

conn scott/tiger@pdbdev

SELECT deptno, COUNT(*)
FROM emp
GROUP BY deptno;

EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 36;

SELECT * FROM TABLE(dbms_xplan.display);

ALTER TABLE emp
ADD CONSTRAINT cc_emp_deptno
CHECK(deptno IN (10,20,30));

EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 36;

SELECT * FROM TABLE(dbms_xplan.display);

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)
   2 - filter("DEPTNO"=36)
 
NULL Pruning
conn uwclass/uwclass@pdbdev

CREATE TABLE parent (
parent_id  NUMBER(10),
first_name VARCHAR2(20),
last_name  VARCHAR2(20));

CREATE TABLE child (
child_id   NUMBER(10),
parent_id  NUMBER(10),
birth_date DATE);

BEGIN
  FOR i IN 1..500000 LOOP
    INSERT INTO parent VALUES (i, 'Daniel', 'Morgan');
    INSERT INTO child VALUES (i*2, i, SYSDATE);
    INSERT INTO child VALUES (i*3, i, SYSDATE);
    INSERT INTO child VALUES (i*4, i, SYSDATE);
  END LOOP;
  COMMIT;
END;
/

UPDATE child
SET birth_date = NULL
WHERE TO_CHAR(child_id) LIKE '%2';

UPDATE child
SET birth_date = NULL
WHERE TO_CHAR(child_id) LIKE '%6';

COMMIT;

SELECT 'Is Not Null', COUNT(*)
FROM child
WHERE birth_date IS NOT NULL
UNION
SELECT 'Is Null', COUNT(*)
FROM child
WHERE birth_date IS NULL;

SELECT birth_date, COUNT(*)
FROM child
GROUP BY birth_date;

CREATE INDEX ix_child_dob
ON child(birth_date)
PCTFREE 0;

exec dbms_stats.gather_table_stats('UWCLASS', 'PARENT');
exec dbms_stats.gather_table_stats('UWCLASS', 'CHILD');

set timing on

SELECT COUNT(*)
FROM parent p, child c
WHERE p.parent_id = c.parent_id;

SELECT COUNT(*)
FROM parent p, child c
WHERE p.parent_id = c.parent_id
AND birth_date is NOT NULL;
 
Operating System
Operating System Statistics

desc v$osstat

col stat_name format a25
col comments format a63

SELECT stat_name, value, comments, cumulative
FROM v$osstat
ORDER BY 1;

SELECT dho.instance_number, TRUNC(dhs.begin_interval_time, 'HH24') DATETIME, dho.stat_name, AVG(dho.value) AVG_VAL
FROM dba_hist_osstat dho, dba_hist_snapshot dhs
WHERE dho.snap_id = dhs.snap_id
AND dhs.begin_interval_time BETWEEN TRUNC(SYSDATE) AND SYSDATE
GROUP BY dho.instance_number, TRUNC(dhs.begin_interval_time, 'HH24'), dho.stat_name;
Set kernel swappiness to reduce the aggressive default behaviour (run sysctl -p afterwards, or reboot, for the change to take effect):

# echo vm.swappiness=40 >> /etc/sysctl.conf
 
Parallel Query
Note: Be sure, when using parallel query capabilities, that you properly set the initialization parameters PARALLEL_DEGREE_POLICY, PARALLEL_FORCE_LOCAL, PARALLEL_IO_CAP_ENABLED, and PARALLEL_DEGREE_LIMIT, and validate that the other related parameters have valid values.
Invoking Parallel Query

ALTER SYSTEM SET parallel_force_local=TRUE SCOPE=BOTH container=ALL;
ALTER SYSTEM SET parallel_io_cap_enabled=TRUE SCOPE=BOTH container=ALL;

EXPLAIN PLAN FOR
SELECT SUM(amount_sold)
FROM sh.sales
WHERE time_id BETWEEN TO_DATE('01-JAN-21','DD-MON-RR') AND TO_DATE('31-DEC-21','DD-MON-RR');

SELECT * FROM TABLE(dbms_xplan.display);

----------------------------------------------------------------------
| Id | Operation                                    | Name           |
----------------------------------------------------------------------
|  0 | SELECT STATEMENT                             |                |
|  1 |  SORT AGGREGATE                              |                |
|* 2 |   FILTER                                     |                |
|  3 |    PARTITION RANGE ITERATOR                  |                |
|  4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES          |
|  5 |      BITMAP CONVERSION TO ROWIDS             |                |
|* 6 |       BITMAP INDEX RANGE SCAN                | SALES_TIME_BIX |
----------------------------------------------------------------------


ALTER TABLE sh.sales PARALLEL 2;

EXPLAIN PLAN FOR
SELECT SUM(amount_sold)
FROM sh.sales
WHERE time_id BETWEEN TO_DATE('01-JAN-01','DD-MON-RR') AND TO_DATE('31-DEC-10','DD-MON-RR');

SELECT * FROM TABLE(dbms_xplan.display);

EXPLAIN PLAN FOR
SELECT /*+ PARALLEL */ SUM(amount_sold)
FROM sh.sales
WHERE time_id BETWEEN TO_DATE('01-JAN-01','DD-MON-RR') AND TO_DATE('31-DEC-10','DD-MON-RR');

SELECT * FROM TABLE(dbms_xplan.display);

---------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name           | Pstart| Pstop | TQ    |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |                |       |       |       |      |            |
|  1 |  SORT AGGREGATE                     |                |       |       |       |      |            |
|* 2 |   PX COORDINATOR                    |                |       |       |       |      |            |
|  3 |    PX SEND QC (RANDOM)              | :TQ10000       |       |       | Q1,00 | P->S | QC (RAND)  |
|  4 |     SORT AGGREGATE                  |                |       |       | Q1,00 | PCWP |            |
|* 5 |      FILTER                         |                |       |       | Q1,00 | PCWC |            |
|  6 |       PX PARTITION RANGE ITERATOR   |                |  KEY  |  KEY  | Q1,00 | PCWC |            |
|  7 |        TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES          |  KEY  |  KEY  | Q1,00 | PCWP |            |
|  8 |         BITMAP CONVERSION TO ROWIDS |                |       |       | Q1,00 | PCWP |            |
|* 9 |          BITMAP INDEX RANGE SCAN    | SALES_TIME_BIX |  KEY  |  KEY  | Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------


Note
-----
-- automatic DOP: Computed Degree of Parallelism is 2
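Remember to return the table to its original setting once the demo is finished, so the PARALLEL 2 decoration does not silently parallelize other work; a short cleanup sketch:

ALTER TABLE sh.sales NOPARALLEL;

SELECT table_name, degree
FROM all_tables
WHERE owner = 'SH'
AND table_name = 'SALES';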
 
Parsing
Parsing Efficiency

conn uwclass/uwclass@pdbdev

CREATE TABLE t (
mycol NUMBER(5));

set timing on

BEGIN
  FOR i IN 1 .. 10000
  LOOP
    EXECUTE IMMEDIATE 'INSERT INTO t VALUES (:x)'
    USING i;
  END LOOP;
END;
/

DECLARE
 cur    PLS_INTEGER := dbms_sql.open_cursor;
 str    VARCHAR2(200);
 retval NUMBER;
BEGIN
  FOR i IN 10001 .. 20000
  LOOP
    str := 'INSERT INTO t VALUES (' || TO_CHAR(i) || ')';
    dbms_sql.parse(cur, str, dbms_sql.native);
    RetVal := dbms_sql.execute(cur);
  END LOOP;
  dbms_sql.close_cursor(cur);
END;
/

DECLARE
 cur    PLS_INTEGER := dbms_sql.open_cursor;
 str    VARCHAR2(200);
 retval NUMBER;
BEGIN
  str := 'INSERT INTO t VALUES (:x)';
  dbms_sql.parse(cur, str, dbms_sql.native);
  FOR i IN 20001 .. 30000
  LOOP
    dbms_sql.bind_variable(cur,':x', i);
    RetVal := dbms_sql.execute(cur);
  END LOOP;
  dbms_sql.close_cursor(cur);
END;
/

BEGIN
  FOR i IN 30001..40000
  LOOP
    INSERT INTO t VALUES (i);
  END LOOP;
END;
/

-- 0.35 seconds

set timing off
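To see where the time differences come from, compare how many distinct cursors each loop left behind and how often they were parsed; a sketch against v$sql (the literal-building loop creates thousands of one-execution cursors, the bind-variable loops share one):

SELECT sql_text, executions, parse_calls
FROM v$sql
WHERE UPPER(sql_text) LIKE 'INSERT INTO T VALUES%'
ORDER BY parse_calls DESC
FETCH FIRST 10 ROWS ONLY;

SELECT COUNT(*) DISTINCT_CURSORS
FROM v$sql
WHERE UPPER(sql_text) LIKE 'INSERT INTO T VALUES%';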
 
Plan Changes
Look for times when a single query is generating multiple plans. The following statements will help you find them by comparing today's plans with those from 1, 2, and 3 days ago.

WITH q AS (SELECT DISTINCT dhsp.sql_id, dhsp.plan_hash_value, dhsp.cpu_cost,
           dhsp.io_cost, TRUNC(dhsp.timestamp) TIMESTAMP
           FROM dba_hist_sql_plan dhsp)
SELECT q1.sql_id, q1.plan_hash_value, AVG(q1.cpu_cost) CPU1, AVG(q2.cpu_cost) CPU2,
       AVG(q1.io_cost) IO1, AVG(q2.io_cost) IO2
FROM q q1, q q2
WHERE q1.sql_id = q2.sql_id
AND q1.plan_hash_value <> q2.plan_hash_value
AND q1.timestamp = TRUNC(SYSDATE)
AND q2.timestamp = TRUNC(SYSDATE-1)
GROUP BY q1.sql_id, q1.plan_hash_value
ORDER BY 1;

SQL_ID        PLAN_HASH_VALUE       CPU1       CPU2        IO1        IO2
------------- --------------- ---------- ---------- ---------- ----------
3fa5qf0s2drdd      2767040103 3607014905 3810733513 116239.645 115333.419
a0atg9n74357g      3187600203   19236075   19236075    132.680    132.680


WITH q AS (SELECT DISTINCT dhsp.sql_id, dhsp.plan_hash_value, dhsp.cpu_cost,
           dhsp.io_cost, TRUNC(dhsp.timestamp) TIMESTAMP
           FROM dba_hist_sql_plan dhsp)
SELECT q1.sql_id, q1.plan_hash_value, AVG(q1.cpu_cost) CPU1, AVG(q2.cpu_cost) CPU2,
       AVG(q1.io_cost) IO1, AVG(q2.io_cost) IO2
FROM q q1, q q2
WHERE q1.sql_id = q2.sql_id
AND q1.plan_hash_value <> q2.plan_hash_value
AND q1.timestamp = TRUNC(SYSDATE)
AND q2.timestamp = TRUNC(SYSDATE-2)
GROUP BY q1.sql_id, q1.plan_hash_value
ORDER BY 1;

SQL_ID        PLAN_HASH_VALUE          CPU1           CPU2          IO1           IO2
------------- --------------- ------------- -------------- ------------ -------------
51tx2vm9uzvjr      1451871175 1088799691.00  505031090.000 5373.0000000 2749.25000000
9459585sd9f5u      3663801048    3571037.06     530616.444   17.4444444    6.38888889
a0atg9n74357g      3187600203   19236075.00   19234713.800  132.6800000  132.68000000
a0f6w1v53wjcd      2196979368   19766036.30   19873817.800   52.3448276   52.34482760


WITH q AS (SELECT DISTINCT dhsp.sql_id, dhsp.plan_hash_value, dhsp.cpu_cost,
           dhsp.io_cost, TRUNC(dhsp.timestamp) TIMESTAMP
           FROM dba_hist_sql_plan dhsp)
SELECT q1.sql_id, q1.plan_hash_value, AVG(q1.cpu_cost) CPU1, AVG(q2.cpu_cost) CPU2,
       AVG(q1.io_cost) IO1, AVG(q2.io_cost) IO2
FROM q q1, q q2
WHERE q1.sql_id = q2.sql_id
AND q1.plan_hash_value <> q2.plan_hash_value
AND q1.timestamp = TRUNC(SYSDATE)
AND q2.timestamp = TRUNC(SYSDATE-3)
GROUP BY q1.sql_id, q1.plan_hash_value
ORDER BY 1;

SQL_ID        PLAN_HASH_VALUE          CPU1           CPU2            IO1             IO2
------------- --------------- ------------- -------------- -------------- ---------------
3fa5qf0s2drdd      2767040103 3607014905.00 3813338992.000 116239.6450000 115534.32300000
7fkw2jry3hg0b       958654912 1009699385.00  277742149.000  43568.4000000   5346.68182000
7srus1y8yb52a      2995177599   95936343.60     851028.360   1151.0000000      8.63636364
9459585sd9f5u      3663801048    3571037.06     530660.889     17.4444444      6.38888889
a0atg9n74357g      3187600203   19236075.00   19229652.400     132.680000    132.68000000
 
Recovery
Make sure substantial overhead is not being devoted to a hung recovery

As you can see from this real-world example ... it was going to be a very long time before recovery completed due to a bug.
SELECT inst_id, sid, serial#, qcsid, degree, req_degree
FROM gv$px_session
ORDER BY qcsid;

SELECT inst_id, usn, state, undoblockstotal Total, undoblocksdone Done, undoblockstotal-undoblocksdone "ToDo",
DECODE(cputime, 0, 'Unknown', SYSDATE + (((undoblockstotal-undoblocksdone) / (undoblocksdone/cputime))/86400)) "Est Time To Complete"
FROM gv$fast_start_transactions
WHERE state = 'RECOVERING'
ORDER BY 7;

If there is an issue the output might look like this:

INST_ID  USN STATE          Total Done        ToDo Est Time To Complete
------- ---- ---------- -------- -------- -------- --------------------
      6 1956 RECOVERING    29472    22833     6639 02-JUN-2011 14:24:25
      6 1949 RECOVERING     2098      808     1290 03-JUN-2011 11:06:01
      6 2110 RECOVERING 15412860     7878 15404982 07-JUN-2011 05:29:45
      6  392 RECOVERING 28101467      505 28100962 17-OCT-2011 14:09:21
Diving in deeper

SELECT ktuxeusn, TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') Time, ktuxesiz, ktuxesta
FROM x$ktuxe
WHERE ktuxecfl = 'DEAD';

 KTUXEUSN  Time                   KTUXESIZ KTUXESTA
---------- -------------------- ---------- --------
      2110 01-JUN-2011 08:13:43   16848468 ACTIVE
 
Resource Manager
Resource Manager throttling can lead to work being queued rather than running. This SQL will produce a minute-by-minute throttling profile; the listing shows 15 minutes.

SELECT dbms_utility.current_instance CURRENT_INSTANCE, TO_CHAR(begin_time, 'HH24:MI') TIME,
60 * (SELECT value FROM v$osstat WHERE stat_name = 'NUM_CPUS') TOTAL,
60 * (SELECT value FROM v$parameter WHERE name = 'cpu_count') DB_TOTAL,
SUM(cpu_consumed_time) / 1000 CONSUMED, SUM(cpu_wait_time) / 1000 THROTTLED
FROM v$rsrcmgrmetric_history
GROUP BY begin_time
ORDER BY 2;

CURRENT_INSTANCE TIME       TOTAL   DB_TOTAL   CONSUMED  THROTTLED
---------------- ----- ---------- ---------- ---------- ----------
               8 03:39       3600       3120   2009.709     18.628
               8 03:40       3600       3120   2865.319   1536.380
               8 03:41       3600       3120   2902.345   1603.648
               8 03:42       3600       3120   1399.993     33.981
               8 03:43       3600       3120   1932.755          0
               8 03:44       3600       3120   2620.447    427.551
               8 03:45       3600       3120   2859.545    923.482
               8 03:46       3600       3120   2605.484      8.523
               8 03:47       3600       3120   2101.087          0
               8 03:48       3600       3120   1913.453     82.561
               8 03:49       3600       3120   1887.246     24.277
               8 03:50       3600       3120   2724.415   2383.994
               8 03:51       3600       3120   2513.275    173.410
               8 03:52       3600       3120   1814.456          0
               8 03:55       3600       3120   2185.224     54.734
 
Result Cache
Library References

Look for result cache information in the "Related Topics" section at the bottom of this page.

PL/SQL demos on the DBMS_RESULT_CACHE page
SQL demos on the DBMS_RESULT_CACHE page
The RESULT_CACHE section on the TABLES page
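As a quick taste before visiting those pages, a sketch of the SQL result cache hint and the view that shows what the cache is holding (assumes RESULT_CACHE_MODE is at its default of MANUAL):

SELECT /*+ RESULT_CACHE */ program_id, COUNT(*)
FROM uwclass.airplanes
GROUP BY program_id;

SELECT type, status, name
FROM v$result_cache_objects
WHERE rownum <= 10;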
 
Session Statistics
Collect session statistics

conn uwclass/uwclass@pdbdev

SELECT sid FROM v$mystat WHERE rownum = 1;

SELECT *
FROM v$sesstat s, v$statname sn
WHERE s.statistic# = sn.statistic#
AND s.value > 0
AND s.sid = 145;

-- do some work then compare the change in values

SELECT *
FROM v$sesstat s, v$statname sn
WHERE s.statistic# = sn.statistic#
AND s.value > 0
AND s.sid = 145;
Examine session stats

col name format a15
col module format a20
col program format a20
col action format a20

SELECT s1.*, ses.module, ses.program, ses.action
FROM gv$session ses, (
  SELECT *
  FROM (
    SELECT ses.sid, ses.inst_id, stat.name, ses.value
    FROM gv$sesstat ses, gv$statname stat
    WHERE stat.statistic# = ses.statistic#
    AND LOWER(stat.name) LIKE LOWER('%calls to kcmgas%')
    AND ses.value >= 10
    AND ses.inst_id = stat.inst_id
    ORDER  BY 4 DESC)
  WHERE rownum < 31 ) s1
WHERE s1.sid = ses.sid
AND s1.inst_id = ses.inst_id;
Another query related to KCMGAS

WITH sesstat1 AS (
  SELECT s.inst_id, s.sid, s.serial#, s.module, s.program, s.action, s.osuser,
         s.schemaname, s.username, s.logon_time, n.name stat_name, ss.value value
  FROM gv$session s, gv$sesstat ss , v$statname n
  WHERE s.inst_id = ss.inst_id
  AND s.sid = ss.sid
  AND ss.statistic# = n.statistic#
  AND n.name IN ('redo size', 'calls to kcmgas', 'calls to get snapshot scn: kcmgss',
'db block changes', 'consistent changes', 'db block gets', 'consistent gets',
'rollback changes - undo records applied')),
     sesstat2 AS (
  SELECT s.inst_id, s.sid, s.serial#, s.module, s.program, s.action, s.osuser,
         s.schemaname, s.username, s.logon_time,
         CASE WHEN stat_name ='redo size' then value end redo_size,
         CASE WHEN stat_name ='calls to kcmgas' then value end kcmgas_calls,
         CASE WHEN stat_name ='calls to get snapshot scn: kcmgss'
         THEN value END kcmgss_calls,
         CASE WHEN stat_name ='db block changes' then value end db_block_changes,
         CASE WHEN stat_name ='consistent changes' then value end consistent_changes,
         CASE WHEN stat_name ='db block gets' then value end db_block_gets,
         CASE WHEN stat_name ='rollback changes - undo records applied'
         THEN value END rc_undo_applied
  FROM sesstat1 s)
  SELECT s.inst_id, s.sid, s.serial#, s.module, s.program, s.action, s.osuser,
         s.schemaname, s.username, s.logon_time, MAX(redo_size) redo_size,
         MAX(kcmgas_calls) kcmgas_calls, MAX(kcmgss_calls) kcmgss_calls,
         MAX(db_block_changes) db_block_changes,
         MAX(consistent_changes) consistent_changes,
         MAX(db_block_gets) db_block_gets,
         MAX(rc_undo_applied) rc_undo_applied
  FROM sesstat2 s
  GROUP BY s.inst_id, s.sid, s.serial#, s.module, s.program, s.action, s.osuser,
            s.schemaname, s.username, s.logon_time
  HAVING MAX(kcmgas_calls) > 0;
 
Setting CBO Statistics
Some joins are better than others.

conn scott/tiger@pdbdev

exec dbms_stats.gather_schema_stats(USER, cascade=>TRUE);

set autotrace on

SELECT DISTINCT d.deptno, d.dname
FROM dept d, emp e
WHERE e.deptno = d.deptno
ORDER BY 1;

SELECT d.deptno, d.dname
FROM dept d
WHERE EXISTS (
  SELECT NULL
  FROM emp e
  WHERE e.deptno = d.deptno)
  ORDER BY 1;

CREATE INDEX ix_emp_deptno
ON emp(deptno);

exec dbms_stats.set_table_stats(USER, 'EMP', numrows=>1000000, numblks=>10000, avgrlen=>74);

exec dbms_stats.set_index_stats(USER, 'ix_emp_deptno', numrows=>1000000, numlblks=>1000, numdist=>10000, clstfct=>1);

exec dbms_stats.set_column_stats(USER, 'emp', 'deptno', distcnt=>10000);

exec dbms_stats.set_table_stats(USER, 'dept', numrows=>100, numblks=>100);

-- repeat queries

exec dbms_stats.set_table_stats(USER, 'dept', numrows=>100000, numblks=>10000);

-- again repeat queries
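Because the numbers set above are fake, put real statistics back when the experiment is finished; a cleanup sketch (delete_table_stats cascades to column and index statistics by default):

exec dbms_stats.delete_table_stats(USER, 'EMP');
exec dbms_stats.delete_table_stats(USER, 'DEPT');

exec dbms_stats.gather_schema_stats(USER, cascade=>TRUE);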
 
Setting OPTIMIZER_INDEX_COST_ADJ
These queries provide a guideline, a starting point; tuning is the next step.

That said, this parameter is more than a decade old and changing it will likely do nothing positive.
SELECT ROUND((s.time_waited/e.time_waited)*100, 0)
optimizer_index_cost_adj
FROM v$system_event s, v$system_event e
WHERE s.event = 'db file sequential read'
AND e.event = 'db file scattered read';

SELECT ROUND(AVG(singleblkrdtim)/AVG(readtim-singleblkrdtim)*100,0)
optimizer_index_cost_adj
FROM v$filestat;
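If, despite that caveat, you want to experiment with the value the queries suggest, change it at the session level first so nothing changes system-wide until the plans have been compared; a sketch using 50 purely as an example:

ALTER SESSION SET optimizer_index_cost_adj = 50;

-- re-run and compare the plans of the problem statements, then return to the default
ALTER SESSION SET optimizer_index_cost_adj = 100;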
 
SQL Statements
Access Objects

set linesize 131
col object format a20
col object_type format a11
col owner format a20
col username format a10
col osuser format a25

SELECT a.object, a.type OBJECT_TYPE , a.owner, s.username, s.osuser, s.status, s.type USER_TYPE
FROM gv$access a, gv$session s
WHERE a.sid = s.sid
ORDER BY 2,1;
Resources

desc gv$resource_limit

set linesize 121
col event format a30

SELECT *
FROM gv$resource_limit
ORDER BY 2,1;
Session Environment

desc gv$ses_optimizer_env

set linesize 121
col event format a30

SELECT s.inst_id, oe.sid, id, name, isdefault, value
FROM gv$ses_optimizer_env oe, gv$session s
WHERE oe.sid = s.sid
AND s.service_name <> 'SYS$BACKGROUND';
SQL Stats History

Written by Tim Gorman and republished here only so that it doesn't disappear should Tim someday drop the original 2016 posting.
col snap_time format a12 truncate heading "Snapshot|Time"
col execs format 999,990 heading "Execs"
col lio_per_exec format 999,999,999,990.00 heading "Avg LIO|Per Exec"
col pio_per_exec format 999,999,999,990.00 heading "Avg PIO|Per Exec"
col cpu_per_exec format 999,999,999,990.00 heading "Avg|CPU (secs)|Per Exec"
col ela_per_exec format 999,999,999,990.00 heading "Avg|Elapsed (secs)|Per Exec"
ttitle center 'Summary Execution Statistics Over Time'

SELECT TO_CHAR(s.begin_interval_time, 'DD-MON HH24:MI') snap_time,
ss.executions_delta execs,
ss.buffer_gets_delta/decode(ss.executions_delta,0,1,ss.executions_delta) lio_per_exec,
ss.disk_reads_delta/decode(ss.executions_delta,0,1,ss.executions_delta) pio_per_exec,
( ss.cpu_time_delta/1000000)/decode(ss.executions_delta,0,1,ss.executions_delta) cpu_per_exec,
(ss.elapsed_time_delta/1000000)/decode(ss.executions_delta,0,1,ss.executions_delta) ela_per_exec
FROM dba_hist_snapshot s, dba_hist_sqlstat ss
WHERE ss.dbid = s.dbid
AND ss.instance_number = s.instance_number
AND ss.snap_id = s.snap_id
AND ss.sql_id = 'f705bwx3q0ydq'
AND ss.executions_delta > 0
AND s.begin_interval_time >= SYSDATE - 3/24
ORDER BY s.snap_id;

                                Summary Execution Statistics Over Time
                                                  Avg              Avg
Snapshot               Avg LIO    Avg PIO  CPU (secs)   Elapsed (secs)
Time           Execs  Per Exec   Per Exec    Per Exec         Per Exec
------------ ------- ---------- --------- -----------  ---------------
07-SEP 14:00       1  14,280.00      0.00        0.01             0.02
07-SEP 15:04       1  14,280.00      0.00        0.03             0.02
07-SEP 16:00       1  14,280.00      0.00        0.01             0.02
Wait Times

SELECT sid, schemaname
FROM gv$session
ORDER BY 2;

set linesize 121
col event format a30

SELECT inst_id, seq#, event, p1, p2, p3, wait_time
FROM gv$session_wait_history
WHERE sid = 158;
Wait Time Trend Analysis

col interval format a20

SELECT TO_CHAR(begin_interval_time,'YYYY_MM_DD HH24:MI') AS interval,
dhse.instance_number, time_waited_micro - LAG(time_waited_micro, 1, 0) OVER (ORDER BY dhse.instance_number, dhse.snap_id) AS time_waited,
total_waits - LAG(total_waits, 1, 0) OVER (ORDER BY dhse.instance_number, dhse.snap_id) AS total_waits
FROM dba_hist_snapshot dhs, dba_hist_system_event dhse
WHERE dhs.snap_id = dhse.snap_id
AND dhs.instance_number = dhse.instance_number
AND event_name = 'db file sequential read'
ORDER BY 1,2;
What happened during the execution of a SQL statement

set linesize 121
col username format a8
col name format a60

SELECT s.sid, s.serial#, s.username, sn.name, ms.value
FROM gv$statname sn, gv$mystat ms, gv$session s
WHERE MS.SID = s.sid
AND ms.statistic# = sn.statistic#
ORDER BY 4;

-- run your SQL statement here

SELECT s.sid, s.serial#, s.username, sn.name, ms.value
FROM gv$statname sn, gv$mystat ms, gv$session s
WHERE MS.SID = s.sid
AND ms.statistic# = sn.statistic#
ORDER BY 4;
 
Startup Parameters
Examine init parameters and consider the effect of modifying one or more of them after testing.

SELECT name, value
FROM gv$parameter
WHERE (name LIKE '%optimize%');
 
System Change Number (SCN) Calls
Requests for an SCN are identified in the Oracle database as kcmgas calls.

SELECT a.*, b.value "calls to kcmgas",
b.value/ DECODE(((SYSDATE-a.logon_time)*86400),0,1, ((SYSDATE-a.logon_time)*86400))"AVG calls to kcmgas/sec"
FROM (
  SELECT inst_id, SID,username,program, module, action,logon_time, sql_id, event
  FROM gv$session) a,
     (
  SELECT *
  -- statistic# 142 is assumed here to be 'calls to kcmgas'; verify the number in v$statname for your release
  FROM gv$sesstat WHERE statistic# = 142 AND value > 0) b
WHERE a.inst_id = b.inst_id
AND a.sid = b.sid
ORDER BY b.value/(SYSDATE - a.logon_time)*86400 DESC;
 
Temp Space Usage
Monitor closely, especially with RAC, to identify unbalanced cluster loading.

SELECT inst_id, tablespace, SUM(blocks)
FROM gv$tempseg_usage
GROUP BY inst_id, tablespace
ORDER BY 2,1;

INST_ID TABLESPACE               SUM(BLOCKS)
------- ------------------------ -----------
      1 TEMP01                        107680
      2 TEMP01                          2720
      3 TEMP01                        775968
      4 TEMP01                        387200
      5 TEMP01                        161984
      6 TEMP01                            32
      1 TEMP_ETL01                   1870880
      2 TEMP_ETL01                   2146848
      3 TEMP_ETL01                  15294784
      4 TEMP_ETL01                   4019232
      5 TEMP_ETL01                   2353760
      4 TEMP_GEN01                       192
      1 TEMP_LDR01                       160
      2 TEMP_LDR01                        32
      3 TEMP_LDR01                       128


SELECT b.inst_id, b.tablespace_name TSNAME, b.bytes_cached_gb,
ROUND((b.bytes_cached_gb/a.size_gb)*100,2) cached_perc, b.bytes_used_gb,
ROUND((b.bytes_used_gb/a.size_gb)*100,2) bytes_used_perc,
(b.bytes_cached_gb - b.bytes_used_gb) free_cached_gb,
ROUND(((b.bytes_cached_gb-b.bytes_used_gb)/a.size_gb)*100,2) free_perc,
a.size_gb tablespace_size_gb
FROM (
  SELECT tablespace_name, round(sum(bytes)/1024/1024/1024) size_gb
  FROM dba_temp_files
  GROUP BY tablespace_name) a,
     (
  SELECT inst_id, tablespace_name, ROUND(SUM(bytes_cached)/1024/1024/1024) bytes_cached_gb,
  ROUND(sum(bytes_used)/1024/1024/1024) bytes_used_gb
  FROM gV$TEMP_EXTENT_POOL
  GROUP BY inst_id, tablespace_name) b
WHERE a.tablespace_name=b.tablespace_name
ORDER BY 2;
Queries using more than 1GB of temp space

SELECT gv$session.inst_id, sid, serial#, username, logon_time,
ROUND((SYSDATE-logon_time)*1440) timeonline_min, a.*, c.tempsize_GB
FROM gv$session, (
  SELECT inst_id, sql_id, ROUND(SUM(tempseg_size) /1024/1024/1024) tempsize_GB
  FROM gv$sql_workarea_active
  WHERE tempseg_size > 1000000000
  GROUP BY inst_id, sql_id) c, (
    SELECT inst_id instance_number,sql_id, sql_text,
    ROUND((conc_wait_sec_exec / elap_sec_exec)*100) con_perc,
    ROUND((clu_wait_sec_exec / elap_sec_exec)*100) clust_perc,
    ROUND((user_io_wait_sec_exec / elap_sec_exec)*100) io_perc,
    conc_wait_sec_exec, clu_wait_sec_exec, user_io_wait_sec_exec,
    cpu_time_sec_exec, elap_sec_exec, buffer_gets,
    ROUND((buffer_gets*32768)/1024/1024/1024) buffer_gb,  -- assumes a 32KB block size
    disk_reads, rows_processed, module,service, action
    FROM (
      SELECT inst_id, sql_id, sql_text,
      ROUND((concurrency_wait_time/1000000)/DECODE(executions,NULL,1,0,1, executions),2) conc_wait_sec_exec,
      ROUND((cluster_wait_time/1000000)/DECODE(executions,NULL,1,0,1,executions),2) clu_wait_sec_exec,
      ROUND((user_io_wait_time/1000000)/DECODE(executions,NULL,1,0,1,executions),2) user_io_wait_sec_exec,
      ROUND((direct_writes/DECODE(executions,NULL,1,0,1,executions)),2) direct_writes_exec,
      ROUND((cpu_time/1000000)/DECODE(executions,NULL,1,0,1,executions),2) cpu_time_sec_exec,
      ROUND(( elapsed_time/1000000)/DECODE(executions,NULL,1,0,1,executions),2) elap_sec_exec,
      ROUND((io_interconnect_bytes/DECODE(executions,NULL,1,0,1,executions)),2) io_inter_by_exec,
      concurrency_wait_time, cluster_wait_time, user_io_wait_time, direct_writes,
      cpu_time, elapsed_time, io_interconnect_bytes,
      ROUND(sorts/DECODE(executions,NULL,1,0,1,executions),2) sort_exec,
      fetches, rows_processed, executions, parse_calls,
      ROUND(disk_reads/DECODE(executions,NULL,1,0,1,executions),2) disk_exec,
      ROUND(buffer_gets/DECODE(executions,NULL,1, 0,1,executions),2) buff_exec,
      service, module, action, buffer_gets, disk_reads
      FROM gv$sql
      WHERE users_opening > 0
      AND elapsed_time/DECODE(executions, NULL, 1, 0, 1, executions) >= 30000000)
    ) a
WHERE a.sql_id=gv$session.sql_id
AND a.instance_number = gv$session.inst_id
AND a.sql_id = c.sql_id
AND a.instance_number = c.inst_id;
 
Time Modeling
System Time Model

col value format 999999999999

SELECT stat_name, value
FROM v$sys_time_model
ORDER BY 2;
Session Time Model

col value format 999999999999

SELECT stat_name, value
FROM v$sess_time_model
WHERE sid = (SELECT sid FROM v$mystat WHERE rownum = 1)
ORDER BY 2;

-- follow up by working with dba_hist_active_sess_history
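A sketch of the kind of follow-up meant here: the most-sampled wait events in dba_hist_active_sess_history over the last day (access to the DBA_HIST views requires the Diagnostics Pack license):

SELECT event, COUNT(*) SAMPLES
FROM dba_hist_active_sess_history
WHERE sample_time > SYSDATE - 1
AND session_state = 'WAITING'
GROUP BY event
ORDER BY 2 DESC
FETCH FIRST 10 ROWS ONLY;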
 
Unspecified and other Wait Events
Find sessions with waits exceeding a threshold

SELECT inst_id, username, sid, serial#, event, state, seconds_in_wait secs_in_wait
FROM gv$session
WHERE service_name <> 'SYS$BACKGROUND'
AND seconds_in_wait >= 5
ORDER BY 7;
With Database 11g Oracle created the "unspecified wait event" category to cover many otherwise uninstrumented waits. This statement will give you a window into them.

I have not yet seen this in 12c so perhaps it has been remapped: time and experience will tell.
SELECT inst_id, program, module, action, ROUND(SUM(time_waited)/1000000) wait_in_secs, COUNT(*)
FROM (
  SELECT sysdate, ash.inst_id, ash.sample_time, ash.session_id, ash.session_serial#, ash.sql_id, ash.sql_plan_hash_value, ash.sql_plan_line_id, ash.sql_plan_options, ash.sql_exec_start, ash.event, ash.p1text, ash.p2text, ash.p3text, ash.wait_class, ash.wait_time, ash.session_state, ash.time_waited, ash.blocking_session, ash.current_obj#, ash.current_file#, ash.current_block#, ash.current_row#, ash.consumer_group_id, ash.remote_instance#, ash.program, ash.module, ash.action
FROM gv$active_session_history ash
WHERE ash.event = 'unspecified wait event')
GROUP BY inst_id, program, module, action
ORDER BY 5 DESC;

SELECT sysdate, inst_id, event, ROUND(SUM(time_waited)/1000000) wait_in_secs, COUNT(*)
FROM gv$active_session_history
GROUP BY sysdate, inst_id, event
HAVING SUM(time_waited) > 1000000000;
 
Windows
Kernel Tuning

The following was contained in a private note from Oakie Charles Hooper in Oct. 2008
By default, Windows servers are set up to provide fast performance for file sharing, which sets aside large portions of the RAM for file caching. You need to change this so that it is set to be optimized for network applications. In the Windows Control Panel, find the network settings (if a network card is listed, right-click it and select Properties), and then inside the network settings display the settings for "File and Printer Settings". Change the setting to be optimized for network applications. There is a chance that client computers may be disconnected briefly when this change is made.

There was a Microsoft document floating around which suggested changing a Windows registry setting (TcpAckFrequency) from the default to 13 in order to improve performance. The result of doing so is that a file which should transfer across a gigabit network from a Linux box to the Windows box in 3 seconds, actually requires 45 minutes to complete.

Related Topics
Bind Variables
Chained Rows
Clustering Factor
Cursor Sharing
DBMS_ADDM
DBMS_AUTO_SQLTUNE
DBMS_HPROF
DBMS_PROFILER
DBMS_RESULT_CACHE
DBMS_SQLTUNE
DBMS_STATS
Explain Plan
Hints
Histograms
JDBC Thin
Indexes
RAC
TABLES
TKPROF & Tracing
What's New In 21c
What's New In 23c
