Working with DBMS_STATS
Let's be honest here ... Oracle's default statistics collection job isn't worth the bytes used to construct it.
All of the power and flexibility built into the DBMS_STATS package is ignored in favour of a one-size-fits-all approach that is not going to be a good
solution unless you purchased your Oracle database licenses to track the sales and inventory of a one-person hotdog stand.
So this code, released under the GNU license, addresses real-world statistics collection issues by driving statistics collection from metadata tables
and by collecting metrics that can be used to continuously manage and improve the statistics collection cycle.
First, let's build the metadata tables and their related infrastructure. In the following code "SC" stands for "statistics
collection," "sch" is short for "schema," "seg" is short for "segment," and "inst" is the working
abbreviation for "instance."
Warning: Be aware of the implications of running DBMS_STATS and collecting histograms where histograms have not previously existed, or exist
but differ from the collection set in this application. A small number of histogram changes can have a major impact on an application.
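Before adopting a METHOD_OPT that creates histograms, it is worth inventorying what already exists. A quick check along these lines (the schema name is a placeholder):

-- list columns that already carry histograms in a given schema
SELECT owner, table_name, column_name, histogram
FROM dba_tab_col_statistics
WHERE owner = 'UWCLASS'  -- substitute your schema name
AND histogram <> 'NONE'
ORDER BY table_name, column_name;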
CREATE SEQUENCE sc_run_seq;
CREATE TABLE sc_schema_metadata (
schema_name VARCHAR2(30),
inst_id NUMBER(2),
where_clause VARCHAR2(4000) NOT NULL,
schema_priority NUMBER(1) DEFAULT 5 NOT NULL,
active_flag VARCHAR2(1) DEFAULT 'A' NOT NULL)
pctfree 0 pctused 99;
ALTER TABLE sc_schema_metadata
ADD CONSTRAINT pk_sc_sch_metadata
PRIMARY KEY (schema_name, inst_id);
ALTER TABLE sc_schema_metadata
ADD CONSTRAINT cc_sc_sch_metadata_inst
CHECK (inst_id BETWEEN 1 AND 8);
ALTER TABLE sc_schema_metadata
ADD CONSTRAINT cc_sc_sch_priority
CHECK (schema_priority BETWEEN 1 AND 9);
ALTER TABLE sc_schema_metadata
ADD CONSTRAINT cc_sc_sch_metadata_flag
CHECK (active_flag IN ('A','I'));
CREATE TABLE sc_segment_metadata (
schema_name VARCHAR2(30),
inst_id NUMBER(2),
table_name VARCHAR2(30),
est_pct NUMBER DEFAULT 1 NOT NULL,
block_sample VARCHAR2(5) DEFAULT 'TRUE' NOT NULL,
method_opt VARCHAR2(50) DEFAULT 'FOR ALL INDEXED COLUMNS SIZE 1' NOT NULL,
degree NUMBER(3) DEFAULT 2 NOT NULL,
granularity VARCHAR2(20) NOT NULL,
cascade VARCHAR2(5) DEFAULT 'TRUE' NOT NULL,
no_invalidate VARCHAR2(5) DEFAULT 'TRUE' NOT NULL,
stattype VARCHAR2(10) DEFAULT 'DATA' NOT NULL,
force VARCHAR2(5) DEFAULT 'TRUE' NOT NULL,
segment_priority NUMBER(1) DEFAULT 9 NOT NULL,
active_flag VARCHAR2(1) DEFAULT 'A' NOT NULL)
pctfree 0 pctused 99;
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT pk_sc_seg_metadata
PRIMARY KEY (schema_name, inst_id, table_name);
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT fk_sc_seg_schema_inst
FOREIGN KEY (schema_name, inst_id)
REFERENCES sc_schema_metadata(schema_name, inst_id)
INITIALLY DEFERRED DEFERRABLE;
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_metadata_estpct
CHECK (est_pct BETWEEN 0.000001 AND 100);
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_metadata_blksamp
CHECK (block_sample IN ('TRUE','FALSE'));
-- the following assumes a maximum allowed 12 degrees of parallelism for collection.
-- most likely you should use a far smaller number to avoid consuming too much CPU.
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_metadata_degree
CHECK (degree BETWEEN 0 AND 12);
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_granularity
CHECK (granularity IN ('AUTO', 'DEFAULT', 'GLOBAL', 'GLOBAL AND PARTITION', 'PARTITION', 'SUBPARTITION'));
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_cascade
CHECK (cascade IN ('TRUE','FALSE'));
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_no_invalidate
CHECK (no_invalidate IN ('TRUE','FALSE'));
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_stattype
CHECK (stattype IN ('DATA'));
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_force
CHECK (force IN ('TRUE','FALSE'));
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_segment_priority
CHECK (segment_priority BETWEEN 1 AND 9);
ALTER TABLE sc_segment_metadata
ADD CONSTRAINT cc_sc_seg_metadata_flag
CHECK (active_flag IN ('A','I'));
CREATE TABLE sc_event_log (
instance_id NUMBER,
run_no NUMBER,
program_name VARCHAR2(30),
line_number NUMBER,
beg_date DATE,
end_date DATE,
host_name VARCHAR2(64),
instance_name VARCHAR2(16),
active_instances NUMBER(3,0),
sql_errno VARCHAR2(6),
event_text VARCHAR2(256),
log_comment VARCHAR2(256))
PARTITION BY RANGE (instance_id) INTERVAL (1) (
PARTITION "ROOT_PAR" VALUES LESS THAN (2) PCTFREE 0 PCTUSED 99); |
Then the stats collection package header
CREATE OR REPLACE PACKAGE stats_collection AUTHID CURRENT_USER IS
/*******************************************************************************
* NAME: stats_collection *
* TYPE: Package Header *
* *
* PURPOSE: Metadata driven table and partition statistics collection with *
* associated logging and instrumentation *
* *
* Revision History: *
********************************************************************************
* Date        Author      Comments                                             *
* 01-AUG-2011 DA Morgan   Initial Release                                      *
*******************************************************************************/
TYPE event_array IS RECORD (
instance_id sc_event_log.instance_id%TYPE,
run_no sc_event_log.run_no%TYPE,
program_name sc_event_log.program_name%TYPE,
line_number sc_event_log.line_number%TYPE,
beg_date sc_event_log.beg_date%TYPE,
end_date sc_event_log.end_date%TYPE,
host_name sc_event_log.host_name%TYPE,
instance_name sc_event_log.instance_name%TYPE,
active_instances sc_event_log.active_instances%TYPE,
sql_errno sc_event_log.sql_errno%TYPE,
event_text sc_event_log.event_text%TYPE,
log_comment sc_event_log.log_comment%TYPE);
i NUMBER;
NumRecs INTEGER;
eArray event_array;
mArray mlib_metrics.metrics_array;
AbortExecution EXCEPTION;
InstanceNotRunning EXCEPTION;
InvalidObj EXCEPTION;
InvalidMethodOpt EXCEPTION;
InvalidSize EXCEPTION;
InvalidUpdate EXCEPTION;
FUNCTION get_version RETURN VARCHAR2;
PROCEDURE collect_table_stats(pInstID IN NUMBER);
PROCEDURE purge_stats_jobs;
PROCEDURE add_sc_schema(
pSchema IN dba_users.username%TYPE,
pInstID IN sc_schema_metadata.inst_id%TYPE,
pWClause IN sc_schema_metadata.where_clause%TYPE DEFAULT 'WHERE 1=1');
PROCEDURE drop_sc_schema(pSchema IN dba_users.username%TYPE);
PROCEDURE insert_log_event(pEArray IN event_array);
PROCEDURE update_log_event(pEArray IN event_array);
PROCEDURE log_metrics(pMArray IN mlib_metrics.metrics_array);
END stats_collection;
/
sho err
Then the stats collection package body
CREATE OR REPLACE PACKAGE BODY stats_collection IS
/*******************************************************************************
* NAME: stats_collection *
* TYPE: Package Body                                                           *
* *
* PURPOSE: Metadata driven table and partition statistics collection with *
* associated logging and instrumentation as well as an API for basic *
* DBMS_STATS package functionality *
* *
* Revision History: *
********************************************************************************
* Date Author Comments *
* 21-FEB-2011 DA Morgan   Version 1 Initial Release                            *
* 30-MAR-2011 DA Morgan   Version 2                                            *
* 12-AUG-2011 DA Morgan   Version 3                                            *
*******************************************************************************/
FUNCTION get_version RETURN VARCHAR2 IS
BEGIN
RETURN '3.0.1';
END get_version;
--------------------------------------------------------------------------------
PROCEDURE collect_table_stats(pInstID IN NUMBER) IS
sysRefCur SYS_REFCURSOR;
sdmrec sc_segment_metadata%ROWTYPE;
CURSOR simcur IS
SELECT /* stats_collection.collect_table_stats1 */ som.schema_name, som.where_clause
FROM sc_schema_metadata som
WHERE som.inst_id = pInstID
AND som.active_flag = 'A'
ORDER BY som.schema_priority, som.schema_name;
vErrMesg sc_metrics_log.failure_desc%TYPE;
vLADate dba_tables.last_analyzed%TYPE;
vMetricsAr sc_metrics_log%ROWTYPE;
-------------------------------------------------------------- private procedure
PROCEDURE child(
p2InstID IN sc_schema_metadata.inst_id%TYPE,
pSchemaName IN sc_schema_metadata.schema_name%TYPE,
pWhereClause IN sc_schema_metadata.where_clause%TYPE,
pRetCur OUT SYS_REFCURSOR,
pNumRecs OUT INTEGER)
IS
sqlStr VARCHAR2(2048);
BEGIN
sqlStr := 'SELECT COUNT(*) FROM sc_segment_metadata ' ||
' ' || pWhereClause || ' ' ||
' AND inst_id = :B1 ' ||
' AND schema_name = :B2 ' ||
' AND active_flag = ''A'' ';
execute immediate sqlStr
INTO pNumRecs
USING p2InstID, pSchemaName;
OPEN pRetCur FOR 'SELECT * FROM sc_segment_metadata ' ||
' ' || pWhereClause || ' ' ||
' AND schema_name = ''' || pSchemaName || ''' ' ||
' AND inst_id = ' || p2InstID || ' ' ||
' AND active_flag = ''A'' ';
END child;
-------------------------------------------------------- private inline function
FUNCTION to_boolean(pString IN VARCHAR2) RETURN BOOLEAN DETERMINISTIC IS
BEGIN
IF pString = 'TRUE' THEN
RETURN TRUE;
ELSIF pString = 'FALSE' THEN
RETURN FALSE;
ELSE
RETURN FALSE;
END IF;
END to_boolean;
-------------------------------------------------------- private inline function
/*
go_nogo assumes date-range partition names carry a granularity suffix:
%MO (monthly), %WK (weekly), or %DY (daily). Partitions older than the
per-suffix threshold are skipped; any segment whose high_value cannot be
parsed as a date is always analyzed (see the exception handler).
*/
FUNCTION go_nogo(pSegName IN VARCHAR2, pDateVal IN LONG) RETURN BOOLEAN DETERMINISTIC IS
vAnalyze BOOLEAN := TRUE;
vDateVal DATE;
BEGIN
-- convert long to date
vDateVal := TO_DATE(SUBSTR(TO_CHAR(pDateVal),11,19), 'SYYYY-MM-DD HH24:MI:SS');
IF pSegName LIKE '%MO' THEN
IF vDateVal < SYSDATE-365 THEN -- ignore partitions more than 365 days old
vAnalyze := FALSE;
END IF;
ELSIF pSegName LIKE '%WK' THEN
IF vDateVal < SYSDATE-21 THEN -- ignore partitions more than 21 days old
vAnalyze := FALSE;
END IF;
ELSIF pSegName LIKE '%DY' THEN
IF vDateVal < SYSDATE-14 THEN -- ignore partitions more than 14 days old
vAnalyze := FALSE;
END IF;
ELSE
IF vDateVal < SYSDATE-14 THEN -- ignore partitions more than 14 days old
vAnalyze := FALSE;
END IF;
END IF;
RETURN vAnalyze;
EXCEPTION
WHEN OTHERS THEN -- if not a date range partition analyze always
RETURN TRUE;
END go_nogo;
--------------------------------------------------------------------------------
BEGIN
-- if already running on the instance do not run
SELECT COUNT(*)
INTO i
FROM gv$session gs
WHERE gs.client_info = 'SC' || TO_CHAR(pInstID);
IF i <> 0 THEN
RAISE AbortExecution;
ELSE
mlib_utils.set_clientinfo('SC' || TO_CHAR(pInstID));
mlib_utils.set_module('stats_collection.collect_table_stats', 'log start');
earray.run_no := sc_run_seq.NEXTVAL;
earray.program_name := 'COLLECT_TABLE_STATS' || TO_CHAR(pInstID);
earray.beg_date := SYSDATE;
insert_log_event(eArray);
marray.run_no := earray.run_no;
END IF;
-- purge both logs of obsolete rows
mlib_utils.set_module('stats_collection.collect_table_stats', 'Purge Events Log');
DELETE /* stats_collection.collect_table_stats2 */ sc_event_log WHERE end_date < TRUNC(SYSDATE-90);
mlib_utils.set_module('stats_collection.collect_table_stats', 'Purge Metrics Log');
DELETE /* stats_collection.collect_table_stats3 */ sc_metrics_log WHERE end_run < TRUNC(SYSDATE-30);
COMMIT;
mlib_utils.set_module('stats_collection.collect_table_stats', 'Purge Completed');
FOR simrec IN simcur LOOP
child(pInstID, simrec.schema_name, simrec.where_clause, sysRefCur, NumRecs);
FOR i IN 1..NumRecs LOOP
FETCH sysRefCur INTO sdmrec;
-- start table-level metrics collection
mArray.schema_name := simrec.schema_name;
mArray.seg_name := sdmrec.table_name;
mArray.beg_run := SYSTIMESTAMP;
mArray.cpu_time := mlib_metrics.get_CPU;
mArray.phys_reads := mlib_metrics.get_PhysRead_Bytes;
mArray.blocks_changed := mlib_metrics.get_BlockChanges;
mArray.outcome := 'S';
IF sdmrec.granularity = 'PARTITION' THEN
-- process partitions segments
DECLARE
CURSOR acur is
SELECT /* stats_collection.launch_collection2 */
dtp.partition_name, dtp.high_value
FROM dba_tab_partitions dtp
WHERE dtp.table_owner = sdmrec.schema_name
AND dtp.table_name = sdmrec.table_name
AND ((dtp.last_analyzed IS NULL) OR (dtp.last_analyzed < SYSDATE-12/24))
ORDER BY partition_name;
BEGIN
-- loop through qualified partitions
FOR arec IN acur LOOP
mlib_utils.set_module(sdmrec.schema_name, sdmrec.table_name || ':' ||
arec.partition_name);
PRAGMA INLINE(GO_NOGO, 'YES');
IF go_nogo(arec.partition_name, arec.high_value) THEN
-- process qualified partitions
PRAGMA INLINE(TO_BOOLEAN, 'YES');
dbms_stats.gather_table_stats(sdmrec.schema_name, sdmrec.table_name,
arec.partition_name, sdmrec.est_pct, TO_BOOLEAN(sdmrec.block_sample),
sdmrec.method_opt, sdmrec.degree, sdmrec.granularity,
TO_BOOLEAN(sdmrec.cascade), NULL, NULL, NULL, TO_BOOLEAN(sdmrec.no_invalidate),
sdmrec.stattype, TO_BOOLEAN(sdmrec.force));
dbms_stats.lock_partition_stats(sdmrec.schema_name, sdmrec.table_name,
arec.partition_name);
END IF;
END LOOP;
EXCEPTION
WHEN OTHERS THEN
mArray.failure_desc := 'PT: ' || sdmrec.table_name;
mArray.outcome := 'F'; -- record the failure and move to the next table
END;
ELSIF sdmrec.granularity = 'GLOBAL' THEN
-- process non-partitioned segments
BEGIN
mlib_utils.set_module(sdmrec.schema_name, sdmrec.table_name);
SELECT /* stats_collection.launch_collection3 */ dt.last_analyzed
INTO vLADate
FROM dba_tables dt
WHERE dt.owner = sdmrec.schema_name
AND dt.table_name = sdmrec.table_name;
IF (vLADate IS NULL) OR vLADate < SYSDATE-12/24 THEN
-- process the table and lock table stats
PRAGMA INLINE(TO_BOOLEAN, 'YES');
dbms_stats.gather_table_stats(sdmrec.schema_name, sdmrec.table_name, NULL,
sdmrec.est_pct, TO_BOOLEAN(sdmrec.block_sample), sdmrec.method_opt, sdmrec.degree,
sdmrec.granularity, TO_BOOLEAN(sdmrec.cascade), NULL, NULL, NULL,
TO_BOOLEAN(sdmrec.no_invalidate), sdmrec.stattype, TO_BOOLEAN(sdmrec.force));
dbms_stats.lock_table_stats(sdmrec.schema_name, sdmrec.table_name);
END IF;
EXCEPTION
WHEN OTHERS THEN
mArray.failure_desc := 'HT: ' || sdmrec.table_name || ' - ' || SQLERRM;
mArray.outcome := 'F'; -- record the failure and move to the next table
END;
ELSE -- trap for a granularity that the program is not designed to handle
mArray.failure_desc := 'Unknown Granularity: ' || sdmrec.granularity;
mArray.outcome := 'F'; -- record the failure and move to the next table
END IF;
-- record table-level metrics and loop for next table
mArray.cpu_time := (mlib_metrics.get_CPU - mArray.cpu_time);
mArray.phys_reads := (mlib_metrics.get_PhysRead_Bytes - mArray.phys_reads);
mArray.blocks_changed := (mlib_metrics.get_BlockChanges - mArray.blocks_changed);
log_metrics(mArray);
END LOOP;
-- record owner-level events
earray.end_date := SYSDATE;
earray.sql_errno := 0;
earray.event_text := 'stats_collection.collect_table_stats';
earray.log_comment := 'Stats Collection Success';
update_log_event(eArray);
END LOOP;
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
EXCEPTION
WHEN AbortExecution THEN
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
RAISE_APPLICATION_ERROR(-20001, 'Statistics Collection Is Already Running On This Node');
WHEN OTHERS THEN
vErrMesg := SUBSTR(SQLERRM,1,256);
mArray.cpu_time := (mlib_metrics.get_CPU - mArray.cpu_time);
mArray.phys_reads := (mlib_metrics.get_PhysRead_Bytes - mArray.phys_reads);
mArray.blocks_changed := (mlib_metrics.get_BlockChanges - mArray.blocks_changed);
mArray.failure_desc := vErrMesg;
mArray.outcome := 'F';
log_metrics(mArray);
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
END collect_table_stats;
--------------------------------------------------------------------------------
PROCEDURE purge_stats_jobs IS
/*******************************************************************************
* Procedure PURGE_STATS_JOBS business rules *
* Loop through all jobs with the prefix "SCJOB" and drop them with the FORCE *
* parameter set to TRUE *
*******************************************************************************/
CURSOR sjcur IS
SELECT dsj.job_name
FROM dba_scheduler_jobs dsj
WHERE dsj.job_name LIKE 'SCJOB%';
BEGIN
FOR sjrec IN sjcur LOOP
dbms_scheduler.drop_job(sjrec.job_name, TRUE);
END LOOP;
END purge_stats_jobs;
--------------------------------------------------------------------------------
PROCEDURE insert_log_event(pEArray IN event_array) IS
/*******************************************************************************
* Procedure INSERT_LOG_EVENT business rules *
* 1. if any array elements are null provide the missing values *
* 2. provide missing values and insert the results into the event log table *
*******************************************************************************/
leArray event_array;
vInstCount sc_event_log.active_instances%TYPE;
vInstTab dbms_utility.instance_table;
vLineNumber all_source.line%TYPE;
vObjType all_objects.object_type%TYPE;
vPkgName all_objects.object_name%TYPE;
vSchemaName all_objects.owner%TYPE;
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
owa_util.who_called_me(vSchemaName, vPkgName, vLineNumber, vObjType);
IF dbms_utility.is_cluster_database THEN
dbms_utility.active_instances(vInstTab, vInstCount);
END IF;
leArray.instance_id := COALESCE(pEArray.instance_id, sys_context('USERENV', 'INSTANCE'));
leArray.run_no := pEArray.run_no;
leArray.program_name := pEArray.program_name;
leArray.line_number := vLineNumber;
leArray.beg_date := COALESCE(pEArray.beg_date, SYSDATE);
leArray.end_date := COALESCE(pEArray.end_date, SYSDATE);
leArray.host_name := sys_context('USERENV', 'SERVER_HOST');
leArray.instance_name := sys_context('USERENV', 'INSTANCE_NAME');
leArray.active_instances := vInstCount;
leArray.sql_errno := pEArray.sql_errno;
leArray.event_text := pEArray.event_text;
leArray.log_comment := pEArray.log_comment;
INSERT /* stats_collection.INSERT_LOG_EVENT */
INTO sc_event_log
VALUES leArray;
COMMIT;
END insert_log_event;
--------------------------------------------------------------------------------
PROCEDURE update_log_event(pEArray IN event_array) IS
/*******************************************************************************
* Procedure UPDATE_LOG_EVENT business rules *
* 1. if any array elements are null provide the missing values *
* 2. insert the results into the event log table *
*******************************************************************************/
leArray event_array;
vInstCount sc_event_log.active_instances%TYPE;
vInstTab dbms_utility.instance_table;
vLineNumber all_source.line%TYPE;
vObjType all_objects.object_type%TYPE;
vPkgName all_objects.object_name%TYPE;
vSchemaName all_objects.owner%TYPE;
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
owa_util.who_called_me(vSchemaName, vPkgName, vLineNumber, vObjType);
UPDATE /* stats_collection.UPDATE_LOG_EVENT */ sc_event_log
SET end_date = COALESCE(pEArray.end_date,SYSDATE),
line_number = vLineNumber,
sql_errno = pEArray.sql_errno,
event_text = pEArray.event_text,
log_comment = pEArray.log_comment
WHERE run_no = pEArray.run_no;
COMMIT;
END update_log_event;
--------------------------------------------------------------------------------
PROCEDURE log_metrics(pMArray IN mlib_metrics.metrics_array) IS
/*******************************************************************************
* Procedure LOG_METRICS business rules                                         *
* 1. if any array elements are null provide the missing values                 *
* 2. insert the completed metrics row into the metrics log table               *
*******************************************************************************/
lmArray mlib_metrics.metrics_array;
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
lmArray.instance_id := COALESCE(pMArray.instance_id, sys_context('USERENV', 'INSTANCE'));
lmArray.run_no := COALESCE(pMArray.run_no, sc_run_seq.NEXTVAL);
lmArray.service_name := sys_context('USERENV', 'SERVICE_NAME');
lmArray.schema_name := pMArray.schema_name;
lmArray.seg_name := pMArray.seg_name;
lmArray.beg_run := COALESCE(pMArray.beg_run, SYSTIMESTAMP);
lmArray.end_run := COALESCE(pMArray.end_run, SYSTIMESTAMP);
-- the caller (collect_table_stats) has already converted these to deltas
lmArray.cpu_time := COALESCE(pMArray.cpu_time, 0);
lmArray.phys_reads := COALESCE(pMArray.phys_reads, 0);
lmArray.blocks_changed := COALESCE(pMArray.blocks_changed, 0);
lmArray.failure_desc := pMArray.failure_desc;
lmArray.outcome := COALESCE(pMArray.outcome,'F');
INSERT /* stats_collection.LOG_METRICS */
INTO sc_metrics_log
VALUES lmArray;
COMMIT;
END log_metrics;
--------------------------------------------------------------------------------
PROCEDURE add_sc_schema(
pSchema IN dba_users.username%TYPE,
pInstID IN sc_schema_metadata.inst_id%TYPE,
pWClause IN sc_schema_metadata.where_clause%TYPE DEFAULT 'WHERE 1=1')
IS
eiStr VARCHAR2(4000);
BEGIN
-- if any stats collection jobs are running do not run
SELECT COUNT(*)
INTO i
FROM gv$session
WHERE client_info LIKE 'SC%';
IF i <> 0 THEN
RAISE AbortExecution;
ELSE
mlib_utils.set_clientinfo('SC_ADD');
mlib_utils.set_module('stats_collection.add_sc_schema', 'adding schema');
END IF;
INSERT INTO sc_schema_metadata
(schema_name, inst_id, where_clause, schema_priority, active_flag)
VALUES
(pSchema, pInstID, pWClause, 9, 'A');
-- no APPEND hints here: the second INSERT reads sc_segment_metadata
-- before the COMMIT, which a direct-path insert would break (ORA-12838)
INSERT INTO sc_segment_metadata
SELECT /* stats_collection.load_metadata_tables16 */
DISTINCT dtp.table_owner, sim.inst_id, dtp.table_name, 1,
'TRUE', 'FOR ALL INDEXED COLUMNS SIZE 1', 1, 'PARTITION',
'TRUE', 'TRUE', 'DATA', 'TRUE', 9, 'A'
FROM dba_tab_partitions dtp, sc_schema_metadata sim
WHERE dtp.table_owner = sim.schema_name
AND sim.schema_name = pSchema
AND sim.inst_id = pInstID
AND dtp.table_name NOT LIKE 'ERR%'
AND dtp.table_name NOT LIKE 'EXT%'
AND dtp.table_name NOT LIKE 'TMP%'
AND dtp.table_name NOT LIKE 'UPGRADE%';
INSERT INTO sc_segment_metadata
SELECT /* stats_collection.load_metadata_tables17 */ DISTINCT dt.owner, sim.inst_id,
dt.table_name, 1, 'TRUE',
'FOR ALL INDEXED COLUMNS SIZE 1', 1, 'GLOBAL', 'TRUE', 'TRUE',
'DATA', 'TRUE', 9, 'A'
FROM dba_tables dt, sc_schema_metadata sim
WHERE dt.owner = sim.schema_name
AND sim.schema_name = pSchema
AND sim.inst_id = pInstID
AND (dt.owner, dt.table_name) NOT IN (
SELECT schema_name, table_name
FROM sc_segment_metadata)
AND dt.table_name NOT LIKE 'ERR%'
AND dt.table_name NOT LIKE 'EXT%'
AND dt.table_name NOT LIKE 'TMP%'
AND dt.table_name NOT LIKE 'UPGRADE%';
COMMIT;
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
EXCEPTION
WHEN OTHERS THEN
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
RAISE; -- do not silently swallow failures
END add_sc_schema;
--------------------------------------------------------------------------------
PROCEDURE drop_sc_schema(pSchema IN dba_users.username%TYPE) IS
BEGIN
-- if any stats collection or load jobs are running do not run
SELECT COUNT(*)
INTO i
FROM gv$session
WHERE client_info LIKE 'SC%';
IF i <> 0 THEN
RAISE AbortExecution;
ELSE
earray.beg_date := SYSDATE;
mlib_utils.set_clientinfo('SC_DROP');
mlib_utils.set_module('stats_collection.drop_sc_schema', 'dropping schema');
END IF;
DELETE sc_segment_metadata WHERE schema_name = pSchema;
DELETE sc_schema_metadata WHERE schema_name = pSchema;
COMMIT;
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
EXCEPTION
WHEN OTHERS THEN
earray.sql_errno := sqlcode;
earray.event_text := 'stats_collection.drop_sc_schema';
earray.end_date := SYSDATE;
earray.log_comment := 'When Others Then Capture';
insert_log_event(eArray); -- no run_no exists here so insert rather than update
mlib_utils.reset_module;
mlib_utils.reset_clientinfo;
END drop_sc_schema;
--------------------------------------------------------------------------------
END stats_collection;
/
sho err
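A quick smoke test before scheduling anything. The schema name here, UWCLASS, is a placeholder; substitute one from your own environment.

-- register a schema against instance 1 and review what was seeded
exec stats_collection.add_sc_schema('UWCLASS', 1)

SELECT table_name, est_pct, method_opt, granularity
FROM sc_segment_metadata
WHERE schema_name = 'UWCLASS';

-- run a collection and inspect the outcome
exec stats_collection.collect_table_stats(1)

SELECT schema_name, seg_name, outcome, failure_desc
FROM sc_metrics_log
ORDER BY end_run DESC;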
The next step is to create a trigger to enforce data integrity rules that cannot be enforced with a declarative constraint
CREATE OR REPLACE TRIGGER biu_sc_segment_metadata
BEFORE INSERT OR UPDATE
ON sc_segment_metadata
FOR EACH ROW
DECLARE
i NATURAL;
vSizeNum NUMBER;
earray stats_collection.event_array;
-- AbortExecution EXCEPTION;
-- InstanceNotRunning EXCEPTION;
InvalidObj EXCEPTION;
InvalidMethodOpt EXCEPTION;
InvalidSize EXCEPTION;
InvalidUpdate EXCEPTION;
BEGIN
IF INSERTING THEN
-- enforce schema_name + table_name combination integrity
:NEW.table_name := UPPER(:NEW.table_name);
SELECT /* BIU_SC_SEGMENT_METADATA */ COUNT(*)
INTO i
FROM dba_tables
WHERE owner = :NEW.schema_name
AND table_name = :NEW.table_name;
IF i <> 1 THEN
RAISE InvalidObj;
END IF;
ELSE
IF :NEW.schema_name <> :OLD.schema_name THEN
RAISE InvalidUpdate;
END IF;
IF :NEW.table_name <> :OLD.table_name THEN
RAISE InvalidUpdate;
END IF;
END IF;
-- enforce some method_opt integrity
:NEW.method_opt := UPPER(:NEW.method_opt);
IF :NEW.method_opt IN ('FOR ALL COLUMNS SIZE AUTO',
'FOR ALL COLUMNS SIZE REPEAT',
'FOR ALL COLUMNS SIZE SKEWONLY',
'FOR ALL COLUMNS SKEWONLY SIZE AUTO',
'FOR ALL INDEXED COLUMNS',
'FOR ALL INDEXED COLUMNS SIZE AUTO') THEN
NULL;
ELSE
-- perform size check
IF SUBSTR(:NEW.method_opt,1,21) = 'FOR ALL COLUMNS SIZE ' THEN
NULL;
ELSIF SUBSTR(:NEW.method_opt,1,28) = 'FOR ALL HIDDEN COLUMNS SIZE ' THEN
NULL;
ELSIF SUBSTR(:NEW.method_opt,1,29) = 'FOR ALL INDEXED COLUMNS SIZE ' THEN
NULL;
ELSE
RAISE InvalidMethodOpt;
END IF;
-- eval size parameter
vSizeNum := TO_NUMBER(TRIM(TRANSLATE(SUBSTR(:NEW.method_opt,INSTR(:NEW.method_opt,'SIZE', 1, 1)+5), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ',' ')));
IF vSizeNum NOT BETWEEN 1 AND 254 THEN -- 254 is the maximum number of histogram buckets
RAISE InvalidSize;
END IF;
END IF;
EXCEPTION
WHEN InvalidObj THEN
earray.sql_errno := SQLCODE;
earray.event_text := SQLERRM;
earray.log_comment := 'InvalidObj Exception - ' || :NEW.schema_name || ':' || :NEW.table_name;
stats_collection.insert_log_event(earray);
-- re-raise: the logging is autonomous so the log row survives the rollback,
-- and without the raise the invalid row would be allowed through
RAISE_APPLICATION_ERROR(-20002, 'No such table: ' || :NEW.schema_name || '.' || :NEW.table_name);
WHEN InvalidMethodOpt THEN
earray.sql_errno := SQLCODE;
earray.event_text := SQLERRM;
earray.log_comment := 'InvalidMethodOpt Exception - ' || :NEW.schema_name || ':' || :NEW.table_name;
stats_collection.insert_log_event(earray);
RAISE_APPLICATION_ERROR(-20003, 'Invalid METHOD_OPT: ' || :NEW.method_opt);
WHEN InvalidSize THEN
earray.sql_errno := SQLCODE;
earray.event_text := SQLERRM;
earray.log_comment := 'InvalidSize Exception - ' || :NEW.schema_name || ':' || :NEW.table_name;
stats_collection.insert_log_event(earray);
RAISE_APPLICATION_ERROR(-20004, 'Histogram SIZE must be between 1 and 254');
WHEN InvalidUpdate THEN
earray.sql_errno := 0;
earray.event_text := 'Schema and Table Names Can Not Be Altered';
earray.log_comment := 'InvalidUpdate Exception - ' || :NEW.schema_name || ':' || :NEW.table_name;
stats_collection.insert_log_event(earray);
RAISE_APPLICATION_ERROR(-20005, 'Schema and Table Names Can Not Be Altered');
WHEN OTHERS THEN
earray.sql_errno := SQLCODE;
earray.event_text := SQLERRM;
earray.log_comment := 'WHEN OTHERS Exception - ' || :NEW.schema_name || ':' || :NEW.table_name;
stats_collection.insert_log_event(earray);
RAISE;
END biu_sc_segment_metadata;
/
sho err
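A quick negative test of the trigger, assuming the schema was registered with add_sc_schema and the table exists (both names below are placeholders). Because the logging is autonomous, the sc_event_log row survives even though the statement fails.

-- expected: ORA-20004 plus a corresponding row in sc_event_log
INSERT INTO sc_segment_metadata
(schema_name, inst_id, table_name, granularity, method_opt)
VALUES
('UWCLASS', 1, 'SERVERS', 'GLOBAL', 'FOR ALL COLUMNS SIZE 999');

SELECT log_comment
FROM sc_event_log
ORDER BY beg_date DESC;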
And finally the job classes and scheduler jobs to run the stats collection procedure.
If you have a stand-alone instance you need only create job SC_NODE1, though it would be better to create the job class as well to
become more familiar with "best practice."
... CREATE JOB CLASS(es) (a sketch follows)
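The job class DDL is elided above. A minimal sketch of what it might look like, assuming one job class per instance matching the JC_INST names used below; the service names are placeholders that must map to services defined in your own environment:

BEGIN
FOR i IN 1 .. 4 LOOP
sys.dbms_scheduler.create_job_class(
job_class_name => 'JC_INST' || TO_CHAR(i),
service => 'SC_SVC' || TO_CHAR(i), -- placeholder service names
logging_level => dbms_scheduler.logging_runs,
comments => 'Stats collection job class for instance ' || TO_CHAR(i));
END LOOP;
END;
/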
BEGIN
sys.dbms_scheduler.create_job(
job_name => 'SC_NODE1',
start_date => TRUNC(SYSDATE),
repeat_interval => 'FREQ=DAILY; BYHOUR=3;BYMINUTE=0;BYSECOND=0',
end_date => NULL,
job_class => 'JC_INST1',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN stats_collection.collect_table_stats(1); END; ',
comments => 'CBO statistics collection for Node 1');
sys.dbms_scheduler.create_job(
job_name => 'SC_NODE2',
start_date => TRUNC(SYSDATE),
repeat_interval => 'FREQ=DAILY; BYHOUR=3;BYMINUTE=1;BYSECOND=0',
end_date => NULL,
job_class => 'JC_INST2',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN stats_collection.collect_table_stats(2); END; ',
comments => 'CBO statistics collection for Node 2');
sys.dbms_scheduler.create_job(
job_name => 'SC_NODE3',
start_date => TRUNC(SYSDATE),
repeat_interval => 'FREQ=DAILY; BYHOUR=3;BYMINUTE=2;BYSECOND=0',
end_date => NULL,
job_class => 'JC_INST3',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN stats_collection.collect_table_stats(3); END; ',
comments => 'CBO statistics collection for Node 3');
sys.dbms_scheduler.create_job(
job_name => 'SC_NODE4',
start_date => TRUNC(SYSDATE),
repeat_interval => 'FREQ=DAILY; BYHOUR=3;BYMINUTE=3;BYSECOND=0',
end_date => NULL,
job_class => 'JC_INST4',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN stats_collection.collect_table_stats(4); END; ',
comments => 'CBO statistics collection for Node 4');
sys.dbms_scheduler.set_attribute('SC_NODE1', 'job_priority', '1');
sys.dbms_scheduler.set_attribute('SC_NODE2', 'job_priority', '1');
sys.dbms_scheduler.set_attribute('SC_NODE3', 'job_priority', '1');
sys.dbms_scheduler.set_attribute('SC_NODE4', 'job_priority', '1');
sys.dbms_scheduler.enable('SC_NODE1');
sys.dbms_scheduler.enable('SC_NODE2');
sys.dbms_scheduler.enable('SC_NODE3');
sys.dbms_scheduler.enable('SC_NODE4');
sys.dbms_scheduler.run_job('SC_NODE1', FALSE);
sys.dbms_scheduler.run_job('SC_NODE2', FALSE);
sys.dbms_scheduler.run_job('SC_NODE3', FALSE);
sys.dbms_scheduler.run_job('SC_NODE4', FALSE);
END;
/
show errors;
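To confirm the jobs were created and are scheduled:

SELECT job_name, state, next_run_date
FROM dba_scheduler_jobs
WHERE job_name LIKE 'SC_NODE%';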
When collecting stats it is important to understand how the EST_PCT value affects the results. The following demo shows the effect on two small tables using different sample percentages. Note how the estimated row counts vary from the actual counts.
SQL> select count(*) from postal_code;
COUNT(*)
----------
42456
SQL> select count(*) from airplanes;
COUNT(*)
----------
250000
-- 1%
SQL> exec stats_collection.COLLECT_TABLE_STATS(1);
PL/SQL procedure successfully completed.
SQL> select table_name, num_rows, last_analyzed from user_tables;
-- 1% sample (compare with the actual counts above)
TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------------------ ---------- --------------------
POSTAL_CODE 44086 19-OCT-2011 19:35:29
AIRPLANES 283900 19-OCT-2011 19:35:29
-- 10%
TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------------------ ---------- --------------------
POSTAL_CODE 42456 20-OCT-2011 07:42:03
AIRPLANES 242120 20-OCT-2011 07:42:02
-- 20%
TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------------------ ---------- --------------------
POSTAL_CODE 47847 21-OCT-2011 16:28:48
AIRPLANES 270655 21-OCT-2011 16:28:48
-- 30%
TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------------------ ---------- --------------------
POSTAL_CODE 39325 22-OCT-2011 20:36:20
AIRPLANES 249097 22-OCT-2011 20:36:19
-- 40%
TABLE_NAME NUM_ROWS LAST_ANALYZED
------------------------------ ---------- --------------------
POSTAL_CODE 35908 23-OCT-2011 19:21:54
AIRPLANES 237928 23-OCT-2011 19:21:53
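Within this framework the sample percentage is just another piece of metadata, so tuning it is a one-row update rather than a code change. For example, to raise the sample size for a volatile table (the schema name is a placeholder):

UPDATE sc_segment_metadata
SET est_pct = 30
WHERE schema_name = 'UWCLASS'
AND table_name = 'AIRPLANES'
AND inst_id = 1;

COMMIT;

The next scheduled run picks up the new value automatically.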