Tom Kyte
Return values from a PL/SQL table into a ref cursor?
How do I return the values from a PL/SQL table
(indexed by BINARY_INTEGER) into a ref cursor?
The contents of the PL/SQL table are NOT returnable by a single
SQL statement. As it is a PL/SQL table, I can't do a standard
select (this doesn't work: open TunnelCrs for
'select gw_id1, gw_id2 from a').
Currently, to create the ref cursor I am doing the following:
<code>
open TunnelCrs for
'select '||to_char(a(1).gw_id1)||','||
to_char(a(1).gw_id2)||' from dual';
</code>
If there are multiple rows, I use a UNION ALL.
The following are my type and PL/SQL table definitions:
<code>
TYPE gw_ttn is record (
  gw_id1 INTEGER,
  gw_id2 INTEGER
);
TYPE gw_tn is table of gw_ttn index by binary_integer;
TYPE TunnelCursor IS REF CURSOR;
a gw_tn;
</code>
Is there a more elegant solution?
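One idea I have seen but not tried: declare the types at the SQL level (object and nested table types) so the collection can be queried with TABLE(). A minimal sketch, assuming at least 9i for SYS_REFCURSOR:
<code>
CREATE TYPE gw_obj AS OBJECT (gw_id1 INTEGER, gw_id2 INTEGER);
/
CREATE TYPE gw_tab AS TABLE OF gw_obj;
/

DECLARE
  a         gw_tab := gw_tab();
  TunnelCrs SYS_REFCURSOR;
BEGIN
  a.EXTEND;
  a(1) := gw_obj(10, 20);
  -- TABLE() exposes the collection to SQL, so no dynamic UNION ALL is needed
  OPEN TunnelCrs FOR
    SELECT gw_id1, gw_id2
    FROM   TABLE(CAST(a AS gw_tab));
END;
/
</code>
Would that be the recommended route?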
JSON_TABLE, NESTED, and cartesian joins
Using the example data from https://oracle-base.com/articles/19c/sql-nested-clause-instead-of-json_table-19c:
If I want the Address *and* the Contact Details, the intuitive thing would be to add another NESTED clause, like this
<code>
SELECT jt.*
FROM json_documents
NESTED data
COLUMNS (FirstName, LastName, Job, Active,
NESTED ContactDetails COLUMNS (Email, Phone),
NESTED Address columns (Street, City, Country, Postcode)
)
jt;
</code>
However, what I end up with is
"FIRSTNAME" "LASTNAME" "JOB" "ACTIVE" "EMAIL" "PHONE" "STREET" "CITY" "COUNTRY" "POSTCODE"
"John" "Doe" "Clerk" "true" "" "" "99 My Street" "My City" "UK" "A12 34B"
"John" "Doe" "Clerk" "true" "john.doe@example.com" "44 123 123456" "" "" "" ""
"Jayne" "Doe" "Manager" "false" "" "" "100 My Street" "My City" "UK" "A12 34B"
"Jayne" "Doe" "Manager" "false" "jayne.doe@example.com" "" "" "" "" ""
I am getting one row populated from the first NESTED clause and another row populated from the second. Is it because NESTED is shorthand for JSON_TABLE, so it is creating two JSON_TABLEs, and hence I am getting a cartesian join? And is that what I *should* expect?
For the query to work correctly when I have more than one NESTED object, I have to forgo the NESTED clause for the 2nd level nesting and explicitly set out each column and its PATH, like this
<code>
SELECT j.id, jt.first_name, jt.last_name, jt.job, jt.email, jt.phone, jt.active
FROM json_documents j NESTED data
COLUMNS (first_name VARCHAR2(50 CHAR) PATH FirstName,
last_name VARCHAR2(50 CHAR) PATH LastName,
job VARCHAR2(10 CHAR) PATH Job,
email VARCHAR2(100 CHAR) PATH ContactDetails.Email,
phone VARCHAR2(50 CHAR) PATH ContactDetails.Phone,
street VARCHAR2(100 CHAR) PATH Address.Street,
city VARCHAR2(50 CHAR) PATH Address.City,
country VARCHAR2(100 CHAR) PATH Address.Country,
postcode VARCHAR2(50 CHAR) PATH Address.Postcode,
active VARCHAR2(5 CHAR) PATH Active) jt;
</code>
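An alternative I considered keeps both NESTED clauses and collapses the two partially-populated rows afterwards. A sketch, assuming ContactDetails and Address each hold a single object and that the grouping columns identify a person:
<code>
SELECT jt.FirstName, jt.LastName, jt.Job, jt.Active,
       MAX(jt.Email)    AS Email,
       MAX(jt.Phone)    AS Phone,
       MAX(jt.Street)   AS Street,
       MAX(jt.City)     AS City,
       MAX(jt.Country)  AS Country,
       MAX(jt.Postcode) AS Postcode
FROM json_documents
     NESTED data
     COLUMNS (FirstName, LastName, Job, Active,
              NESTED ContactDetails COLUMNS (Email, Phone),
              NESTED Address COLUMNS (Street, City, Country, Postcode))
     jt
GROUP BY jt.FirstName, jt.LastName, jt.Job, jt.Active;
</code>
But that feels like working around the behaviour rather than understanding it.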
What it comes down to is why does this
<code>
SELECT jt.*
FROM json_documents
NESTED data
COLUMNS (FirstName, LastName, Job, Active,
NESTED ContactDetails COLUMNS (Email, Phone)
)
jt;
</code>
or this
<code>
SELECT jt.*
FROM json_documents
NESTED data
COLUMNS (FirstName, LastName, Job, Active,
NESTED Address columns (Street, City, Country, Postcode)
)
jt;
</code>
work correctly and give me 2 rows
but this
<code>
SELECT jt.*
FROM json_documents
NESTED data
COLUMNS (FirstName, Las...
Cost increases using json_value: jsontable evaluation
Hi everyone!
I'm working with 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
I'm trying to obtain json elements as columns in a view.
When I looked at the plan for the following SQL, I realized that the cost increases when json_value is used more than once on the same column.
<code>
/*Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production*/
/*TESTING JSON_VALUE SELECTING FROM DUAL */
--COST OK
WITH Example AS(SELECT '{"element_1":"Radio controlled plane","element_2":"Ship"}' Json_1, '{"element_3":"Car","element_4":"Train"}' Json_2 FROM DUAL)
SELECT json_value(json_1,'$.element_1') element_1_value,json_value(json_1,'$.element_2') element_2_value,json_value(json_2,'$.element_3') element_3_value
FROM Example A;
/*TESTING JSON_VALUE FROM EXAMPLE TABLE*/
drop table JSON_TEST;
CREATE TABLE JSON_TEST
(
test_id number(4),
JSON_1 VARCHAR2(100 BYTE),
JSON_2 VARCHAR2(100 BYTE)
);
/*INSERT TEST DATA*/
Insert into JSON_TEST
(test_id,JSON_1, JSON_2)
Values
(1,'{"element_1":"Plane","element_2":"Ship"}', '{"element_3":"Car","element_4":"Train"}');
Insert into JSON_TEST
(test_id,JSON_1, JSON_2)
Values
(2,'{"element_1":"Duck","element_2":"Dog"}', '{"element_3":"Cat","element_4":"Mouse"}');
Insert into JSON_TEST
(test_id,JSON_1, JSON_2)
Values
(3,'{"element_1":"Glass","element_2":"Knife"}', '{"element_3":"Bottle","element_4":"Cup"}');
COMMIT;
/*TESTING COST WITH ONLY ONE ELEMENT FROM EACH JSON COLUMN (JSON_1, JSON_2)*/
--COST OK
SELECT json_value(json_1,'$.element_1') element_1_value,json_value(json_2,'$.element_3') element_3_value
FROM JSON_TEST ;
/*TESTING COST WITH TWO ELEMENTS FROM ONE JSON COLUMN (JSON_1)*/
--COST INCREASES A LOT, JSONTABLE EVALUATION IS NOW PRESENT IN THE PLAN
SELECT json_value(json_1,'$.element_1') element_1_value,json_value(json_1,'$.element_2') element_2_value
FROM JSON_TEST ;
</code>
Is this a bug, or am I doing something wrong?
How can I or our DBAs fix this problem?
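A workaround I am evaluating is to parse the document once with an explicit JSON_TABLE instead of two JSON_VALUE calls. A sketch (the VARCHAR2 sizes are guesses):
<code>
/* parse JSON_1 once and project both elements from the same scan */
SELECT t.test_id, jt.element_1_value, jt.element_2_value
FROM   JSON_TEST t,
       JSON_TABLE(t.json_1, '$'
         COLUMNS (element_1_value VARCHAR2(50) PATH '$.element_1',
                  element_2_value VARCHAR2(50) PATH '$.element_2')) jt;
</code>
Is that the intended approach, or should multiple JSON_VALUE calls be fine?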
Thanks!!
If exists then insert query performance comparison
Hi Tom,
Below are two scripts for your reference; each inserts a row into a table only if a row with a given column value is not already present.
1. Looking at the scripts below (without any alteration), which method do you think is better with respect to execution time and performance, and why?
2. If we need to check the execution plan/time of a PL/SQL block (as in the first method), how should we do it?
-- table creation
<code>
create table test1(id number,name varchar2(500),city varchar2(500));
--method1 using plsql block
declare
vtemp number(10,0):=0;
begin
begin
select 1 into vtemp from dual
where not EXISTS
(
select * from test1
where city='New York' --based on city
);
exception
when no_data_found then null;
end;
if vtemp=1
then
insert into test1 values (1, 'Alexa', 'New York');
commit;
end if;
end;
/
</code>
OR
-- method2 using insert select
<code>
insert into test1(id,name,city)
select 1,'Alexa','New York' from dual
where
'New York' not in (select city from test1 where city = 'New York');
</code>
(Based on the city, 'New York' will be inserted into the table.)
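For completeness, a NOT EXISTS variant of method 2 that I am also weighing (a sketch):
<code>
-- method3: same single-statement idea with NOT EXISTS
insert into test1 (id, name, city)
select 1, 'Alexa', 'New York'
from dual
where not exists (select null from test1 where city = 'New York');
</code>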
Thanks
Golden Gate for OLTP
Hello Team,
I wanted to know if it is possible to run the same OLTP application on two different sites, with the databases on both sites kept in sync using GoldenGate. Generally, do we need additional Data Guard environment(s) to serve as the DR solution, or does the above configuration suffice as the DR solution?
What points need to be considered/taken care of for such configurations?
Regards,
Using hierarchical query and its impact
Hi Tom,
I have a large table with millions of records, and the hierarchy levels are deep, varying from case to case.
To achieve the results, is a hierarchical query a good choice, or does some other approach suit the purpose better?
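For context, this is the shape of query I mean (table and column names are made up):
<code>
-- classic CONNECT BY walk
SELECT emp_id, mgr_id, LEVEL
FROM   emp_hier
START WITH mgr_id IS NULL
CONNECT BY PRIOR emp_id = mgr_id;

-- the recursive WITH equivalent, the other approach I am considering
WITH h (emp_id, mgr_id, lvl) AS (
  SELECT emp_id, mgr_id, 1
  FROM   emp_hier
  WHERE  mgr_id IS NULL
  UNION ALL
  SELECT e.emp_id, e.mgr_id, h.lvl + 1
  FROM   emp_hier e, h
  WHERE  e.mgr_id = h.emp_id
)
SELECT emp_id, mgr_id, lvl FROM h;
</code>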
Regards,
Jithesh
Comparison between ALTER TABLE MOVE ONLINE and DBMS_REDEFINITION
I need to move a huge table with BLOBs (around 25 TB) online to a new tablespace with a bigger block size.
Which method is recommended? Is DBMS_REDEFINITION still the method recommended by Oracle, or is it now ALTER TABLE MOVE ONLINE?
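For reference, the single-statement form I have in mind (a sketch; the object, column, and tablespace names are made up, and it assumes 12.2 or later for online move):
<code>
ALTER TABLE app_owner.big_blob_tab
  MOVE ONLINE TABLESPACE new_ts
  LOB (blob_col) STORE AS SECUREFILE (TABLESPACE new_ts);
</code>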
ORA-00054: RESOURCE BUSY AND ACQUIRE WITH NOWAIT SPECIFIED
Hi,
I am getting the following error during a migration:
ORA-00054: RESOURCE BUSY AND ACQUIRE WITH NOWAIT SPECIFIED
The migration code is given below. I do not understand why I am getting this error when I have acquired an exclusive lock on the table. <b>Please help ASAP; this has to go to production tomorrow. Thanks in advance.</b>
Code:
<code>
DECLARE
l_script clob := q'[BEGIN
DELETE FROM fqng_report.rpt_refresh_history_data_queue q
WHERE q.source_table_name = 'FQNG_INVENTORY';
EXECUTE IMMEDIATE 'ALTER TABLE FQNG_REPORT.RPT_INVENTORY NOLOGGING';
EXECUTE IMMEDIATE 'TRUNCATE TABLE FQNG_REPORT.rpt_inventory';
EXECUTE IMMEDIATE 'LOCK TABLE FQNG_REPORT.rpt_inventory IN EXCLUSIVE MODE';
INSERT INTO FQNG_REPORT.RPT_INVENTORY
SELECT *
FROM FQNG.fqng_inventory
;
COMMIT;
EXECUTE IMMEDIATE 'ALTER TABLE FQNG_REPORT.RPT_INVENTORY LOGGING';
END;]';
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'MASTER.REPORT_DATA_MIGRATION_INVENT',
job_type => 'PLSQL_BLOCK',
job_action => l_script,
number_of_arguments => 0,
start_date => SYSDATE,
repeat_interval => NULL,
end_date => NULL,
job_class => 'SYS.DEFAULT_JOB_CLASS',
enabled => TRUE,
auto_drop => TRUE,
comments => 'This is one time job for Reports Migration',
credential_name => NULL,
destination_name => NULL);
end;
/
</code>
Here the data is migrated from the transaction table to the table in the reporting schema.
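One mitigation I am considering while we hunt for the blocking session (a sketch; DDL_LOCK_TIMEOUT has been available since 11g) is to make the DDL in the job wait for locks instead of failing immediately:
<code>
-- inside the job's PL/SQL block, before the TRUNCATE/ALTER statements
EXECUTE IMMEDIATE 'ALTER SESSION SET ddl_lock_timeout = 600';  -- wait up to 10 minutes
</code>
Would that be appropriate here, or does it just hide the real problem?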
query performance tuning
Hi, some queries have been performing slowly for the past day. Earlier the same queries produced output in 22 minutes and now they take 55 minutes. This is basically the EOD process.
Query 1:
<code>SELECT DISTINCT DECODE(:B1 ,'C',IR.REASON_CODE,'D',IR.DESCRIPTION)
REASON FROM DEA_REQN_REASONS RR INNER JOIN DEA_INVESTMENT_REASON IR
ON UPPER(RR.REASON_CODE) = UPPER(IR.REASON_CODE) WHERE RR.REQ_ID IN
(SELECT DEAL_ID FROM DEA_SMD_DEALS_ALLOC WHERE APP_REFER = :B2 UNION
ALL SELECT DEAL_ID FROM DEA_SMD_DEALS_ALLOC_HIST WHERE APP_REFER =
:B2 UNION ALL SELECT DEAL_ID FROM DEA_PMDEAL_ALLOC WHERE APP_REFER =
:B2 UNION ALL SELECT DEAL_ID FROM DEA_PMDEAL_ALLOC_HIST WHERE
APP_REFER = :B2 ) AND IR.RECTYPE='L'</code>
Query 2:
<code>select * from v_chkposint where exists (select scheme from scheme
where amc_code = v_chkposint.amc_code and scheme = v_chkposint.scheme
and project_y_n = 'Y' and rectype = 'L')</code>
What is the scope for tuning? Your expert advice would help improve the performance.
<code>BEGIN_INTERVAL_TIME PLAN_HASH_VALUE OPTIMIZER_ SQL_PROFIL DISK_READS_TOTAL BUFFER_GETS_TOTAL CPU_TIME_TOTAL ELAPSED_TIME_TOTAL IOWAIT_TOTAL
------------------------------ --------------- ---------- ---------- ---------------- ----------------- ----------------- ------------------ -----------------
20-OCT-23 04.30.46.535 PM 1596829921 ALL_ROWS 45897 11332241 18512737 68903861 8012199
20-OCT-23 01.30.04.043 PM 1596829921 ALL_ROWS 680 7598713 12852262 47355121 428789
19-OCT-23 03.30.03.269 PM 1596829921 ALL_ROWS 27854 393591114 729369364 2402058351 781202
19-OCT-23 11.30.11.237 AM 1596829921 ALL_ROWS 27854 379549234 706629192 2327540410 781202
18-OCT-23 06.30.23.024 PM 1596829921 ALL_ROWS 13936 126131290 220169362 733216565 418827</code>
Please find below the execution plans for SQL_IDs bjhx2g3b6krmf and ff0sjh69smwx2.
+++++++++++++Execution Plan for SQL_ID [bjhx2g3b6krmf]
<code>SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('bjhx2g3b6krmf'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID bjhx2g3b6krmf
--------------------
SELECT DISTINCT DECODE(:B1 ,'C',IR.REASON_CODE,'D',IR.DESCRIPTION)
REASON FROM DEA_REQN_REASONS RR INNER JOIN DEA_INVESTMENT_REASON IR ON
UPPER(RR.REASON_CODE) = UPPER(IR.REASON_CODE) WHERE RR.REQ_ID IN
(SELECT DEAL_ID FROM DEA_SMD_DEALS_ALLOC WHERE APP_REFER = :B2 UNION
ALL SELECT DEAL_ID FROM DEA_SMD_DEALS_ALLOC_HIST WHERE APP_REFER = :B2
UNION ALL SELECT DEAL_ID FROM DEA_PMDEAL_ALLOC...
Writing DDL_EVENT Triggers
Do you have a full example of writing a DDL_EVENT trigger? I want to audit, at the schema level, any creates, alters, or drops of any objects, and write the actual calling syntax to a table.
Oracle8 has the new DDL_EVENT feature, and there is an example in the SQL Reference, but it just shows a pl/sql_block, not how to extract the calling DDL.
If there is not an easier way, I would end up querying v$sqltext, looking for text with some DDL call in it, and writing that text to a separate table.
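To be concrete, this is the kind of trigger I am after. A sketch using the ora_* event attribute functions; I am not certain all of them exist as far back as Oracle8:
<code>
CREATE TABLE ddl_audit (
  ddl_date    DATE,
  user_name   VARCHAR2(30),
  ddl_event   VARCHAR2(30),
  object_type VARCHAR2(30),
  object_name VARCHAR2(30),
  sql_text    VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER audit_schema_ddl
AFTER CREATE OR ALTER OR DROP ON SCHEMA
DECLARE
  l_pieces ora_name_list_t;
  l_count  PLS_INTEGER;
  l_text   VARCHAR2(4000);
BEGIN
  l_count := ora_sql_txt(l_pieces);  -- the triggering DDL, returned in 64-byte pieces
  FOR i IN 1 .. l_count LOOP
    l_text := SUBSTR(l_text || l_pieces(i), 1, 4000);
  END LOOP;
  INSERT INTO ddl_audit
  VALUES (SYSDATE, ora_login_user, ora_sysevent,
          ora_dict_obj_type, ora_dict_obj_name, l_text);
END;
/
</code>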
Any other ideas or examples would be appreciated.
Thanks
Unified Auditing on tables
We have been trying to enable auditing on few DDL activities in autonomous database.
As per the following code we created, enabled, and verified the audit configs. Everything looks normal to us; however, none of the table create/drop activities (listed at the end) are being tracked in <code>unified_audit_trail</code>.
We can only see audit-management-related activity, like <code>DBMS_AUDIT_MGMT BEGIN dbms_audit_mgmt.flush_unified_audit_trail; END;</code>, in <code>unified_audit_trail</code>.
I must be missing something; can you please help me navigate through it?
<code>
-- Connect to ADMIN user
-- DB Version
SELECT *
FROM v$version;
-- Verify audit status
SELECT parameter
, value
FROM v$option
WHERE parameter = 'Unified Auditing';
-- Enable immediate write mode
BEGIN
dbms_audit_mgmt.set_audit_trail_property(audit_trail_type => dbms_audit_mgmt.audit_trail_unified
, audit_trail_property => dbms_audit_mgmt.audit_trail_write_mode
,audit_trail_property_value => dbms_audit_mgmt.audit_trail_immediate_write);
END;
/
-- Validate audit configs
SELECT *
FROM dba_audit_mgmt_config_params;
-- Create an audit policy
CREATE AUDIT POLICY xx_ddl_test_policy
PRIVILEGES CREATE ANY TABLE
, ALTER ANY TABLE
, DROP ANY TABLE
WHEN 'SYS_CONTEXT (''USERENV'', ''SESSION_USER'') IN (''ADMIN'',''XXTST'')'
EVALUATE PER STATEMENT ;
-- Enable Audit Policy
AUDIT POLICY xx_ddl_test_policy;
-- Validate policy
SELECT *
FROM audit_unified_enabled_policies
WHERE policy_name IN ('XX_DDL_TEST_POLICY');
SELECT *
FROM audit_unified_policies
WHERE policy_name in (SELECT policy_name
FROM audit_unified_enabled_policies);
-- Flush audit trail
EXEC dbms_audit_mgmt.flush_unified_audit_trail;
-- Connect to XXTST user
-- Test Audit Policy
CREATE TABLE xx_test (vc_col VARCHAR2(25));
INSERT INTO xx_test VALUES ('Test 01');
COMMIT;
TRUNCATE TABLE xx_test;
DROP TABLE xx_test;
-- Verify audit trail
SELECT *
FROM unified_audit_trail
ORDER BY event_timestamp DESC;
</code>
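One thing I plan to test next: since XXTST creates and drops tables in its own schema, the CREATE ANY TABLE / DROP ANY TABLE privileges are never exercised, so perhaps the policy should audit the actions rather than the ANY privileges. A sketch:
<code>
CREATE AUDIT POLICY xx_ddl_test_policy2
  ACTIONS CREATE TABLE, ALTER TABLE, DROP TABLE
  WHEN 'SYS_CONTEXT (''USERENV'', ''SESSION_USER'') IN (''ADMIN'',''XXTST'')'
  EVALUATE PER STATEMENT;

AUDIT POLICY xx_ddl_test_policy2;
</code>
Does that explain what we are seeing?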
Statistics Gathering using older version feature
Hello Connor/Chris,
I am in the middle of troubleshooting a slow-running application maintenance job on a database that has been migrated/upgraded from on-premises 12.1.0.2 on Linux x64 to an Exadata Cloud@Customer setup. Many of the "Top SQLs" that have regressed are internal SQLs generated by the GATHER_TABLE_STATS call that is part of the application job. The internal SQLs are similar to the one below but for some reason run 5-6 times slower on the 19c database. I am running out of options, but I was wondering whether there is a way to replicate the 12.1.0.2 statistics-gathering behaviour in a 19c database.
Example internal SQL that has regressed:
<code>select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false')
no_substrb_pad */to_char(count("OWNER")),substrb(dump(min("OWNER"),16,0,64)
,1,240),substrb(dump(max("OWNER"),16,0,64),1,240),
to_char(count("OBJECT_NAME")),substrb(dump(min("OBJECT_NAME"),16,0,64),1,
240),substrb(dump(max("OBJECT_NAME"),16,0,64),1,240),
to_char(count("SUBOBJECT_NAME")),substrb(dump(min("SUBOBJECT_NAME"),16,0,64)
,1,240),substrb(dump(max("SUBOBJECT_NAME"),16,0,64),1,240),
to_char(count("OBJECT_ID")),substrb(dump(min("OBJECT_ID"),16,0,64),1,240),
substrb(dump(max("OBJECT_ID"),16,0,64),1,240),
to_char(count("DATA_OBJECT_ID")),substrb(dump(min("DATA_OBJECT_ID"),16,0,64)
,1,240),substrb(dump(max("DATA_OBJECT_ID"),16,0,64),1,240),
to_char(count("OBJECT_TYPE")),substrb(dump(min("OBJECT_TYPE"),16,0,64),1,
240),substrb(dump(max("OBJECT_TYPE"),16,0,64),1,240),
to_char(count("CREATED")),substrb(dump(min("CREATED"),16,0,64),1,240),
substrb(dump(max("CREATED"),16,0,64),1,240),to_char(count("LAST_DDL_TIME")),
substrb(dump(min("LAST_DDL_TIME"),16,0,64),1,240),
substrb(dump(max("LAST_DDL_TIME"),16,0,64),1,240),to_char(count("TIMESTAMP")
),substrb(dump(min("TIMESTAMP"),16,0,64),1,240),
substrb(dump(max("TIMESTAMP"),16,0,64),1,240),to_char(count("STATUS")),
substrb(dump(min("STATUS"),16,0,64),1,240),substrb(dump(max("STATUS"),16,0,
64),1,240),to_char(count("TEMPORARY")),substrb(dump(min("TEMPORARY"),16,0,
64),1,240),substrb(dump(max("TEMPORARY"),16,0,64),1,240),
to_char(count("GENERATED")),substrb(dump(min("GENERATED"),16,0,64),1,240),
substrb(dump(max("GENERATED"),16,0,64),1,240),to_char(count("SECONDARY")),
substrb(dump(min("SECONDARY"),16,0,64),1,240),substrb(dump(max("SECONDARY"),
16,0,64),1,240),to_char(count("NAMESPACE")),substrb(dump(min("NAMESPACE"),
16,0,64),1,240),substrb(dump(max("NAMESPACE"),16,0,64),1,240),
to_char(count("EDITION_NAME")),substrb(dump(min("EDITION_NAME"),16,0,64),1,
240),substrb(dump(max("EDITION_NAME"),16,0,64),1,240),
to_char(count("SHARING")),substrb(dump(min("SHARING"),16,0,64),1,240),
substrb(dump(m...
How to extract the tag value from an XML tag with a colon
How do I extract the tag value from XML where the tag name contains a colon (a namespace prefix)?
<code>
<link>https://www.rba.gov.au/statistics/frequency/exchange-rates.html#JPY</link>
<description>AUD/JPY representative rate as at 4.00 pm Eastern Australian time on 14 Nov 2023</description>
<dc:date>2023-11-14T17:52:00+11:00</dc:date>
<dc:language>en</dc:language>
<dcterms:audience>general</dcterms:audience>
<cb:statistics rdf:parseType="Resource">
<rdf:type rdf:resource="http://www.cbwiki.net/wiki/index.php/RSS-CB_1.2_RDF_Schema#Exchange-Rates"/>
<cb:country>AU</cb:country>
<cb:institutionAbbrev>RBA</cb:institutionAbbrev>
<cb:exchangeRate rdf:parseType="Resource">
<rdf:type rdf:resource="http://www.cbwiki.net/wiki/index.php/RSS-CB_1.2_RDF_Schema#Exchange-Rates"/>
<cb:observation rdf:parseType="Resource">
<rdf:type rdf:resource="http://www.cbwiki.net/wiki/index.php/RSS-CB_1.2_RDF_Schema#Exchange-Rates"/>
<cb:value>96.64</cb:value>
<cb:unit>AUD</cb:unit>
<cb:decimals>2</cb:decimals>
</cb:observation>
<cb:baseCurrency>AUD</cb:baseCurrency>
<cb:targetCurrency>JPY</cb:targetCurrency>
<cb:rateType>4.00 pm foreign exchange rates</cb:rateType>
<cb:observationPeriod rdf:parseType="Resource">
<rdf:type rdf:resource="http://www.cbwiki.net/wiki/index.php/RSS-CB_1.2_RDF_Schema#Exchange-Rates"/>
<cb:frequency>daily</cb:frequency>
<cb:period>2023-11-14</cb:period>
</cb:observationPeriod>
</cb:exchangeRate>
</cb:statistics>
</code>
I tried by inserting the XML as a CLOB:
<code>SELECT
XMLTYPE(t.xclob).EXTRACT('/rdf:RDF/item/cb:statistics/cb:exchangeRate/cb:baseCurrency/text()').getStringVal()
FROM
test_clob t;
Getting ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00601: Invalid token in: ' '/rdf:RDF/item/cb:statistics/cb:exchangeRate/cb:baseCurrency/text()'
31011. 00000 - "XML parsing failed"
*Cause: XML parser returned an error while trying to parse the document.
*Action: Check if the document to be parsed is valid.</code>
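I suspect the prefixes need a namespace map. This is what I plan to try with XMLTABLE; a sketch where the rdf URI is the standard one, but the cb URI is a placeholder that must match the xmlns:cb declaration in the actual feed:
<code>
SELECT x.base_currency, x.target_currency, x.rate
FROM   test_clob t,
       XMLTABLE(
         XMLNAMESPACES(
           'http://www.cbwiki.net/wiki/index.php/Specification_1.2/' AS "cb",
           'http://www.w3.org/1999/02/22-rdf-syntax-ns#'             AS "rdf"),
         '//cb:exchangeRate'
         PASSING XMLTYPE(t.xclob)
         COLUMNS
           base_currency   VARCHAR2(3) PATH 'cb:baseCurrency',
           target_currency VARCHAR2(3) PATH 'cb:targetCurrency',
           rate            NUMBER      PATH 'cb:observation/cb:value') x;
</code>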
Please suggest.
Oracle SE2 Licence NUP
Hello,
I am very new to Oracle licensing issues and I am facing a situation with an Oracle SE2 NUP licence.
I am very confused about how to count the number of NUPs I will have to purchase.
Is a NUP an explicit Oracle user I have created? What about system Oracle users?
I have to deal with a web/backend application using a single Oracle user to connect to an Oracle instance,
but this application is in fact creating parallel processes, each connecting to Oracle with this user.
How do I count the needed NUPs in this case? Is NUP licensing the right way to go?
Thanks in advance for your help.
Regards.
Laure
503 Service Unavailable - ORA-28001: the password has expired
Hi sir,
I got this message when I tried to run my app:
"503 Service Unavailable
2023-11-15T09:24:51.336389300Z | 8-GbuZcF3_cnIHgb1ul5Kg |
ORDS was unable to make a connection to the database. This can occur if the database is unavailable, the maximum number of sessions has been reached or the pool is not correctly configured. The connection pool named: |default|lo| had the following error(s): ORA-28001: the password has expired"
How can I solve it?
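Is it just a matter of something like the following, plus updating the password stored in the ORDS connection pool configuration to match? A sketch; ORDS_PUBLIC_USER is a guess based on the default pool, so please check the pool config for the actual account:
<code>
-- as a DBA: reset the expired account that the ORDS pool connects as
ALTER USER ords_public_user IDENTIFIED BY "NewStrongPassword1" ACCOUNT UNLOCK;
</code>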
thanks
Question about Leading Hint and outer Joined tables
Hi Tom,
I have 5 tables in a query, Tab1, Tab2, Tab3, Tab4 and Tab5 and I have the following query below.
<code>
Select *
From Tab1, Tab2, Tab3, Tab4, Tab5
Where Tab1.x = Tab2.y(+)
and Tab1.x = Tab3.y
and Tab3.x = Tab4.y(+)
and Tab3.x = Tab5.y;
</code>
Question 1. Can a table that is outer joined (in this case Tab2 or Tab4) be a driving table?
Question 2. In the join order, what happens to tables that are outer joined (in this case Tab2 and Tab4)? Do we list only the inner-joined tables in the LEADING hint?
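For clarity, this is the placement I am asking about (a sketch; aliases added so the hint can reference them):
<code>
SELECT /*+ LEADING(t1 t3 t5) */ *
FROM   Tab1 t1, Tab2 t2, Tab3 t3, Tab4 t4, Tab5 t5
WHERE  t1.x = t2.y(+)
AND    t1.x = t3.y
AND    t3.x = t4.y(+)
AND    t3.x = t5.y;
</code>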
Regards,
Ravi
Pros and cons of DBLinks for application integration
Hello Tom,
A colleague of mine is a big fan of service-oriented architectures and middleware. His view is that dblinks are 99.9% of the time a bad idea and that you should go via some sort of message layer to pass information between applications.
In many circumstances I can see the value of this, particularly in heterogeneous environments, however we are an all Oracle shop and I don't see the value in writing a windows service to pull data from one database and insert it into another.
What are your views on this?
I realise this is not really a plsql question but I would be interested to hear your thoughts on this.
Regards
Alastair
Why does ROWID break when used with CONNECT BY, but only in a subquery?
Hi Tom!
ROWID works in a subselect:
<code>SELECT * FROM(SELECT rowid, d.* FROM DUAL d)</code>
and it also works with connect by:
<code>SELECT rowid, LEVEL FROM DUAL CONNECT BY LEVEL < 2</code>
But when used with both:
<code>SELECT * FROM(SELECT rowid, LEVEL FROM DUAL CONNECT BY LEVEL < 2)</code>
then it throws:
<code>ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT, GROUP BY, etc.</code>
I stumbled on this because I used CONNECT BY LEVEL as a substitute for generate_series in a view, which was used in an APEX calendar that had "Select ROWID" ticked by default.
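Incidentally, would aliasing the pseudocolumn sidestep the error, since the outer query would then see an ordinary column rather than asking the view for its ROWID? A sketch:
<code>SELECT * FROM(SELECT rowid AS rid, LEVEL FROM DUAL CONNECT BY LEVEL < 2)</code>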
sys_context expressions. Can they be indexed?
Dear Tom,
I am trying to tune a query (version 12.1.0.1, 4-node RAC).
The database appears to use DBMS_RLS policies.
I see one context defined as:
<code>CREATE OR REPLACE CONTEXT CRAMERSESSION
USING CRAMER.PKGSESSIONCONTEXT
/</code>
CRAMER.PKGSESSIONCONTEXT code is obfuscated.
In the explain plan, I see this:
<code>74 TABLE ACCESS FULL TABLE CRAMER.SERVICEOBJECT Object Instance: 90 Filter Predicates: "SERVICEOBJECT2DIMOBJECT"=3 AND "RPPLANID"=TO_NUMBER(NVL(SYS_CONTEXT('CRAMERSESSION','PLANCONTEXT'),'0')) Cost: 124,425 Bytes: 2,898 Cardinality: 138 Time: 5</code>
"RPPLANID"=TO_NUMBER(NVL(SYS_CONTEXT('CRAMERSESSION','PLANCONTEXT'),'0')) -- this repeats throughout the explain plan.
This evaluates to 0:
<code>select TO_NUMBER(NVL(SYS_CONTEXT('CRAMERSESSION','PLANCONTEXT'),'0')) from dual;</code>
Is there any way to index this expression?
"RPPLANID"=TO_NUMBER(NVL(SYS_CONTEXT('CRAMERSESSION','PLANCONTEXT'),'0'))
Thank you.
ROW_NUMBER over DESC date column gives wrong results when partitioned index is used
Hi,
Please refer to the linked LiveSQL for the table, index, and data definition.
I execute a query that uses ROW_NUMBER() OVER (ORDER BY SOME_DATE DESC). When a partitioned index on SOME_DATE DESC is used, the result contains wrong row numbers. The order of rows is correct only inside each partition, but partitions are in the wrong order.
<code>
SELECT
ID,
TO_CHAR(SOME_DATE, 'YYYY-MM-DD HH24:MI:SS'),
ROW_NUMBER() OVER (ORDER BY SOME_DATE DESC) AS row_num
FROM
TEST_TABLE
WHERE
SOME_GROUP = 1
AND SOME_DATE < to_date('2023-11-08 15:00:00', 'YYYY-MM-DD HH24:MI:SS');
</code>
Gives me wrong ROW_NUM:
<code>
ID TO_CHAR(SOME_DATE,' ROW_NUM
---------- ------------------- ----------
3 2023-11-06 03:00:00 1
2 2023-11-06 02:00:00 2
1 2023-11-06 01:00:00 3
6 2023-11-07 03:00:00 4
5 2023-11-07 02:00:00 5
4 2023-11-07 01:00:00 6
10 2023-11-08 04:00:00 7
9 2023-11-08 03:00:00 8
8 2023-11-08 02:00:00 9
7 2023-11-08 01:00:00 10
</code>
If I force a full table scan instead of an index scan with the /*+ FULL(TEST_TABLE) */ hint, the result is correct:
<code>
SELECT /*+ FULL(TEST_TABLE) */
ID,
TO_CHAR(SOME_DATE, 'YYYY-MM-DD HH24:MI:SS'),
ROW_NUMBER() OVER (ORDER BY SOME_DATE DESC) AS row_num
FROM
TEST_TABLE
WHERE
SOME_GROUP = 1
AND SOME_DATE < to_date('2023-11-08 15:00:00', 'YYYY-MM-DD HH24:MI:SS');
</code>
Gives me:
<code>
ID TO_CHAR(SOME_DATE,' ROW_NUM
---------- ------------------- ----------
10 2023-11-08 04:00:00 1
9 2023-11-08 03:00:00 2
8 2023-11-08 02:00:00 3
7 2023-11-08 01:00:00 4
6 2023-11-07 03:00:00 5
5 2023-11-07 02:00:00 6
4 2023-11-07 01:00:00 7
3 2023-11-06 03:00:00 8
2 2023-11-06 02:00:00 9
1 2023-11-06 01:00:00 10
</code>