Feed aggregator

Flashback Archive

Jonathan Lewis - Tue, 2019-12-24 14:33

A classic example of Oracle’s “mix and match” problem showed up on the Oracle Developer Forum a few days ago. Sometimes you see two features that are going to be really helpful in your application – and when you combine them something breaks. In this case it was the combination of Virtual Private Database (VPD/FGAC/RLS) and Flashback Data Archive (FDA/FBA) that resulted in the security predicate not being applied the way you would expect, hence allowing users to see data they were not supposed to see.

The OP supplied us with a model (based in part on Tim Hall’s FDA article) to demonstrate the issue on 11.2.0.4, and I’ve hacked it about a bit to explain it here, and to test it on 12.2.0.1 and 19.3.0.0 where the same failure occurs.

I’m going to start with just the VPD part of the setup before adding in the FDA. Most of the code has been written to run as the SYS user and it creates a new tablespace and a couple of users so you may want to do some editing before you try any tests. There’s also a short script at the end of the blog to remove the flashback data archive, tablespace, and users – again, something to be run by SYS.

You’ll note that this script assumes you already have a tablespace called test_8k_assm, and a temporary tablespace called temp. The latter may well be a valid assumption, the former probably isn’t.
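If you don't have a suitable tablespace, here's a minimal sketch of one way to create it first (assuming OMF is enabled so no datafile name is needed, and that your default block size is 8KB – adjust to suit your own instance):

create tablespace test_8k_assm
        datafile size 100m autoextend on next 10m
        extent management local
        segment space management auto
;
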

rem
rem     Script:         vpd_fda_bug.sql
rem     Author:         Duncs (ODC)
rem     Dated:          Dec 2019
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem
rem     Notes
rem     Has to be run as SYS
rem
rem     See also
rem     https://community.oracle.com/thread/4307453
rem


create user vpd_test_data_owner identified by Password_1234 
        default tablespace test_8k_assm
        temporary tablespace temp 
        quota unlimited on test_8k_assm
;
 
grant 
        connect,
        resource, 
        create any context
to
        vpd_test_data_owner
;

grant
        execute on dbms_rls
to
        vpd_test_data_owner
;
 
 
create table vpd_test_data_owner.person (
        person_id       number, 
        surname         varchar2(30), 
        unit_id         number
);

insert into  vpd_test_data_owner.person values (-1, 'One',  -1);
insert into  vpd_test_data_owner.person values (-2, 'Two',  -2);
insert into  vpd_test_data_owner.person values (-3, 'Three',-3);
insert into  vpd_test_data_owner.person values (-4, 'Four', -4);
insert into  vpd_test_data_owner.person values (-5, 'Five', -5);

commit;

create user vpd_test_function_owner identified by Password_1234
        default tablespace test_8k_assm 
        temporary tablespace temp 
        quota unlimited on test_8k_assm
;
 
grant 
        connect, 
        resource
to 
        vpd_test_function_owner
;
 
prompt  ============================================
prompt  Create a packaged function to set a context
prompt  that we will use in a VPD security predicate
prompt  ============================================

create or replace package vpd_test_function_owner.context_api_pkg AS

procedure set_parameter(
        p_name  in  varchar2,
        p_value in  varchar2
);

end context_api_pkg;
/
 
create or replace package body vpd_test_function_owner.context_api_pkg IS
 
procedure set_parameter (
        p_name  in  varchar2,
        p_value in  varchar2
) is
begin
        dbms_session.set_context('my_vpd_context', p_name, p_value);
end set_parameter;

end context_api_pkg;
/

prompt  ======================================================
prompt  Allow public to set the context value.  (Not sensible)
prompt  ======================================================

grant execute on vpd_test_function_owner.context_api_pkg to public;

prompt  ===============================================================
prompt  Create a context that can only be set by our packaged procedure
prompt  ===============================================================

create or replace context my_vpd_context 
        using vpd_test_function_owner.context_api_pkg
;

prompt  =====================================================
prompt  Create a security function that generates a predicate
prompt  based on our context, then create a policy to connect
prompt  the function to the test table for select statements.
prompt  =====================================================
 
create or replace function vpd_test_function_owner.test_vpd_function (
    p_schema  in varchar2 default null
  , p_object  in varchar2 default null
)
return varchar2
as
    lv_unit_id number := nvl(sys_context('my_vpd_context','unit_id'), -1);
begin
    return 'unit_id = ' || lv_unit_id;
end test_vpd_function;
/

begin
      dbms_rls.add_policy (
               object_schema    => 'vpd_test_data_owner'
             , object_name      => 'person'
             , policy_name      => 'test_vpd_policy'
             , function_schema  => 'vpd_test_function_owner'
             , policy_function  => 'test_vpd_function'
             , statement_types  => 'select'
      );
end;
/


There are several quick and dirty bits to the script – you shouldn’t be using the connect and resource roles, for example; they exist only for backwards compatibility and don’t even manage that very well any more. Any grants made should be carefully chosen to be the minimum necessary to achieve the required functionality, and you should be defining roles of your own rather than using pre-existing ones.

Generally you don’t expect to set up a security policy that stops the owner of the data from seeing all the data – and I’ve left the policy to default to dynamic, which means the function will execute on every parse and execute of a statement accessing the table (and that’s something to avoid if you can). For convenience I’ve also allowed the owner of the data to execute the function that changes the context that is used by the predicate function – and you don’t really want to allow anyone who is constrained by a security policy to be able to modify their own access rights like this.

Since the code allows a deliberately lax setup on VPD you could at this point do something like the following to check that VPD is actually working before moving on to test the effect of FDA:

connect vpd_test_data_owner/Password_1234
select * from person;

execute vpd_test_function_owner.context_api_pkg.set_parameter('unit_id',-2)
select * from person;

The first execution of the query should show you only the row where unit_id = -1 as “unit_id = -1” is the default return value from the security function. The second execution should return only the row where unit_id = -2 as the call to set_parameter() changes the context value so that when the security function re-executes it generates a new security predicate “unit_id = -2”. (It’s worth noting that one of the options for security policies is to make them context-dependent so that they re-execute only when the relevant context is changed – but in this case the policy defaults to “re-execute the function on every parse and execute”.)  [NOTE: for some clues on the possible performance impact of badly defined VPD, check the comments made in response to this blog note]
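As a sketch (not something I've tested as part of this model), a context-sensitive variant of the policy registration would simply add the policy_type parameter – you'd need to drop the existing policy first:

begin
        dbms_rls.drop_policy(
                object_schema   => 'vpd_test_data_owner',
                object_name     => 'person',
                policy_name     => 'test_vpd_policy'
        );

        dbms_rls.add_policy(
                object_schema    => 'vpd_test_data_owner',
                object_name      => 'person',
                policy_name      => 'test_vpd_policy',
                function_schema  => 'vpd_test_function_owner',
                policy_function  => 'test_vpd_function',
                statement_types  => 'select',
                policy_type      => dbms_rls.context_sensitive
        );
end;
/

With this variant the security function is re-executed only when the my_vpd_context context changes, rather than on every parse and execute.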

Once you’re satisfied that the security policy is working correctly you can move on to the second feature – flashback data archive. Logging on as SYS once again, execute the following code – which, amongst other things, creates a new tablespace. You’ll notice that I’ve got three lines in the “create tablespace” statement naming a datafile (though one of them doesn’t actually supply a name). The names (or absence thereof) correspond to the default naming conventions I have for my sandbox 11g, 12c, and 19c instances in that order. You will want to adjust according to your file-naming conventions.


prompt  ============================
prompt  Setting up Flashback Archive
prompt  ============================

create tablespace fda_ts 
        datafile        
--              no name needed if OMF
--              '/u01/app/oracle/oradata/orcl12c/orcl/fda_ts.dbf'
--              '/u02/data/OR19/orclpdb/fda_ts.dbf'
        size 1m autoextend on next 1m
;

alter user vpd_test_data_owner quota unlimited on fda_ts;

create flashback archive default fda_1year tablespace fda_ts
quota 1g retention 1 year;
 
grant flashback archive on fda_1year to vpd_test_data_owner;
grant flashback archive administer to vpd_test_data_owner;
grant execute on dbms_flashback_archive to vpd_test_data_owner;
 
prompt  Sleeping for 1 minute before adding table to flashback archive
execute dbms_lock.sleep(60);
alter table vpd_test_data_owner.person flashback archive fda_1year;

prompt  Sleeping for 1 minute before updating the date
execute dbms_lock.sleep(60);
update vpd_test_data_owner.person set surname = upper(surname);

commit;

prompt  Sleeping for 5 minutes to give FDA a chance to do its thing.
execute dbms_lock.sleep(300);
alter system flush shared_pool;

prompt  ==================================================
prompt  Now connect to the data owner schema and run the 
prompt  original query then a couple of flashback queries, 
prompt  pulling their plans from memory
prompt  ==================================================

connect vpd_test_data_owner/Password_1234

set linesize 120
set pagesize 50
set trimspool on
set serveroutput off

spool vpd_fda_bug.lst
-- set autotrace on explain

select * from vpd_test_data_owner.person;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-1/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-2/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-3/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-4/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-5/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-6/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-7/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-8/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-9/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-10/1440; 
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-15/1440; 
set autotrace off
spool off 

I’ve created a tablespace that I’m going to reserve for the flashback archive and given my data owner a quota on that tablespace; then I’ve created a flashback archive in that tablespace and granted various privileges relating to flashback archive to my data owner.
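If you want to double-check what that script has just created, a couple of quick queries against the data dictionary (run as SYS) will confirm the archive and its tablespace quota:

select  flashback_archive_name, retention_in_days, status
from    dba_flashback_archive;

select  flashback_archive_name, tablespace_name, quota_in_mb
from    dba_flashback_archive_ts;
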

The next few lines of code include a couple of calls to dbms_lock.sleep() because I want to avoid the risk of getting Oracle error “ORA-01466: unable to read data – table definition has changed”; all I’ve done otherwise is enable flashback archiving on the person table and then made a little change that will eventually be recorded as part of the archive.

I’ve then introduced a 5 minute wait as it seems to take about 5 minutes before the flashback process takes any action to capture the original table data and copy any related undo; but after that 5 minutes is up I’ve queried the person table directly (which should show you only the row where unit_id = -1), then gradually gone backwards in time re-querying the data.

You should see the same result being produced for a few minutes, then a version of the “pre-update” data (upper case ‘ONE’ changing to mixed case ‘One’), and then you will (I hope) see the entire original data set appearing and finally you should see Oracle raising error “ORA-01466: unable to read data – table definition has changed” when your “as of timestamp” goes back beyond the moment you created the archive. (Except that that doesn’t happen with 11.2.0.4, which manages to report the data as if it had existed long before you created it).
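As a cross-check you could also run a flashback “versions” query over the same window – here's a sketch (keep the start of the window after the moment you enabled archiving to avoid the ORA-01466); since it is rewritten against the same archive tables it's worth checking whether it honours the security predicate any better than the “as of” queries do:

select  versions_startscn, versions_endscn, versions_operation,
        person_id, surname, unit_id
from    vpd_test_data_owner.person
        versions between timestamp (systimestamp - interval '10' minute) and systimestamp
order by
        person_id, versions_startscn
;
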

I’ve commented out the “set autotrace on explain” in the above, but if you leave it in, or introduce it for one of the queries, you’ll see what’s going on that allows flashback data archive to show you data that should have been hidden by the security predicate. Here’s the execution plan for one run:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                     |     2 |    86 |    17  (12)| 00:00:01 |       |       |
|   1 |  VIEW                     |                     |     2 |    86 |    17  (12)| 00:00:01 |       |       |
|   2 |   UNION-ALL               |                     |       |       |            |          |       |       |
|*  3 |    FILTER                 |                     |       |       |            |          |       |       |
|   4 |     PARTITION RANGE SINGLE|                     |     1 |    71 |     7   (0)| 00:00:01 |   KEY |     1 |
|*  5 |      TABLE ACCESS FULL    | SYS_FBA_HIST_353151 |     1 |    71 |     7   (0)| 00:00:01 |   KEY |     1 |
|*  6 |    FILTER                 |                     |       |       |            |          |       |       |
|   7 |     MERGE JOIN OUTER      |                     |     1 |  2083 |    10  (20)| 00:00:01 |       |       |
|   8 |      SORT JOIN            |                     |     1 |    55 |     7  (15)| 00:00:01 |       |       |
|*  9 |       TABLE ACCESS FULL   | PERSON              |     1 |    55 |     6   (0)| 00:00:01 |       |       |
|* 10 |      SORT JOIN            |                     |     5 | 10140 |     3  (34)| 00:00:01 |       |       |
|* 11 |       TABLE ACCESS FULL   | SYS_FBA_TCRV_353151 |     5 | 10140 |     2   (0)| 00:00:01 |       |       |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)<12670390363943)
   5 - filter("ENDSCN"<=12670390363943 AND ("OPERATION" IS NULL OR "OPERATION"<>'D') AND
              "ENDSCN">"TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111) AND ("STARTSCN" IS
              NULL OR "STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)))
   6 - filter("STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111) OR
              "STARTSCN" IS NULL)
   9 - filter("UNIT_ID"=(-1) AND ("VERSIONS_OPERATION" IS NULL OR "VERSIONS_OPERATION"<>'D')
             AND ("VERSIONS_STARTSCN" IS NULL OR "VERSIONS_STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111))
             AND ("VERSIONS_ENDSCN" IS NULL OR "VERSIONS_ENDSCN">"TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)))
  10 - access("RID"(+)=ROWIDTOCHAR("T".ROWID))
       filter("RID"(+)=ROWIDTOCHAR("T".ROWID))
  11 - filter(("ENDSCN"(+) IS NULL OR "ENDSCN"(+)>12670390363943) AND ("STARTSCN"(+) IS NULL OR
              "STARTSCN"(+)<12670390363943))

Note
-----
   - dynamic sampling used for this statement (level=2)

Notice that the predicate “unit_id = -1” appears on the full table scan of person at operation 9 – that’s Oracle applying the security predicate to the person table. But the flashback code has replaced the person table with a union all of (some partitions of) the SYS_FBA_HIST_353151 table and a join between the person table and the SYS_FBA_TCRV_353151 table. And the code path that attaches the security predicate fails to attach it to the history table.
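If you want to see which history tables Oracle has generated for your copy of the person table (the numeric suffix comes from the object_id of the base table, so it won't be 353151 on your system), the data dictionary will tell you:

select  owner_name, table_name, flashback_archive_name, archive_table_name, status
from    dba_flashback_archive_tables
where   owner_name = 'VPD_TEST_DATA_OWNER'
;
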

tl;dr

VPD (virtual private database) does not seem to be aware of the query rewrite that takes place if a table has an associated FDA (flashback data archive), so a flashback query may report rows from the “history” table that should have been blocked by the VPD security policy.

Lagniappe

There is another little problem with FDA that might affect you if you try to optimize flashback queries by creating SQL Plan Baselines. If you create a baseline on a test system (that isn’t a backup copy of the production system) and use the export/import facility to move the baseline to production then the baseline won’t work because the sys_fba_hist_nnnnn and sys_fba_tcrv_nnnnn table names are constructed from the object_id of the base table – which means the archive table names (and associated baseline hints) in the test system are probably going to be different from the names in the production system.
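A quick way to see the dependency is to check the object_id of the base table and compare it with the numeric suffix of the generated archive table names – the two numbers are very unlikely to match across different databases:

select  object_id
from    dba_objects
where   owner       = 'VPD_TEST_DATA_OWNER'
and     object_name = 'PERSON'
and     object_type = 'TABLE'
;
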

Housekeeping

To clean up the database after you’ve done all this testing, run the following script (modified to match any changes you’ve made in the test) after logging on as SYS:


alter table vpd_test_data_owner.person no flashback archive;

drop flashback archive fda_1year;

drop USER VPD_TEST_FUNCTION_OWNER cascade;
drop USER VPD_TEST_DATA_OWNER cascade;

drop tablespace fda_ts including contents and datafiles;

OT Footnote

I’ve decided this year to donate to a charity that works to reduce infant mortality rates in Nepal with a two-pronged attack on malnutrition: feeding starving children, then educating their parents on how to make the best use of local resources to grow the most appropriate crops and use the best preparation methods to produce nourishing meals in the future. (They also run other projects to improve the lives of the young people in Nepal – here’s a link to their home page, and a direct link to a 4 minute video that gives you a quick insight into what they do and how they do it.)

If you’re thinking of making any small donations to charity over the next couple of weeks, please think of this one. To make your donation more valuable I’ve set up a justgiving page and will match any donations made there before 5th Jan 2020, up to a total of £1,000.

Identity wallets used by Oracle GoldenGate Microservices

DBASolved - Tue, 2019-12-24 13:54

Wallets, Wallets, and more wallets! … Wallets are used for a few different things within Oracle GoldenGate Microservices; identifying what wallet is being used by a service is simple if you know where to look.   The biggest usage of wallets within Oracle GoldenGate Microservices is to help secure the communication between the Distribution Service […]

The post Identity wallets used by Oracle GoldenGate Microservices appeared first on DBASolved.

Categories: DBA Blogs

Publishing Keras Model API with TensorFlow Serving

Andrejus Baranovski - Tue, 2019-12-24 11:40
Building an ML model is a crucial task. Running an ML model in production is no less complex or important. I had a post in the past about serving an ML model through a Flask REST API — Publishing Machine Learning API with Python Flask. While this approach works, it certainly lacks some important points:

  • Model versioning 
  • Request batching 
  • Multithreading 

TensorFlow comes with a set of tools to help you run ML models in production. One of these tools is TensorFlow Serving. There is an excellent tutorial that describes how to configure and run it — TensorFlow Serving with Docker. I will follow the same steps in my example.

Read more in my Towards Data Science post.

One Host, multiple PostgreSQL Clusters – Monitoring using EDB PEM

Yann Neuhaus - Tue, 2019-12-24 07:21

At a customer we implemented the EDB Postgres Enterprise Manager. As they run multiple PostgreSQL clusters on one server, they naturally wanted to monitor all of them using PEM. This blog should give you a short guide on how to add databases to PEM.

Starting position

– PEM Server installed, up and running
– PEM Agent installed on the hosts, that should be monitored
– Hosts already registered in the PEM
– pg_hba.conf of all databases prepared to accept connections from PEM

host    postgres        enterprisedb    192.168.22.53/32           trust
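One small reminder: after editing pg_hba.conf the change only takes effect once the configuration has been reloaded, for example (as a superuser):

select pg_reload_conf();
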
First impression

As you can see, the hosts where the agent is installed and registered with PEM are visible in PEM, but the clusters are not automatically detected.

Add a cluster

Let’s add a PostgreSQL cluster.


First of all, make sure to choose a self-documenting name. By default this name is not unique, so you can add many servers with the same name – very confusing.

In the connection tab, enter the hostname or IP address and the other connection settings for the PostgreSQL cluster.

In case you run EFM High Availability, you can configure this on the Tab “Advanced”


Add the second cluster

Use the same steps as in step “Add a cluster”. Just take another name and type the correct port.

That’s it – really simple and straightforward. And all clusters and their databases are shown in a nice manner.

The post One Host, multiple PostgreSQL Clusters – Monitoring using EDB PEM appeared first on Blog dbi services.

Merry Christmas and Happy New Year !! (“Heroes”)

Richard Foote - Mon, 2019-12-23 16:01
I would like to take this opportunity to wish all my readers a very Merry Christmas and a most happy, peaceful and prosperous New Year. My gift this year is not David Bowie and Bing Crosby doing their famous Christmas duet but a performance by Bowie of his classic “Heroes” as featured on the same […]
Categories: DBA Blogs

Oracle VM Server: Pool is locked forever

Dietrich Schroff - Mon, 2019-12-23 15:27
If you are using Oracle VM Server it is possible that you are hitting the following problem:

After changing something on a server pool the pool gets locked and the lock stays there:
Even a reboot of your OVMM (oracle vm manager) does not remove the lock.

The solution can be very easy:
Just edit the pool and add (or change) the description.

This edit triggers an update and removes the lock. I think any other change would do the job, but this is a minimal change which solves the problem...

Hopefully there will be an update, so this problem will disappear in future versions.


I See Systems

Floyd Teter - Mon, 2019-12-23 14:46
Working in enterprise software has changed my perspective on the world around me... I see everything through more of a systemic lens.

My wife recently had knee replacement surgery.  World-renowned surgeon performing the surgery.  But being done at his 2nd choice of hospitals due to limitations of our health insurance policy.  Good hospital, but not his hospital of choice.  Like everything else involving health care in the U.S., it's a money thing.

The surgery was originally scheduled for Friday, December 13th but postponed until Monday, December 16th.  On the 16th, we were requested to arrive at the hospital at 11 am for a planned surgery of 1 pm.  But the surgery did not begin until 6 pm.  At this point, I'm thinking that these guys have no idea about linear programming and managing their supply chain.

While my wife was in surgery, I was directed to a "Surgery Waiting Room".  The room allegedly had an attendant and a television screen for monitoring patient status during surgery.  When I entered the waiting room, I immediately noticed the attendant's desk was unoccupied.  Seems that the attendant's shift ends at 5 pm - those waiting on surgeries beyond 5 pm are asked to answer the phones on a volunteer basis.  The idea being that surgical staff will call the waiting room to update family and friends when surgery is completed, and to alert those waiting that the surgeon is on the way up with a more detailed debrief.  So, being the volunteering type, I sit at the attendant's desk and start answering the phone.  After about three incoming calls, I notice a trend - no friends or family in the room to receive the updates.  And I'm also noticing surgeons coming into the room looking for friends and family who are not there.  I put the phone to use and make a few inquiries within the hospital - and discover that there are three surgery waiting rooms in this tower alone (the hospital in question has seven towers).  Neither the surgical staff nor the surgeons have any idea which waiting room contains friends and family for any particular patient, as they have no waiting room check-in function.  So they're literally hunting for the pea under any one of three shells as they go from waiting room to waiting room searching for friends and family of a particular patient.  Low-key chaos.

Another observation from the surgical waiting room:  the status screen showing patient status is listed by case number rather than patient name.  I did not have a case number for my wife's surgery.  And a quick survey of the folks in the waiting room indicated that none of them had a case number either.  So the status screen was useless to those of us waiting on surgical outcomes.  During the next few incoming calls from surgical staff, I asked for case numbers.  But the surgical staff had no idea of the case numbers either.  What in the world?

Fortunately, the surgery went well, the outcome was positive, and Marlene is now home working through the long recovery process.

After rolling the surgical experience over in my head, I decided to view this experience as a systemic failure for this particular hospital.  Lots of good people with lots of energy all trying to do the right thing, but nobody has tied it all together.  So how might I suggest improving things from a system point of view?

First, I'd establish personas:  the patient, the family member/friend of the patient, the surgical staff member, the attendant, the pre-op nurse, the surgeon, and so on.  Then I'd walk through the entire process, from the time a patient walks in the door until they leave post-op recovery.  And I'd do it for each persona.  Essentially build a systemic "As Is" flow.  Then design how we want things to work, review the changes with the people doing the work, then implement.

Sounds a bit like an enterprise applications implementation project, doesn't it?  Well, enterprise applications are all about implementing systems.  Like I said, working in enterprise software has changed my perspective of the world around me...

How about you?  Ever have a similar experience?  Healthcare?  The DMV? Retail returns?  Sound off in the comments about your experience and how you'd approach a fix.

To compare two same tables from different schema without primary key and not same number of columns

Tom Kyte - Fri, 2019-12-20 08:55
We have table 'CUSTOMER' in two different schema's. Both are not having any primary key and the column numbers in both table do not match(i.e schema1 table can have 97 column other schema table has 101).the column names are same which are present in ...
Categories: DBA Blogs

PL/SQL code in packages on DB vs on APEX sites - how it affect performance?

Tom Kyte - Fri, 2019-12-20 08:55
Dear Tom, As you develop in APEX you can move all your PL/SQL code into packages or you can put all on APEX. How this affect performance? I know that it is better to move code to packages to make quick changes and have more control over code. ...
Categories: DBA Blogs

PLSQL

Tom Kyte - Fri, 2019-12-20 08:55
Requ:- If table do not have records then need to be inserted if have already the END DATE column only update with an no.of years based on Terms( For example If Term year is 10, then 10*12=120 Months, means..10 years needs to added to the END DATE col...
Categories: DBA Blogs

Merge Delete

Tom Kyte - Fri, 2019-12-20 08:55
How do I delete with a merge statement? I want to update MERGE_DELETE_TEST2 to match MERGE_DELETE_TEST1 (think ETL). I cannot get merge delete to remove the row that exists in MERGE_DELETE_TEST2 that does not exist in MERGE_DELETE_TEST1.
Categories: DBA Blogs

Import Production Dump to new schema

Tom Kyte - Fri, 2019-12-20 08:55
Hi, We have a live project which has only 1 schema. We are upgrading some features and so we are going to release it as version 2.0 . My question is that the production dump which is only one schema has to be to imported to 4 different schema o...
Categories: DBA Blogs

Check Constraints and Explain Plan Filter Predicates

Tom Kyte - Fri, 2019-12-20 08:55
Why does the Oracle SQL Optimizer include a filter predicate for a Check Constraint when generating an execution plan for a SELECT statement? If the constraint is valid (according to DBA_CONSTRAINTS), then the table rows are all compliant with the co...
Categories: DBA Blogs

Dynamic filters and arriving bind variables for them.

Tom Kyte - Fri, 2019-12-20 08:55
Team, we have an application, that used to search using any kind of filters on any colums - something like below. the procedure is used to return the resultset to the application, based on the WHERE clause being passed as input. when running...
Categories: DBA Blogs

Error while relocating database service

Tom Kyte - Fri, 2019-12-20 08:55
Hello, Ask Tom Team. <b>My environment: </b> I have a database running on 2-node RAC. I created a database service with TAF and transaction guard srvctl add service -db dbprod -service dbprod1_xa -preferred dbprod1 -available dbprod2 -fail...
Categories: DBA Blogs

Purge Cursor

Jonathan Lewis - Fri, 2019-12-20 07:54

This is a note I first drafted about 5 years ago (the date stamp says March 2014) and rediscovered a few days ago when the question came up on a Twitter thread.

How do you purge a single SQL statement from the library cache without having to execute “alter system flush shared_pool”?

The answer is in the package dbms_shared_pool, specifically the purge() procedure. This package changes significantly in the upgrade from 11.2 (manual page here) to 12.1 (manual page here) so it’s best to check the reference manual for the version you’re using in case it changes again.  In 11.2 (and earlier) there’s just one option for the purge() procedure but in 12.1 the package gets 3 overloaded versions of the procedure – and one of the overloads gets an extra parameter (edition) by 19c.

Side note: In very early versions of Oracle the package wasn’t installed automatically, so you may have to execute $ORACLE_HOME/rdbms/admin/dbmspool.sql (possibly followed by prvtpool.plb) to install it. The facility to purge a cursor appeared in 11.1 and was then back-ported to 10.2.0.4. The manual pages for the procedure are, however, not up to date and don’t list all the possible flags that tell the procedure what type of object it is supposed to be purging.
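If you do find the package missing, installing it from SYS is just a couple of script calls (the “?” is the SQL*Plus shorthand for $ORACLE_HOME):

@?/rdbms/admin/dbmspool.sql
@?/rdbms/admin/prvtpool.plb
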

The only use I’ve made of the purge() procedure is to purge a cursor from memory, though you can purge other types of object if you want to. Technically you could flush the execution plan from memory without eliminating the cursor, though you would still have to re-optimize the statement so there may be no benefit (or very little benefit) in doing so.

To demonstrate the mechanism I’m going to use three sessions – two to run a query with different optimizer environments, then a third to find and purge the cursors. Here’s the code for the first two sessions:


rem
rem     Script: purge_cursor.sql
rem     Dated:  March 2014
rem     Author: Jonathan Lewis
rem
rem     Last tested
rem             12.2.0.1
rem             11.2.0.4
rem             10.2.0.4  -- with variations to get  two child cursors
rem

/*    To be run by session 1    */

create table t1 as select * from all_objects where rownum <= 10000;
create index t1_i1 on t1(object_id) invisible;

alter session set optimizer_use_invisible_indexes = true;

set serveroutput off
select object_name from t1 where object_id = 250;
select * from table(dbms_xplan.display_cursor);

/*    To be run by session 2    */

set serveroutput off
select object_name from t1 where object_id = 250;
select * from table(dbms_xplan.display_cursor);

You’ll see that I’ve created an invisible index on the table then allowed one session to use invisible indexes while the other session isn’t allowed to. As a consequence session 1 will produce an execution plan that shows an index range scan for child cursor 0, and session 2 will produce an execution plan that shows a table scan for child cursor 1. I won’t show the output from these two sessions, but my calls to dbms_xplan reported the sql_id as ‘ab08hg3s62rpq’ and I’ve used that as the value for a substitution variable in the code to be run by session 3.
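Your sql_id will be different if you change the query text at all, of course; a quick way to find it (and confirm that you really do have two child cursors with different plans) is something like:

select  sql_id, child_number, plan_hash_value
from    v$sql
where   sql_text like 'select object_name from t1 where object_id = 250%'
order by
        sql_id, child_number
;
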

The final demo script is a little messy because it’s going to attempt to report all the execution plans for that sql_id three times, but it will also write and execute a script to call the purge() procedure twice – once to eliminate the plans but leave the cursors in place, then a second time to purge the cursors.

define m_sql_id = 'ab08hg3s62rpq'

spool purge_cursor

prompt  =============================
prompt  Before purge- 2 child cursors
prompt  =============================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

spool off

set verify off
set feedback off
set heading off

spool temp_purge.sql

select 
        q'{ execute dbms_shared_pool.purge('}' ||
                address || ',' || hash_value || 
                q'{', 'C', 64) }'
from 
        V$sqlarea 
where 
        sql_id = '&m_sql_id'
;

spool off

@temp_purge

set heading on
set feedback on
set verify on

spool purge_cursor append

prompt  =======================================
prompt  After heap 6 purge- 2 cursors, no plans
prompt  =======================================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

spool off

set verify off
set feedback off
set heading off

spool temp_purge.sql

select 
        q'{ execute dbms_shared_pool.purge('}' ||
                address || ',' || hash_value || 
                q'{', 'C') }'
from 
        V$sqlarea 
where 
        sql_id = '&m_sql_id'
;

spool off

@temp_purge

set heading on
set verify on
set feedback on

spool purge_cursor append

prompt  =================================
prompt  After complete purge - no cursors
prompt  =================================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

On the first pass the select from v$sqlarea produces a script with the following line:

 execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C', 64)

On the second pass the select produces a script with the following line:

 execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C')

The effect of the first call is to purge heap 6 (power(2,6) = 64) for any child cursors that exist for the sql_id. The effect of the second call is to purge the entire set of child cursors. The purge_cursor.lst file ends up with the following results:


=============================
Before purge- 2 child cursors
=============================

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  ab08hg3s62rpq, child number 0
-------------------------------------
select object_name from t1 where object_id = 250

Plan hash value: 3320414027

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    25 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=250)

SQL_ID  ab08hg3s62rpq, child number 1
-------------------------------------
select object_name from t1 where object_id = 250

Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    27 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    25 |    27   (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=250)


37 rows selected.

=======================================
After heap 6 purge- 2 cursors, no plans
=======================================
old   1: select * from table(dbms_xplan.display_cursor('&m_sql_id', null))
new   1: select * from table(dbms_xplan.display_cursor('ab08hg3s62rpq', null))

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  ab08hg3s62rpq, child number 0

select object_name from t1 where object_id = 250

NOTE: cannot fetch plan for SQL_ID: ab08hg3s62rpq, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

SQL_ID  ab08hg3s62rpq, child number 1

select object_name from t1 where object_id = 250

NOTE: cannot fetch plan for SQL_ID: ab08hg3s62rpq, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)


16 rows selected.

=================================
After complete purge - no cursors
=================================
old   1: select * from table(dbms_xplan.display_cursor('&m_sql_id', null))
new   1: select * from table(dbms_xplan.display_cursor('ab08hg3s62rpq', null))

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID: ab08hg3s62rpq cannot be found


2 rows selected.

As you can see we start with two child cursors and two different execution plans, then the two child cursors “lose” their execution plans, and finally the cursors disappear completely. (And the parent disappears at the same time.)

Presenting the procedure with a little more formality – the formal declaration of the procedure that I’ve been using reads:


procedure purge(
        name    varchar2, 
        flag    char    DEFAULT 'P', 
        heaps   number  DEFAULT 1
)

To purge a cursor we set the flag to ‘C’ (or ‘c’) and the name to ‘{address},{hash_value}’ (make sure you don’t get extra spaces in that expression). If we want to purge just the execution plan we need to target heap 6 which means setting the heaps value to power(2,6). Different types of object use different (sub)heaps so if you want to delete multiple heaps you need to add together the appropriate power(2,N); the default value for heaps is 1, which equates to heap 0, which means the whole object.
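Putting those pieces together, here's a minimal sketch that does the whole job for a known sql_id from a single anonymous block (run by a user with execute on dbms_shared_pool and access to v$sqlarea) rather than spooling and running a script:

declare
        m_name  varchar2(64);
begin
        select  address || ',' || hash_value
        into    m_name
        from    v$sqlarea
        where   sql_id = 'ab08hg3s62rpq';

--      dbms_shared_pool.purge(m_name, 'C', 64);        -- heap 6 only: drop the plans, keep the cursors
        dbms_shared_pool.purge(m_name, 'C');            -- heap 0 (default): purge the parent and all its children
end;
/
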

For other types of object there is a reference list (under the keep() procedure) that expands on the manuals to give us the following possible values of flag:


  --        Value        Kind of Object to {keep}
  --        -----        ------------------------
  --          P          package/procedure/function
  --          Q          sequence
  --          R          trigger
  --          T          type
  --          JS         java source
  --          JC         java class
  --          JR         java resource
  --          JD         java shared data
  --          C          cursor

You will note, of course, that I’ve used v$sqlarea rather than v$sql when I was searching for the address and hash value. When you purge a cursor you purge every child cursor – you can’t identify an individual child. Even if you query v$sql instead of v$sqlarea, and use a single child_address instead of the (parent) address, the purge() procedure will still purge every child for the parent.

Warning

There is a comment in my original notes about a bug that was fixed by 11.2.0.4 – if any of the child cursors is currently executing then the purge() procedure will go into a loop waiting on “cursor: pin X”, timing out every 1/100 second. Unfortunately my notes didn’t make any explicit comment about what impact this had on any session trying to parse or execute any of the children for that parent cursor – but I suspect that you’d end up with a nasty race condition and an apparent hang.

OT Footnote

I’ve decided this year to donate to a charity that works to reduce infant mortality rates in Nepal with a two-pronged attack on malnutrition: feeding starving children, then educating their parents on how to make the best use of local resources to grow the most appropriate crops and use the best preparation methods to produce nourishing meals in the future. (They also run other projects to improve the lives of the young people in Nepal – here’s a link to their home page, and a direct link to a 4 minute video that gives you a quick insight into what they do and how they do it.)

If you’re thinking of making any small donations to charity over the next few weeks, please think of this one. To make your donation more valuable I’ve set up a justgiving page and will match any donations made there, up to a total of £1,000.

ASC 842 :Audit Procedures for Leases

OracleApps Epicenter - Fri, 2019-12-20 03:59
The new Financial Accounting Standards Board (FASB) and international financial reporting standards (IFRS) lease accounting standards (ASC 842 and IFRS 16) will take effect in 2020 for private companies. The standards bring many leases onto the balance sheet and could significantly impact a business’ financial statements. Here is great post on Audit Procedures for Leases […]
Categories: APPS Blogs

Microsoft and Oracle Expand Interoperability Partnership to Canada

Oracle Press Releases - Thu, 2019-12-19 07:00
Blog
Microsoft and Oracle Expand Interoperability Partnership to Canada

By Vinay Kumar, vice president, product management, Oracle Cloud Infrastructure—Dec 19, 2019

Today, we are announcing the continued expansion of our cloud interoperability partnership with Microsoft to help joint customers worldwide run their mission-critical workloads across Oracle Cloud and Microsoft Azure. Our new interconnect location means enterprises can now build workloads that seamlessly interoperate between Microsoft and Oracle cloud regions in Canada. This interconnect builds on an existing partnership announced in June of 2019.

The partnership has received a huge amount of interest, as 80 percent of enterprises use a combination of Microsoft and Oracle software to run their businesses. As cloud computing becomes ubiquitous, and businesses rely on multiple cloud providers, the partnership makes managing companies’ most important cloud workloads significantly easier. These workloads include financial planning, inventory, sales applications – and their underlying databases.

The expansion will give more customers direct, fast and highly reliable network connectivity between Microsoft Azure and Oracle Cloud, while providing first-class customer service and support that enterprises have come to expect from the two companies. This unique multi-cloud solution delivers the performance, easy integration, rigorous service level agreements, and collaborative enterprise support that they need to simplify their operations. 

Simply put, the Oracle-Microsoft partnership means cloud services run by the two providers will interoperate as if they were part of a single cloud, making it easier for customers to run their mission-critical workloads across the two clouds.

“The global demand for running applications and databases in multi-cloud environments continues to accelerate,” said Clay Magouyrk, senior vice president of engineering, Oracle Cloud Infrastructure.  “With the new interconnect, our Canadian customers can now take advantage of a nearly seamless cloud integration between the world's largest enterprise cloud providers, Microsoft and Oracle.”

More Regions to Reach More Customers

The two companies are putting customers first by enabling them to run full-stack applications side-by-side across clouds, or one part of a workload within Azure and another part of the same workload within Oracle Cloud. For example, using the interconnect makes it possible to connect Azure services like analytics and AI to Oracle Cloud services like Autonomous Database. Together, Azure and Oracle Cloud offer customers a one-stop shop for all the cloud services and applications they need to run their entire business.

From a technical perspective, the interconnect means less latency or delay, which enables better data transfer and application interaction between clouds. It also supports a broader spectrum of workloads, using resources available on both sides. Accenture recently performed testing on the performance of the interconnect and confirmed that the solution offers customers low latency and high ease of use.

Microsoft and Oracle plan to make the direct interconnect available in additional regions, including on the US West Coast, in a US Government specific region, in Asia, and in the European Union.

Earlier this year, Oracle and Microsoft created an interconnect in Ashburn (North America), Azure US East, and in London (United Kingdom).

London March 2020: “Oracle Indexing Internals and Best Practices” and “Oracle Performance Diagnostics and Tuning” Seminars

Richard Foote - Thu, 2019-12-19 02:06
Places are filling up, but there are still some available at both of my acclaimed seminars that I’ll be running in London, UK in March 2020. The dates and registration links are as follows: 23-24 March 2020: “Oracle Indexing Internals and Best Practices” seminar – Tickets and Registration Link 25-26 March 2020: “Oracle Performance Diagnostics and […]
Categories: DBA Blogs
