Feed aggregator
Restoring a PDB from another CDB can increase your SCN
I start with a 19c CDB called "ORCLCDB" that contains a PDB called "ORCLPDB1".
SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20906515

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         READ WRITE
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL> alter pluggable database orclpdb1 close;

Pluggable database altered.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 24 09:39:55 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> backup pluggable database orclpdb1 tag for_migration;

Starting backup at 24-SEP-23
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00034 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf
channel ORA_DISK_1: starting piece 1 at 24-SEP-23
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
channel ORA_DISK_2: starting piece 1 at 24-SEP-23
channel ORA_DISK_1: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 24-SEP-23

Starting Control File and SPFILE Autobackup at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2023_09_24/o1_mf_s_1148377605_ljz585bw_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-SEP-23

RMAN> quit

Recovery Manager complete.
oracle19c>sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 24 09:48:04 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> alter pluggable database orclpdb1 unplug into '/opt/oracle/product/19c/dbhome_1/orclpdb1.xml';

Pluggable database altered.

SQL> !ls -l /opt/oracle/product/19c/dbhome_1/orclpdb1.xml
-rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:50 /opt/oracle/product/19c/dbhome_1/orclpdb1.xml

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>
Now I go to the target CDB, CDB2, and verify that there is no conflicting PDB. I also find that it has a lower SCN.
SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664227

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231

SQL> select * from v$pdbs
  2  /

    CON_ID       DBID    CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME
--------------------------------------------------------------------------------------------------------------------------------
OPEN_MODE  RES OPEN_TIME                                                                   CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY
---------- --- --------------------------------------------------------------------------- ---------- ---------- ---------- --------
SNAPSHOT_PARENT_CON_ID APP APP APP APPLICATION_ROOT_CON_ID APP PRO LOCAL_UNDO   UNDO_SCN UNDO_TIME CREATION_ DIAGNOSTICS_SIZE
---------------------- --- --- --- ----------------------- --- --- ---------- ---------- --------- --------- ----------------
 PDB_COUNT AUDIT_FILES_SIZE   MAX_SIZE MAX_DIAGNOSTICS_SIZE MAX_AUDIT_SIZE LAST_CHANGE TEM
---------- ---------------- ---------- -------------------- -------------- ----------- ---
TENANT_ID
------------------------------------------------------------------------------------------------------------------------------------
UPGRADE_LEVEL GUID_BASE64
------------- ------------------------------
2 2054948555 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY NO 24-SEP-23 09.26.31.678 AM +08:00 1997190 957349888 8192 ENABLED NO NO NO NO NO 1 280 04-MAY-19 0 0 0 0 0 0 COMMON USER NO 1 iBKSY7mfS73gUwEAAH973wA=

3 2205532678 2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP MOUNTED 17541716 999292928 8192 ENABLED NO NO NO NO NO 1 280 21-AUG-22 0 0 0 0 0 0 COMMON USER NO 1 5r2cc4OcEL7gVQoAJ1/INAA=
I then have the backups and the "closed" datafiles from the source CDB copied over to the target. (The CREATE PLUGGABLE DATABASE command will not use the RMAN backups, but rather the PDB Describe XML file and the "closed" datafiles.)
oracle19c>pwd
/tmp/From_Source
oracle19c>ls -l
total 1882384
-rw-r-----. 1 oracle oinstall 1146109952 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp
-rw-r-----. 1 oracle oinstall  758202368 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp
-rw-r-----. 1 oracle oinstall   23232512 Sep 24 09:55 o1_mf_s_1148377605_ljz585bw_.bkp
-rw-r--r--. 1 oracle oinstall      12583 Sep 24 09:59 orclpdb1.xml
oracle19c>
I now prepare to plug the PDB into the target CDB2. First I check for compatibility, and then I plug in the PDB with the new name "ORCLPDB1_NEW".
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231

SQL> set serveroutput on
SQL> declare
  2    compatible constant varchar2(3) :=
  3      case dbms_pdb.check_plug_compatibility(
  4             pdb_descr_file=>'/tmp/From_Source/orclpdb1.xml',
  5             pdb_name=>'ORCLPDB1')
  6      when true then 'YES'
  7      else 'NO'
  8      END;
  9  begin
 10    dbms_output.put_line(compatible);
 11  end;
 12  /
YES

PL/SQL procedure successfully completed.

SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /opt/oracle/oradata

SQL> create pluggable database orclpdb1_new
  2  using '/tmp/From_Source/orclpdb1.xml'
  3  copy;  -- copy will copy the datafiles to the location based on db_create_file_dest and the CDB database name

Pluggable database created.

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20910195

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP           MOUNTED
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1_NEW     READ WRITE

SQL>
The SCN of the target CDB2 has now been set to the much higher 20910195 (it was 20664231 before I "plugged in" ORCLPDB1 as ORCLPDB1_NEW). This is because it "read" the SCN from the headers of the datafiles that were plugged in. Back on the source ORCLCDB, I can now drop the original ORCLPDB1.
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20910076

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         MOUNTED
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL> drop pluggable database orclpdb1 including datafiles;

Pluggable database dropped.

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>
So, I have "migrated" ORCLPDB1 from ORCLCDB to CDB2 and, in the process, have found that the SCN in CDB2 jumped to a much higher value, because it "inherited" the higher Checkpoint SCN that was in the datafiles of ORCLPDB1.
SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20658011
         1          3              0           20658011
         1          4              0           20658011
         1          7              0           20658011
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20908595
         4         70              0           20908595
         4         71              0           20908595
         4         72              0           20908595
         4         73              0           20908595
         4         74              0           20908595
         4         75              0           20908595
         4         76              0           20908595

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> col name format a16
SQL> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME             OPEN_MODE
---------- ---------------- ----------
         2 PDB$SEED         READ ONLY
         3 PDBTMP           MOUNTED
         4 ORCLPDB1_NEW     READ WRITE

SQL> alter pluggable database pdbtmp open;

Pluggable database altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912888
         3         59              0           20912888
         3         60              0           20912888
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912967
         1          3              0           20912967
         1          4              0           20912967
         1          7              0           20912967
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912967
         3         59              0           20912967
         3         60              0           20912967
         4         69              0           20912967
         4         70              0           20912967
         4         71              0           20912967
         4         72              0           20912967
         4         73              0           20912967
         4         74              0           20912967
         4         75              0           20912967
         4         76              0           20912967

18 rows selected.

SQL> alter session set container=ORCLPDB1_NEW;

Session altered.

SQL> col name format a54
SQL> select name from v$datafile;

NAME
------------------------------------------------------
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_system_ljz7d02h_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_sysaux_ljz7d02l_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_undotbs1_ljz7d02m_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_users_ljz7d02o_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_ljz7d02p_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_ljz7d02r_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_ljz7d02s_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_ljz7d02t_.dbf

8 rows selected.

SQL>
CON_ID=1 is the Root, and its datafiles get the new SCN after a Checkpoint. Note that these datafiles still had the "older" SCN 20658011 before I issued the Checkpoint.
Oracle Forensics Response
Posted by Pete On 22/09/23 At 01:07 PM
Oracle Database 23c Generally Available – Sort Of
Oracle Cloud World is happening, which typically means lots of announcements. One of the welcome announcements was the release of Oracle 23c on OCI Oracle Base Database Service, so there is a production version generally available… Sort of… Why do I say “sort of”? OCI Oracle Base Database Service This is a single cloud service, … Continue reading "Oracle Database 23c Generally Available – Sort Of"
The post Oracle Database 23c Generally Available – Sort Of first appeared on The ORACLE-BASE Blog. It was first posted on September 21, 2023 at 10:53 am.
Database Vault without Database Vault
Posted by Pete On 18/09/23 At 01:00 PM
Ansible automates roadshow 2023
I attended the Ansible Automates roadshow in Geneva, a tour by Red Hat to promote automation, together with my colleague Arnaud Berbier. You’ll find his review of the journey here.
It was the third date of their tour. In case you’re interested in automation with Ansible, there are two more dates in Germany:
- Frankfurt on October 17
- Munich on October 19

It started with Loïc Avenel, EMEA Manager in field product management for Ansible. He explained his vision of automation with three key concepts:
- Think like software developers do in modern development
- Have an automation-first cultural mindset, and this starts with us
- Leverage and share individuals’ skills and experience, by filling the talent gap and upskilling your team members
Loïc put emphasis on collaboration and gave some numbers to back this up: without it, many companies fail to transform their IT, a figure that goes up to 70% according to a financial analysis from Banque Cantonale de Genève.
He then showed two common ways enterprises do this: some gently update their tasks to be automated (optimizing), while others totally change their way of doing things (transforming). Neither is bad, but doing both is better!
He used an “automation maturity curve” to explain his approach to a successful transformation. To sum up his thinking: an extra effort is required at the beginning, but in the end you benefit more, and even more so from the upcoming AI that everyone is talking about.
He assumes that many companies have already made the cultural shift to automate tasks, which he calls Day 0 / Day 1 and names “Infrastructure as Code”. But that is not really the next step for automation: he calls that Day 2, “Operations as Code”, which consists in:
- Standardize operation processes
- Go further in observability with remediations
As you may know, Red Hat is part of Big Blue IBM, and this Day 2 couldn’t come without something from them: the famous Watson, with their “watsonx Code Assistant” for generative-AI code for playbook deployment. This magic is called Ansible Lightspeed.
He also introduced Event-Driven Ansible, which I can say looks promising with all its plugins and integrations. It consists of observability and automatic remediation directly in a playbook, triggered by specific events.

The presentation ended with some words about Ansible Automation Platform (replacing Ansible Tower). It pushes the concept further with these extra features:
- Automation execution environments
- Automation mesh
- Ansible content collections
- Ansible content tools
- Red Hat Insights
- Automation services catalog
- Automation Hub
We then had a talk from Bertrand d’Hérouville, an Edge Specialist Solution Architect. He presented how to extend Ansible automation to edge devices, taking his own home automation as an example.
The last talk before lunch was from a Red Hat partner: Jean-François Pujol from Cisco explained how he used Ansible to manage network devices, another great success story!
Just before noon, it was time for a very nice lunch buffet. Thanks to Red Hat and its partners for this event.
During lunch, I was thinking about what we had seen in the morning. It was a lot of theory and not enough concrete usage or demos. Fortunately, we had one of the best slides of the day.

Demos will now start!!
Christian Jung from Red Hat started with a show of hands to see who in the room was familiar with playbooks. It was about one third of the room, which surprised me a bit. He began his demo by writing a basic playbook, and after a few minutes he showed us Lightspeed, a plugin for VS Code that analyzes what you type and helps you develop faster. Below is an example at line 7.

After that, we had another successful Ansible use case from Sylvain Chen. He explained his success story at SIX: how, in an OpenShift environment, he managed to develop operators and used Ansible Automation Platform to do it.
The last talk was with RegData, about the importance of protecting data on on-prem or cloud infrastructure, using many Ansible features like
- Generic roles
- variables management
- templating
This event was really great for learning more about the new features around Ansible, for example Lightspeed, Event-Driven Ansible, and the new Ansible Tower, aka Ansible Automation Platform.
I recommend joining the next events if you would like to learn more about Ansible.
Please find the external link to the Ansible Automates roadshow 2023 here.
Thanks again dbi services for this opportunity.
The article Ansible automates roadshow 2023 first appeared on the dbi Blog.
Oracle to the rescue of MGM with SQL Firewall?
It is true that it’s a somewhat daring title, and it’s unlikely that SQL firewall has changed anything about the attack that several casinos in Las Vegas suffered last week. However, the opportunity was too good to pass up on this topic.

As I write this blog, Las Vegas casinos are experiencing a series of cyberattacks. It turns out that, at the same time, Oracle CloudWorld is taking place in Las Vegas, and one of Oracle's flagship products, the database, offers a new feature in its 23c version called 'SQL Firewall.' So, the opportunity was too good not to write a short blog about this new functionality. I would like to emphasize here that this is only a very brief overview of all the features of DBMS_SQL_FIREWALL. Oracle provides extensive documentation on this package/feature, and you will find several blogs on the internet that go beyond the tests below (have a look at my conclusion).
How does SQL Firewall work?
The principle of the package is very simple and is illustrated through the image below, which has been taken from the Oracle documentation:

- First, we capture the standard activity of an application (Capture SQL Activities). Naturally, the capture must run for a sufficient amount of time to cover all specific scenarios.
- By examining this activity (Review), we ensure that there is nothing abnormal.
- We generate the list (Generate Allow-List) of allowed execution contexts. This list can be modified if necessary by adding or removing specific executions using the following package procedures: DBMS_SQL_FIREWALL.ADD_ALLOWED_CONTEXT, DBMS_SQL_FIREWALL.DELETE_ALLOWED_CONTEXT, and DBMS_SQL_FIREWALL.DELETE_ALLOWED_SQL.
- We enable the list (Enable Allow-List).
- We monitor (Monitor) the operation using the DBA_SQL_FIREWALL_VIOLATIONS dictionary view.
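Condensed into a single script, the five steps above can be sketched roughly as follows (a minimal sketch assembled from the DBMS_SQL_FIREWALL calls used in this example; the user name APP_USER is illustrative):

```sql
-- 1. Capture: enable SQL Firewall and record APP_USER's normal workload
exec dbms_sql_firewall.enable;
exec dbms_sql_firewall.create_capture(username => 'APP_USER', top_level_only => false, start_capture => true);
-- ... run the application for a representative period, then:
exec dbms_sql_firewall.stop_capture('APP_USER');

-- 2. Review what was captured
select sql_text from dba_sql_firewall_capture_logs where username = 'APP_USER';

-- 3. Generate the allow-list from the captured activity
exec dbms_sql_firewall.generate_allow_list('APP_USER');

-- 4. Enable the allow-list and block anything outside it
exec dbms_sql_firewall.enable_allow_list(username => 'APP_USER', enforce => dbms_sql_firewall.enforce_all, block => true);

-- 5. Monitor violations
select sql_text, firewall_action, cause from dba_sql_firewall_violations;
```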
For this example I will need two users:
- schema_owner who is the owner of the tables & views of the application
- app_user who needs to access to the application
We could use a dedicated user to manage SQL Firewall by granting it the SQL_FIREWALL_ADMIN role, but as stated in my introduction I’ll keep this example very short and simple.
Users creation
Creation of the user schema_owner:
SQL> create user schema_owner identified by schema_owner quota unlimited on users;
grant connect,resource to schema_owner;
grant create session, grant any privilege to schema_owner;
User created.
Creation of the user app_user:
SQL> create user app_user identified by app_user;
grant create session to app_user;
grant select any table on schema schema_owner to app_user;
User created.
Objects creation
Now we will create a few objects in our schema_owner schema:
SQL> CREATE TABLE schema_owner.customers
( customer_id number(10) NOT NULL,
customer_name varchar2(50) NOT NULL,
city varchar2(50),
credit_card number(16),
CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);
Table created.
SQL> create view schema_owner.non_sensitive_view as select customer_name, city from schema_owner.customers;
View created.
Data insertion
We will now insert data in our table:
SQL> insert into schema_owner.customers values (001,'steulet','Delemont',00001111222233334444);
insert into schema_owner.customers values (002,'errard','Courrendlin',2222333344445555);
insert into schema_owner.customers values (003,'frund','Porrendutry',7777888899990000)
1 row created.
Start the capture of the statements (Capture SQL Activities)
Before being able to capture the statements, we have to be sure that SQL Firewall is enabled:
SQL> select status
from dba_sql_firewall_status;
STATUS
------------
DISABLED
SQL> exec dbms_sql_firewall.enable;
PL/SQL procedure successfully completed.
SQL> select status
from dba_sql_firewall_status;
STATUS
------------
ENABLED
We can now start the capture process of the standard context for the user app_user using the procedure below:
SQL> begin
dbms_sql_firewall.create_capture (
username => 'app_user',
top_level_only => false,
start_capture => true);
end;
/
PL/SQL procedure successfully completed.
Let’s run the application
We now need to run the application for the necessary duration in order to collect all user context information related to those who will be querying the application. We will greatly simplify this phase by simply performing a select on the view we created earlier. This select will be done in a separate session using the application user login & password:
[oracle@grs-oraclecloudinstance bin]$ ./sqlplus app_user/app_user@free
SQL*Plus: Release 23.0.0.0.0 - Developer-Release on Mon Sep 18 03:16:56 2023
Version 23.2.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
SQL> set linesize 200 pagesize 40
SQL> select * from schema_owner.non_sensitive_view;
CUSTOMER_NAME CITY
-------------------------------------------------- ---------------------------------
steulet Delemont
errard Courrendlin
Let’s stop the capture
We can assume that the application ran for the necessary duration to capture all the contextual information, so we stop the capture:
SQL> exec dbms_sql_firewall.stop_capture('app_user');
PL/SQL procedure successfully completed.
Let’s have a look on what we captured (Review)
When querying dba_sql_firewall_capture_logs, one can view the logs as demonstrated below:
SQL> column command_type format a9
column current_user format a10
column client_program format a20
column os_user format a8
column ip_address format a10
column sql_text format a30
select command_type, current_user, client_program, os_user, ip_address, sql_text from dba_sql_firewall_capture_logs where username='APP_USER';
COMMAND_TYPE CURRENT_US CLIENT_PROGRAM OS_USER IP_ADDRESS SQL_TEXT
------------ ---------- -------------------- -------- ---------- ------------------------------
SELECT APP_USER sqlplus@grs-oraclecl oracle 10.0.0.79 SELECT * FROM SCHEMA_OWNER.NON
oudinstance (TNS V1- _SENSITIVE_VIEW
V3)
SELECT APP_USER sqlplus@grs-oraclecl oracle Local SELECT DECODE (USER,:"SYS_B_0"
oudinstance (TNS V1- ,XS_SYS_CONTEXT (:"SYS_B_1",:"
V3) SYS_B_2"),USER) FROM SYS.DUAL
SELECT APP_USER sqlplus@grs-oraclecl oracle 10.0.0.79 SELECT DECODE (USER,:"SYS_B_0"
oudinstance (TNS V1- ,XS_SYS_CONTEXT (:"SYS_B_1",:"
V3) SYS_B_2"),USER) FROM SYS.DUAL
To read the output more easily you will find a screenshot below:

Using DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST we generate the allow_list:
SQL> exec DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST ('APP_USER');
PL/SQL procedure successfully completed.
Let’s protect our database (Enable Allow-List)
Using the below procedure we will activate the allow list and block violations:
SQL> begin
dbms_sql_firewall.enable_allow_list (
username => 'app_user',
enforce => dbms_sql_firewall.enforce_all,
block => true);
end;
/
PL/SQL procedure successfully completed.
What about a test?
Let’s run a query that hasn’t been run before. We get a SQL Firewall violation:
[oracle@grs-oraclecloudinstance bin]$ ./sqlplus app_user/app_user@free
SQL> select credit_card from schema_owner.customers;
select credit_card from schema_owner.customers
*
ERROR at line 1:
ORA-47605: SQL Firewall violation
What can we find in the log?
Inspecting the log, we can see some details regarding the rule violation:
SQL> select sql_text,
firewall_action,
ip_address,
cause,
occurred_at
from dba_sql_firewall_violations;
SQL_TEXT FIREWAL IP_ADDRESS CAUSE OCCURRED_AT
------------------------------ ------- ---------- -------------------- -------------
SELECT CREDIT_CARD FROM SCHEMA Blocked 10.0.0.79 SQL violation 18-SEP-23 04.08.54.371122 AM +00:00
_OWNER.CUSTOMERS
To read the output more easily you will find a screenshot below:

Oracle provides a series of tools to control data access. SQL Firewall joins the list of these tools, alongside DBMS_CRYPTO, TDE, and Oracle Virtual Private Database, to name just a few. SQL Firewall is used in this blog to block queries, but it can also be configured for query monitoring only. It’s also interesting to note that we can use Oracle Data Pump to export and import SQL Firewall captures and allow-list metadata. Of course, on large volumes of data, SQL Firewall can have an impact on performance (I didn’t test it). In such a context, Oracle recommends the two measures below:
- Allocate at least an additional 2G to the LARGE_POOL_SIZE parameter setting, on top of the existing LARGE_POOL_SIZE requirement.
- Resize the SGA_TARGET parameter setting to include this additional requirement. Ensure that the final size is 8G or more.
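As a rough sketch, that resizing could look like the statements below (the 3G and 10G values are illustrative assumptions, not Oracle's numbers; adjust them to your own baseline):

```sql
-- Illustrative values only: add ~2G on top of the existing large pool requirement
alter system set large_pool_size = 3G scope = spfile;

-- Make sure the overall SGA accommodates the increase (final size 8G or more)
alter system set sga_target = 10G scope = spfile;

-- Both changes are spfile-scoped here, so they take effect after a restart
```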
Among the different blogs that address this topic we can find:
- 23c Oracle Firewall from Rob Watson
- SQL Firewall Oracle 23c from Pete Finnigan
- SQL Firewall in Oracle Database 23c from Oracle-Base (Tim)
The article Oracle to the rescue of MGM with SQL Firewall? first appeared on the dbi Blog.
Clone Any Voice with AI - Locally Install XTTS Model
This video is a step-by-step tutorial on how to install and run the Coqui XTTS model locally. XTTS is a voice generation model that lets you clone voices into different languages using just a quick 3-second audio clip.
Commands Used:
!pip install transformers
!pip install tts

from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)
tts.tts_to_file(text="This is my new cloned voice in AI. If you like, don't forget to subscribe to this channel.",
                file_path="output.wav",
                speaker_wav="speaker.wav",
                language="en")
Create Onion Layers of Security
Posted by Pete On 15/09/23 At 02:01 PM
Oracle databases on other clouds?
Oracle just announced the expansion of their partnership with Microsoft to deliver Oracle database services in Azure. You can read the blog post here. Oracle and Microsoft expand partnership to deliver Oracle database services in Azure This is a very interesting development for a number of reasons. Here are some of my thoughts… The database … Continue reading "Oracle databases on other clouds?"
The post Oracle databases on other clouds? first appeared on The ORACLE-BASE Blog. It was first posted on September 15, 2023 at 10:05 am.
The Hint OPT_ESTIMATE is 20 years old: can we use it now?
How to Install Llama 2 on Google Cloud Platform - Step by Step Tutorial
This video shows step-by-step instructions on how to deploy and run the Llama 2 and Code Llama models on GCP in the Vertex AI API, easily and quickly.
Las Vegas, from sin city to green city?
Oracle CloudWorld 2023 will start next week in Las Vegas, and I’m happy to participate in this worldwide event. As a Swiss citizen, when I think of Las Vegas, the first image that comes to my mind is that of dazzling lights, opulent casinos, and dancing fountains. A city that never sleeps, where excess seems to be the norm. However, behind this glittering facade lies a surprising reality: Las Vegas appears to be a leader in many sustainable initiatives and a model of responsible water management. It’s perhaps also because of this that a company such as Oracle decided to organize its most famous event in this city.

While it won’t be my first visit to Sin City, as Las Vegas hosts several events like CES and is a popular destination for those exploring the West Coast on road trips, each visit brings its own unique experiences. On my previous trips to Las Vegas, what truly captivated my attention was how a city nestled in the heart of the desert could orchestrate mesmerizing water spectacles, such as the renowned ones adorning the famous Las Vegas Strip.
Working for a company that has dedicated significant efforts to sustainability initiatives and proudly holds an Ecovadis certification, I felt compelled to delve deeper into the question of how this desert city manages its energy and its precious water resources. To craft this article, I had the privilege of engaging with Jace Radke, the Public & Media Relations Supervisor in Las Vegas, as well as Mack Branson, who represents both the Southern Nevada Water Authority (SNWA) and the Las Vegas Valley Water District. I thank them for generously sharing their time and expertise, which greatly enriched the content of this article.

As stated by Jace Radke, “More than 2.1 million people reside in Southern Nevada and 40 million people from around the world travel to Las Vegas annually. Many know Las Vegas as a world-renowned entertainment and tourist destination, however, very few of them know that Las Vegas is also world-class when it comes to being green and sustainable.” Here are a few facts and figures that attest to Las Vegas’s efforts in terms of sustainability (a more exhaustive list can be found here: www.lasvegasnevada.gov/sustainability):
- Certifications such as LEED (Leadership in Energy and Environmental Design) certified buildings, including Las Vegas City Hall, which opened in 2012.
- Investment in hundreds of miles of bike lanes.
- Completion of more than six megawatts of solar.
- Replacement of 42,000 streetlights with LED lighting.
- Reduced annual water consumption by 256 million gallons from 2008 level.
- Every public building, park and streetlight is served by renewable energy.
- Several years ago, the largest rooftop solar array in the country went online at Mandalay Bay, with 26,000 panels covering 28 acres.
Many other initiatives are planned. For example, the city intends to plant 60,000 new trees to help mitigate heat island impacts in the city and lower temperatures. This initiative is part of the city’s 2050 Master Plan: https://www.lasvegasnevada.gov/Business/Planning-Zoning/Master-Plan

These measures have earned recognition. In 2014, the city was honored with the US Conference of Mayors’ Climate Protection Award in Dallas. The initiatives have also reduced annual energy costs, from a peak of $15 million in 2008 to less than $10 million per year.
However, despite these initiatives, Jace Radke remains aware of the ongoing challenges that pose a threat to community well-being: “Many communities, like Las Vegas, are facing issues such as more heat waves, flooding, or drought – issues they face together with cities across the Southwest. Of course, water is clearly a big part of our story”.
Everything may not be perfect in Vegas in terms of sustainability, and there is still room for improvement. The 2050 Master Plan, however, makes it evident that the authorities are well aware of this potential for enhancement.
Water savings of 48,425 Olympic-size swimming pools
Before delving into water management, it’s crucial to establish a rough idea of the proportions we are discussing. As mentioned by Jace Radke and confirmed by Mack Branson, “while many look at Las Vegas’s growth and water use at casinos, golf courses and fountains as the cause of declining water levels in Lake Mead, the fact is that Las Vegas uses less than 4% of Colorado River water. Additionally, it’s important to mention that the resort industry only uses 3% of all water used in Las Vegas.”
To maintain the focus on proportions, consider that the Southern Nevada region receives 100 millimeters of precipitation per year, while a country like Switzerland receives between 1000 to 1500 millimeters of precipitation per year (according to meteosuisse.admin.ch). Naturally, these conditions have prompted the Southern Nevada Water Authority to implement certain restrictions, such as:
- Replacing useless grass
- Limiting water gardens
- Prohibiting installation of new grass
- Limiting residential pool sizes
- Moratorium on evaporative cooling
- Annual water budgeting for golf courses
As explained by Mack Branson, some strong measures will be enforced: “by the end of 2026, it will be illegal to irrigate grass which is considered decorative grass. This measure is due to the fact that grass needs approximately 3,000 liters of water per square meter (73 gallons per square foot). The reason for this measure is very simple: water that we use outdoors can be used only one time, which is not the case for water used indoors, because we can recycle it.“
Owing to all these measures, the Southern Nevada Water Authority estimates that Southern Nevada’s consumptive water use declined by approximately 32 billion gallons (121 billion liters) between 2002 and 2022, despite the influx of approximately 750,000 new residents and nearly 40 million annual visitors during that span (Conservation fact sheet – Comprehensive 2023.pdf, edited by the Southern Nevada Water Authority). This represents approximately 48,425 Olympic-size swimming pools.
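As a rough sanity check, the arithmetic behind these figures can be reproduced in a few lines of Python. The conversion factor (1 US gallon ≈ 3.78541 liters) and the nominal Olympic-pool volume of 2.5 million liters are assumptions on my part, not figures taken from the SNWA fact sheet:

```python
# Sanity check of the quoted water savings.
# Assumptions: 1 US gallon = 3.78541 liters; one Olympic-size
# swimming pool holds a nominal 2,500,000 liters.
LITERS_PER_GALLON = 3.78541
POOL_LITERS = 2_500_000

saved_gallons = 32e9  # ~32 billion gallons saved between 2002 and 2022
saved_liters = saved_gallons * LITERS_PER_GALLON
pools = saved_liters / POOL_LITERS

print(f"{saved_liters / 1e9:.0f} billion liters")  # ~121 billion liters
print(f"{pools:,.0f} Olympic-size pools")          # ~48,000 pools

# The irrigation figure is consistent too:
# 73 gallons per square foot, expressed in liters per square meter.
SQFT_PER_SQM = 10.7639
print(f"{73 * LITERS_PER_GALLON * SQFT_PER_SQM:.0f} L per m^2")  # close to 3,000
```

Both results land within rounding distance of the quoted 48,425 pools and the 3,000 liters per square meter mentioned for decorative grass.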
Lake Mead, a challenge that goes beyond the borders of Las Vegas
As we all witnessed in recent years, the water level of Lake Mead declined significantly. Fortunately, the water level has rebounded in recent months. Nevertheless, when confronting the challenge of declining water levels, it’s important to keep in mind that Las Vegas uses only 4% of the Colorado River water. Therefore, additional efforts are required beyond Las Vegas and Nevada.

As mentioned by Mack Branson, “Our community recycles and returns nearly every drop of indoor water use back to Lake Mead, which provides 90 percent of Southern Nevada’s water supply. Every liter returned allows us to take another out of the lake, treat it to drinking water standards, and deliver it to the community. We are the only community in the U.S. that we know of that recycles all of its indoor water use.”
However, Mack Branson is aware that the challenges related to Lake Mead do not stop at the borders of Las Vegas: “It’s going to require every sector in every state that uses Colorado River water to use less. The largest water user of the Colorado River by sector is agriculture, which is beneficial to us in our nation’s food supply. However, a significant portion of that agricultural water use still employs inefficient irrigation techniques and produces a lot of feed for cattle (alfalfa). Alfalfa has become one of the dominant water users in the agricultural sector. While it’s essential to support the dairy industry, we also observe a substantial portion of this alfalfa being exported out of the United States. Thus, we are effectively exporting Colorado River water out of the United States.”
Intake 3, drought insurance
In response to unprecedented drought conditions, and for the long-term preservation of Southern Nevada’s primary water storage reservoir, Lake Mead, the Southern Nevada Water Authority built a third drinking water intake (Intake 3) capable of extracting Colorado River water when the lake’s elevation is below 1,000 feet (~304 meters) above sea level.
Construction of Intake 3 commenced in 2008, using a 24-foot (~7.3 meter) diameter tunnel boring machine to excavate a tunnel beneath Lake Mead. This tunnel was subsequently connected to an intake structure anchored to the lakebed with over 1,000 truckloads of concrete.

Currently, Lake Mead stands at an elevation of 1,066 feet (325 meters), as indicated by https://mead.uslakes.info/level.asp. This means it is below the level of Intake Number 1. Consequently, Pumping Stations Number 2 and Number 3 are in operation. In a few months, when the Lake Mead elevation rises above Intake Number 1, Pumping Station 3 will be deactivated to conserve energy, since Pumping Station Number 3 consumes more energy owing to the depth of Intake Number 3.
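The selection logic described above can be sketched in a few lines of Python. This is purely an illustrative model: the elevation threshold for Intake Number 1 is a hypothetical placeholder, since the article only states that the current 1,066-foot level is below it.

```python
# Illustrative sketch of the pumping-station logic described above.
# INTAKE_1_LEVEL_FT is a hypothetical placeholder value; the article
# only says the current elevation of 1,066 ft is below Intake Number 1.
INTAKE_1_LEVEL_FT = 1075

def station_3_needed(lake_elevation_ft: float) -> bool:
    """Pumping Station 3 runs only while the lake sits below Intake 1,
    because drawing from the much deeper Intake 3 costs more energy."""
    return lake_elevation_ft <= INTAKE_1_LEVEL_FT

print(station_3_needed(1066))  # True: lake below Intake 1, Station 3 runs
print(station_3_needed(1080))  # False: Station 3 deactivated to save energy
```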
Conclusion
The challenges encountered by Las Vegas, and more broadly by the Southwest, are challenges that will impact other regions around the world in the future. Behavior adaptations such as the replacement of grass, water recycling techniques, strategies to reduce water evaporation, and environmental certifications for hotels are all practices that are expected to become more widespread in the coming years.
Undoubtedly, Las Vegas is a city with high energy demands, and there is still much work to be done. However, over the past several years, awareness has been steadily increasing, giving rise to promising ideas and a genuine commitment to making meaningful improvements.
Climate change is affecting all of us, and we need to adapt to build resilience. In the Southwestern United States it will bring a warmer and drier future, and unless we accept that degrowth can be part of the solution, people will have to be innovative to face these challenges.
The article Las Vegas, from sin city to green city? appeared first on the dbi Blog.
UNIX/Linux Time Command : Record elapsed time
In a recent post I mentioned using a scratchpad to record everything I do. As part of that process I try to make regular use of the UNIX/Linux “time” command to record elapsed times of long running commands. It’s really simple to use. All you do is put “time” in front of the command and … Continue reading "UNIX/Linux Time Command : Record elapsed time"
The post UNIX/Linux Time Command : Record elapsed time first appeared on The ORACLE-BASE Blog, posted on September 14, 2023.
Step by Step Demo of Vertex AI in GCP
This tutorial gets you started with the GCP Vertex AI generative AI service in a step-by-step demo.
Commands Used:
gcloud services enable aiplatform.googleapis.com
gcloud iam service-accounts create <Your Service Account Name>
gcloud projects add-iam-policy-binding <Your Project ID> \
--member=serviceAccount:<Your Service Account Name>@<Your Project ID>.iam.gserviceaccount.com \
--role=roles/aiplatform.admin
from google.auth.transport.requests import Request
from google.oauth2.service_account import Credentials

key_path = '<Your Project ID>.json'
credentials = Credentials.from_service_account_file(
    key_path,
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
if credentials.expired:
    credentials.refresh(Request())

PROJECT_ID = '<Your Project ID>'
REGION = 'us-central1'

!pip install -U google-cloud-aiplatform "shapely<2"

import vertexai

# initialize vertex
vertexai.init(project=PROJECT_ID, location=REGION, credentials=credentials)

from vertexai.language_models import TextGenerationModel

generation_model = TextGenerationModel.from_pretrained("text-bison@001")

prompt = "I want to self manage a bathroom renovation project in my home. \
Please suggest me step by step plan to carry out this project."

print(generation_model.predict(prompt=prompt).text)
AdminClient – ADD CREDENTIAL doesn’t do what you expect!
Earlier today, I was working on a few GoldenGate Obey files that will set up a customer’s environment; that […]
The post AdminClient – ADD CREDENTIAL doesn’t do what you expect! appeared first on DBASolved.
Identify cursors/query with more than x joins involved
How to get unique transaction id of the current transaction?
Using a scratchpad…
Followers of the blog know I’m a big advocate for writing things down. The main reason I do this is because I want a record of everything I do. I rarely type a command directly into the command line. I nearly always type it in a scratchpad first. Currently I have 67,250 lines in my … Continue reading "Using a scratchpad…"
The post Using a scratchpad… first appeared on The ORACLE-BASE Blog, posted on September 13, 2023.
Gradient Tutorial to Fine Tune LLM for Free - Step by Step
This video is a tutorial on fine-tuning a large language model in Gradient using Python on AWS. With Gradient, you can fine-tune and get completions on private LLMs through a simple web API. No infrastructure needed. Build private, SOC 2 compliant AI applications instantly.
Commands Used:
!pip install transformers
!pip install gradientai --upgrade
import os
os.environ['GRADIENT_ACCESS_TOKEN'] = "<TOKEN>"
os.environ['GRADIENT_WORKSPACE_ID'] = "<Workspace ID>"
from gradientai import Gradient
def main():
    with Gradient() as gradient:
        base_model = gradient.get_base_model(base_model_slug="nous-hermes2")
        new_model_adapter = base_model.create_model_adapter(
            name="My Model"
        )
        print(f"Model Adapter Id {new_model_adapter.id}")

        sample_query = "### Instruction: Who is Fahd Mirza? \n\n### Response:"
        print(f"Asking: {sample_query}")

        # before fine-tuning
        completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output
        print(f"Before fine-tuning: {completion}")

        samples = [
            {
                "inputs": "### Instruction: Who is Fahd Mirza? \n\n### Response: Fahd Mirza is a technologist who shares his expertise on YouTube, covering topics such as AI, Cloud, DevOps, and databases."
            },
            {
                "inputs": "### Instruction: Please provide information about Fahd Mirza. \n\n### Response: Fahd Mirza is an experienced cloud engineer, AI enthusiast, and educator who creates educational content on various technical subjects on YouTube."
            },
            {
                "inputs": "### Instruction: What can you tell me about Fahd Mirza? \n\n### Response: Fahd Mirza is a content creator on YouTube, specializing in AI, Cloud, DevOps, and database technologies. He is known for his informative videos."
            },
            {
                "inputs": "### Instruction: Describe Fahd Mirza for me. \n\n### Response: Fahd Mirza is a YouTuber and blogger hailing from Australia, with a strong background in cloud engineering and artificial intelligence."
            },
            {
                "inputs": "### Instruction: Give me an overview of Fahd Mirza. \n\n### Response: Fahd Mirza, based in Australia, is a seasoned cloud engineer and AI specialist who shares his knowledge through YouTube content on topics like AI, Cloud, DevOps, and databases."
            },
            {
                "inputs": "### Instruction: Who exactly is Fahd Mirza? \n\n### Response: Fahd Mirza is an Australian-based content creator known for his YouTube channel, where he covers a wide range of technical subjects, including AI, Cloud, DevOps, and databases."
            },
        ]

        num_epochs = 5
        count = 0
        while count < num_epochs:
            print(f"Fine-tuning the model, Epoch iteration => {count + 1}")
            new_model_adapter.fine_tune(samples=samples)
            count = count + 1

        # after fine-tuning
        completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output
        print(f"After Fine-Tuning: {completion}")

        new_model_adapter.delete()

if __name__ == "__main__":
    main()
Migration from WE8ISO8859P15 to AL32UTF8
