Feed aggregator

Oracle FAST=TRUE in sqlplus? Some thoughts about rowprefetch

Yann Neuhaus - Sun, 2020-01-05 16:56

During my time as a consultant working on tuning tasks, I had the feeling that many people think there is an Oracle parameter “FAST=TRUE” to speed up the performance and throughput of database calls. Unfortunately, no such parameter is available, but since version 12cR2 Oracle has provided the option “-F” or “-FAST” for sqlplus, which looks like a “FAST=TRUE” setting. Here is an excerpt from the documentation:


The FAST option improves general performance. This command line option changes the values of the following default settings:
 
- ARRAYSIZE = 100
- LOBPREFETCH = 16384
- PAGESIZE = 50000
- ROWPREFETCH = 2
- STATEMENTCACHE = 20
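
To see the effect, you can start sqlplus with the new option and check the resulting settings. A minimal sketch (no connection needed, SHOW works for these client settings even under /nolog):

sqlplus -F /nolog <<'EOF'
show arraysize
show rowprefetch
show lobprefetch
show pagesize
show statementcache
EOF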

I was interested in where the rowprefetch setting could result in an improvement.

The documentation about rowprefetch is as follows:


SET ROWPREFETCH {1 | n}
 
Sets the number of rows that SQL*Plus will prefetch from the database at one time. The default value is 1.
 
Example
 
To set the number of prefetched rows to 200, enter
 
SET ROWPREFETCH 200
 
If you do not specify a value for n, the default is 1 row. This means that rowprefetching is off.
 
Note: The amount of data contained in the prefetched rows should not exceed the maximum value of 2147483648 bytes (2 Gigabytes). The prefetch setting in the oraaccess.xml file can override the SET ROWPREFETCH setting in SQL*Plus. For more information about oraaccess.xml, see the Oracle Call Interface Programmer's Guide.

A simple test where rowprefetch can make a difference is the use of hash clusters (see the Buffers column in the execution plans below). For example:


SQL> create cluster DEMO_CLUSTER(CUST_ID number) size 4096 single table hashkeys 1000 ;
 
Cluster created.
 
SQL> create table DEMO cluster DEMO_CLUSTER(CUST_ID) as select * from CUSTOMERS;
 
Table created.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
 
PL/SQL procedure successfully completed.
 
SQL> select num_rows,blocks from user_tables where table_name='DEMO';
 
  NUM_ROWS     BLOCKS
---------- ----------
     55500	 1035
 
SQL> show rowprefetch
rowprefetch 1
SQL> select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where cust_id=101;
 
ROWID		      CUST_ID
------------------ ----------
AAAR4qAAMAAAAedAAA	  101
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------
SQL_ID	9g2nyr9h2ytk4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where
cust_id=101
 
Plan hash value: 3286081706
 
------------------------------------------------------------------------------------
| Id  | Operation	  | Name | Starts | E-Rows | A-Rows |	A-Time	 | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |	1 |	   |	  1 |00:00:00.01 |	 2 |
|*  1 |  TABLE ACCESS HASH| DEMO |	1 |	 1 |	  1 |00:00:00.01 |	 2 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("CUST_ID"=101)
 
SQL> set rowprefetch 2
SQL> select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where cust_id=101;
 
ROWID		      CUST_ID
------------------ ----------
AAAR4qAAMAAAAedAAA	  101
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------
SQL_ID	9g2nyr9h2ytk4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where
cust_id=101
 
Plan hash value: 3286081706
 
------------------------------------------------------------------------------------
| Id  | Operation	  | Name | Starts | E-Rows | A-Rows |	A-Time	 | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |	1 |	   |	  1 |00:00:00.01 |	 1 |
|*  1 |  TABLE ACCESS HASH| DEMO |	1 |	 1 |	  1 |00:00:00.01 |	 1 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("CUST_ID"=101)

Due to the prefetch of 2 rows, Oracle detects that there is actually only 1 row and avoids the second logical IO (a second fetch).
If cust_id were unique, I would have created a unique (or primary) key constraint here, which would avoid the second fetch as well (because Oracle knows from the constraint that there can be at most 1 row per cust_id), but in that case I would have to maintain the created index.

I ran a couple of tests comparing the behaviour of different rowprefetch and arraysize settings in sqlplus (what actually is the difference between the two settings?). That will be the subject of a future blog.
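
In the meantime, here is a rough sketch of how such a comparison could be scripted, by watching the statistics (including “SQL*Net roundtrips to/from client”) that autotrace reports for the same query under different combinations. The connect string is a placeholder and autotrace statistics requires the PLUSTRACE role:

for combo in "15 1" "100 1" "15 100"; do
  read as rp <<< "${combo}"
  echo "=== arraysize ${as} / rowprefetch ${rp} ==="
  sqlplus -S user/password@db <<EOF
set arraysize ${as}
set rowprefetch ${rp}
set autotrace traceonly statistics
select * from DEMO;
EOF
done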

The article Oracle FAST=TRUE in sqlplus? Some thoughts about rowprefetch appeared first on Blog dbi services.

Moving to https://mattypenny.github.io/

Matt Penny - Sun, 2020-01-05 08:10

Much as I like WordPress, I’m moving all of this stuff over to:

https://mattypenny.github.io/

Categories: DBA Blogs

Documentum – Java exception stack on iAPI/iDQL login

Yann Neuhaus - Sun, 2020-01-05 02:00

Recently, I was doing some sanity checks on a Documentum Server and I saw a Java exception stack while logging in to a Repository using iAPI/iDQL. It was reproducible for all Repositories. I had never seen anything like that before (or at least I don't remember it), so I was a little bit surprised. When there are errors upon login, it is usually Documentum error messages that are printed, not an exception stack. Since it took me some effort to find the root cause, I thought I would share it.

The exception stack displayed was the following one:

[dmadmin@cs-0 ~]$ echo "quit" | iapi gr_repo -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025

Connecting to Server using docbase gr_repo
DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c9214 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.getExceptionForAllMessages(DocbaseConnection.java:1518)
        at com.documentum.fc.client.impl.session.Session.getExceptionsForAllMessages(Session.java:1603)
        at com.documentum.fc.client.impl.session.SessionHandle.getExceptionsForAllMessages(SessionHandle.java:1301)
        at com.documentum.dmcl.impl.ApiContext.addMessages(ApiContext.java:423)
        at com.documentum.dmcl.impl.ApiContext.collectExceptionsForReporting(ApiContext.java:370)
        at com.documentum.dmcl.impl.GetMessageHandler.get(GetMessageHandler.java:23)
        at com.documentum.dmcl.impl.DmclApi.get(DmclApi.java:49)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:145)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:130)


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@cs-0 ~]$

 

The login was successful but still, a strange exception stack appeared. The first thing I did was check the Repository log file, but there was nothing out of the ordinary in it except for one thing:

[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba/log
[dmadmin@cs-0 log]$
[dmadmin@cs-0 log]$ grep -A3 "Agent Exec" gr_repo.log
Wed Sep 11 10:38:29 2019 [INFORMATION] [AGENTEXEC 1477] Detected during program initialization: Agent Exec connected to server gr_repo:  DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c8904 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentu
[dmadmin@cs-0 log]$

 

While starting, the Agent Exec was therefore facing the same behavior with the exact same stack (it is cut at the 4th line, but it is the same stack until then, so it is safe to assume it is the same). To dig deeper and find exactly when the issue started, I checked the logs from the agentexec jobs, since these are kept until the log purge cleans them up and since the jobs do log in to the Repository:

[dmadmin@cs-0 log]$ cd $DOCUMENTUM/dba/log/gr_repo/agentexec
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ # Check the last file
[dmadmin@cs-0 agentexec]$ cat $(ls -tr job_* | tail -1)
Wed Sep 11 18:00:21 2019 [INFORMATION] [LAUNCHER 3184] Detected while preparing job dm_ConsistencyChecker for execution: Agent Exec connected to server gr_repo:  DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c8974 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentu
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ # Finding the first file with the error
[dmadmin@cs-0 agentexec]$ for f in $(ls -tr); do r=$(grep "_I_SESSION_START.*ERRORCODE" "${f}"); if [[ "${r}" != "" ]]; then echo "${r}"; break; fi; done
Tue Sep 10 18:00:06 2019 [INFORMATION] [LAUNCHER 31113] Detected while preparing job dm_ConsistencyChecker for execution: Agent Exec connected to server gr_repo:  DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c8827 started for user dmadmin."; ERRORCODE: 100; NEXT: null
[dmadmin@cs-0 agentexec]$

 

All the job's session files contained the same stack (or rather a piece of it). At first, I didn't understand where this was coming from; all I knew was that it was somehow linked to the login into the Repository and that it appeared for the first time on the date returned by my last command above. It was not really an error message, since it wasn't showing any “_E_” messages, but it was still printing an exception.

Knowing when it first appeared, I looked at all the files that had been modified on that day. Among them, besides log files (which are expected and can be ignored), there was the dfc.properties file. This gave me the reason for the message: it was due to the diagnostic mode having been enabled in the dfc.properties of the Documentum Server. To be exact, it was due to the “dfc.diagnostics.exception.include_stack=true” entry:

[dmadmin@cs-0 agentexec]$ tail -5 $DOCUMENTUM_SHARED/config/dfc.properties
dfc.session.secure_connect_default=secure
dfc.time_zone=UTC
dfc.diagnostics.resources.enable=true
dfc.diagnostics.exception.include_stack=true
dfc.tracing.print_exception_stack=true
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ echo "quit" | iapi gr_repo -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025

Connecting to Server using docbase gr_repo
DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c9235 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.getExceptionForAllMessages(DocbaseConnection.java:1518)
        at com.documentum.fc.client.impl.session.Session.getExceptionsForAllMessages(Session.java:1603)
        at com.documentum.fc.client.impl.session.SessionHandle.getExceptionsForAllMessages(SessionHandle.java:1301)
        at com.documentum.dmcl.impl.ApiContext.addMessages(ApiContext.java:423)
        at com.documentum.dmcl.impl.ApiContext.collectExceptionsForReporting(ApiContext.java:370)
        at com.documentum.dmcl.impl.GetMessageHandler.get(GetMessageHandler.java:23)
        at com.documentum.dmcl.impl.DmclApi.get(DmclApi.java:49)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:145)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:130)


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ sed -i 's,^dfc.diagnostics.exception.include_stack,#&,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ echo "quit" | iapi gr_repo -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025

Connecting to Server using docbase gr_repo
[DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c9237 started for user dmadmin."

Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@cs-0 agentexec]$

 

As you can see above, commenting out the line “dfc.diagnostics.exception.include_stack=true” (which effectively sets it back to false, the default value) caused the exception stack to disappear. Since I was curious about this stack and wanted confirmation that it is “expected”, I opened a case with OpenText Support (#4331438) and after a few days they confirmed that it isn't considered an “ERROR”; it is more of an “INFO” message. It's a strange way to display informative messages, but hey, who am I to judge!

 

The article Documentum – Java exception stack on iAPI/iDQL login appeared first on Blog dbi services.

Documentum – Connection to docbrokers and Repositories inside K8s from an external DFC Client

Yann Neuhaus - Sat, 2020-01-04 02:00

How can you connect an external DFC Client to docbrokers and Repositories hosted on Kubernetes Pods? That seems to be a very simple question, yet it might prove difficult… Let's talk about this challenge and possible solutions/workarounds in this blog.

As you all know, Kubernetes uses containers, so just like a basic Docker container, you won't be able to access them from the outside by default. On Docker, you need to expose some ports and then you can interact with whatever is running on them. For Kubernetes, it's the same principle, but it obviously adds other layers on top, which makes it even more complicated. Therefore, if you want to be able to connect to a docbroker inside a K8s Pod from outside of K8s, you will need a few things (a minimal kubectl sketch follows the list):

  • at the container level, the ports 1489/1490 opened (the default ones, you can obviously change them)
  • a K8s Service to expose these ports inside K8s
  • an Nginx Ingress Controller for which the TCP ports 1489/1490 have been configured for external access (or other ports if these are already used for another namespace, for example)
  • a “Load Balancer” K8s Service (still at the Nginx Ingress Controller level) which exposes these ports using an external IP
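
As announced above, here is a minimal kubectl sketch of these steps. All names are hypothetical, the second port (1490) and the Repository ports need the same treatment, and the TCP port must also be listed on the controller's LoadBalancer Service:

# ClusterIP Service exposing the docbroker port of the pod
kubectl -n dbi-ns01 expose pod cs-0 --name=cs-docbroker --port=1489 --target-port=1489
# map the external TCP port to that Service in the nginx ingress controller
kubectl -n ingress-nginx patch configmap tcp-services --type merge \
  -p '{"data":{"1489":"dbi-ns01/cs-docbroker:1489"}}'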

 

Once you have that, you should be able to communicate with a docbroker that is inside a K8s pod. If you want to have a chance to talk to a Repository, then you will also need to do the same thing for the Repository ports. When you install a repo, you specify in /etc/services the ports it should use (just like for the docbroker).
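
For illustration, with the ports used in the example below, the /etc/services entries would typically look like this (the “_s” entries being the secure ports; names and ports are the ones defined at install time):

[dmadmin@cs-0 ~]$ grep -E "gr_repo|REPO1" /etc/services
gr_repo      49400/tcp   # Documentum repository gr_repo
gr_repo_s    49401/tcp   # Documentum repository gr_repo (secure)
REPO1        49402/tcp   # Documentum repository REPO1
REPO1_s      49403/tcp   # Documentum repository REPO1 (secure)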

For this example, let’s start simple with the same ports internally and externally:

  • DFC Client host: vm
  • K8s pod short name (hostname): cs-0
  • K8s pod full name (headless service / full hostname): cs-0.cs.dbi-ns01.svc.cluster.local
  • K8s pod IP: 1.1.1.100
  • K8s pod docbroker port: 1489/1490
  • K8s pod Repositories port: gr_repo=49400/49401    //    REPO1=49402/49403
  • K8s external hostname/lb: k8s-cs-dbi-ns01.domain.com
  • K8s external IP: 2.2.2.200
  • K8s external docbroker port: 1489/1490
  • K8s external Repositories port: gr_repo=49400/49401    //    REPO1=49402/49403

 

Considering the above setup (both the docbroker and Repositories ports configured on K8s), you can already talk to the docbroker properly:

[dmadmin@vm ~]$ grep "dfc.docbroker" dfc.properties
dfc.docbroker.host[0]=k8s-cs-dbi-ns01.domain.com
dfc.docbroker.port[0]=1489
[dmadmin@vm ~]$
[dmadmin@vm ~]$ nc -v k8s-cs-dbi-ns01.domain.com 1489
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 2.2.2.200:1489.
^C
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ nc -v k8s-cs-dbi-ns01.domain.com 49402
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 2.2.2.200:49402.
^C
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0170.0234  Linux64).
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234567
Docbase description : dbi-ns01 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO1
Docbase id          : 1234568
Docbase description : dbi-ns01 dev k8s repo1
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$

 

So as you can see above, the docbroker responds properly with the list of Repositories it is aware of (Repo name, ID, hostname, …) and for that purpose, there is no need for the Repository ports to be open, the docbroker alone is enough. However, as soon as you want to go further and start talking to the Repositories, you will obviously need to open these additional ports as well. Above, I used 49402/49403 for the REPO1 Repository (both internally and externally). Still, trying to log in to a target Repository fails:

[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
^C[dmadmin@vm ~]$
[dmadmin@vm ~]$

 

Why is that? Well, to connect to a docbroker, a DFC Client uses the value from the well-known “dfc.properties” file. By reading it, it knows where the docbroker can be found: in our case, it's “k8s-cs-dbi-ns01.domain.com:1489“. The docbroker then replies with the list of known Repositories and also with the “host” that should be used to communicate with them. That's because the Repositories might not be on the same host as the docbroker, and therefore it needs to provide this information to the DFC Client. However, that “host” is actually an IP! When a Repository registers itself with a docbroker, the docbroker records the source IP of the request and will then forward this IP to any DFC Client that wants to talk to this Repository.

The problem here is that the Repositories are installed on K8s Pods and therefore the IP that the docbroker knows is actually the IP of the K8s Pod… Which is, therefore, not reachable from outside of K8s!
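
You can actually see which address the docbroker hands out for a given Repository with the getservermap command of dmqdocbroker; a quick sketch with the names used here:

dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getservermap REPO1

The server network address in the output should show the Pod IP (1.1.1.100 in this example), which is exactly the address the DFC Client will then try to reach.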

 

1. IP Forwarding, a solution?

If you want to validate a setup or do some testing, it's pretty simple on Linux: you can quickly set up IP forwarding between the IP of the K8s Pod (which points to nothing) and the IP of the K8s LB Service that you previously configured for the docbroker and Repository ports. Here is an example:

[dmadmin@vm ~]$ nslookup k8s-cs-dbi-ns01.domain.com
Server: 1.1.1.10
Address: 1.1.1.10#53

k8s-cs-dbi-ns01.domain.com     canonical name = k8s-cluster-worker2.domain.com.
Name:   k8s-cluster-worker2.domain.com
Address: 2.2.2.200
[dmadmin@vm ~]$
[dmadmin@vm ~]$ external_ip=2.2.2.200
[dmadmin@vm ~]$ ping -c 1 ${external_ip}
PING 2.2.2.200 (2.2.2.200) 56(84) bytes of data.
64 bytes from 2.2.2.200: icmp_seq=1 ttl=63 time=0.980 ms

--- 2.2.2.200 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.980/0.980/0.980/0.000 ms
[dmadmin@vm ~]$
[dmadmin@vm ~]$ internal_ip=1.1.1.100
[dmadmin@vm ~]$ ping -c 1 ${internal_ip}
PING 1.1.1.100 (1.1.1.100) 56(84) bytes of data.

--- 1.1.1.100 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
[dmadmin@vm ~]$
[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
^C[dmadmin@vm ~]$
[dmadmin@vm ~]$
[dmadmin@vm ~]$
[dmadmin@vm ~]$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ sudo iptables -t nat -A OUTPUT -d ${internal_ip} -j DNAT --to-destination ${external_ip}
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6888000152a started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@vm ~]$

 

As you can see above, as soon as you configure IP forwarding from the Pod IP to the K8s LB IP, the Repository connection is successful. Here, I just executed a “quit” command to close the iAPI session, but the session creation works, so you can be sure that the end-to-end communication is fine.

Obviously, that is just for testing… Your Pod IP is going to change in the future (after each restart of the pod, for example), which means the IP forwarding will break at that point. It also has to be set up on each client directly, because the DFC Client will try to communicate with a specific IP that most probably doesn't point to anything; the only way to make it work is to set up the forwarding either on the client or on the network layer, which is super annoying and isn't really reliable anyway, so this isn't a solution.
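
For completeness, if you still want to use IP forwarding for testing after a Pod restart, the stale rule can be swapped for a new one like this (a sketch; 1.1.1.101 stands for the hypothetical new Pod IP):

old_internal_ip=1.1.1.100
new_internal_ip=1.1.1.101
external_ip=2.2.2.200
sudo iptables -t nat -D OUTPUT -d ${old_internal_ip} -j DNAT --to-destination ${external_ip}
sudo iptables -t nat -A OUTPUT -d ${new_internal_ip} -j DNAT --to-destination ${external_ip}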

 

2. Docbroker Translation, a solution?

Several years ago, a feature was introduced in the docbroker that was initially intended to handle blocking rules on a firewall: IP and port translation. I believe it was introduced in Documentum 6.5, but I might be wrong, it was a long time ago… Since the K8s issue here is pretty similar to what would happen with a firewall blocking the IP, we can actually use this feature to help us. Contrary to IP forwarding, which is done on the client side, the translation is done on the server side and is therefore global for all clients. This has the obvious advantage that you only need to do it once for all clients (or rather, you will need to redo this configuration at each start of your K8s Pod, since the IP will have changed). However, it also has a drawback: there is no exception, all communications will be translated, even K8s-internal ones… So this might be a problem. There is a KB describing how it works (KB7701941) and you can also look at the documentation. However, the documentation might not be really correct. Indeed, if you look at the CS 7.1 documentation, you will find this definition:

[TRANSLATION]
port=inside_firewall_port=outside_firewall_port
{,inside_firewall_port=outside_firewall_port}
host=inside_firewall_IP=outside_firewall_IP
{,inside_firewall_IP=outside_firewall_IP}

 

If you look at the CS 16.4 documentation, you will find this definition:

[TRANSLATION]
port=inside_firewall_port=outside_firewall_port
{,inside_firewall_port=outside_firewall_port}
host=outside_firewall_IP=inside_firewall_IP
{,outside_firewall_IP=inside_firewall_IP}

 

Finally, if you look at the CS 16.7 documentation, you will find yet another definition:

[TRANSLATION]port=["]outside_firewall_port=inside_firewall_port
{,outside_firewall_port=inside_firewall_port}["]
host=["]outside_firewall_ip=inside_firewall_ip
{,outside_firewall_ip=inside_firewall_ip}["]

 

Three versions of the documentation for the same feature, three different definitions :D. On top of that, there is an example in the documentation which is also wrong, in all three versions. The real definition is the last one, after fixing the formatting errors that is… So in short, this is what you can do with the docbroker translation:

[TRANSLATION]
port=["]ext_port_1=int_port_1{,ext_port_2=int_port_2}{,ext_port_3=int_port_3}{,...}["]
host=["]ext_ip_1=int_ip_1{,ext_ip_2=int_ip_2}{,ext_ip_3=int_ip_3}{,...}["]

 

From what I could see, the double quotes aren’t mandatory but you can use them if you want to…

Let’s test all that after removing the IP Forwarding, obviously:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234567
Docbase description : dbi-ns01 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO1
Docbase id          : 1234568
Docbase description : dbi-ns01 dev k8s repo1
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$
[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
^C[dmadmin@vm ~]$
[dmadmin@vm ~]$

 

On the docbroker side (k8s), let’s configure the translation properly and restart for the new configuration to be applied:

[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba
[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ external_ip=2.2.2.200
[dmadmin@cs-0 dba]$ external_port=1489
[dmadmin@cs-0 dba]$ internal_ip=1.1.1.100
[dmadmin@cs-0 dba]$ internal_port=1489
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ echo "[TRANSLATION]" >> Docbroker.ini
[dmadmin@cs-0 dba]$ echo "port=${external_port}=${internal_port}" >> Docbroker.ini
[dmadmin@cs-0 dba]$ echo "host=${external_ip}=${internal_ip}" >> Docbroker.ini
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[TRANSLATION]
port=1489=1489
host=2.2.2.200=1.1.1.100
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ ./dm_stop_Docbroker; sleep 1; ./dm_launch_Docbroker
./dmshutdown 16.4.0000.0248  Linux64 Copyright (c) 2018. OpenText Corporation.
Shutdown request was processed by Docbroker on host cs-0 (INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100)
Reply status indicates a success: OK
starting connection broker on current host: [cs-0.cs.dbi-ns01.svc.cluster.local]
with connection broker log: [$DOCUMENTUM/dba/log/docbroker.cs-0.cs.dbi-ns01.svc.cluster.local.1489.log]
connection broker pid: 18219
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ head -7 log/docbroker.cs-0.cs.dbi-ns01.svc.cluster.local.1489.log
OpenText Documentum Connection Broker (version 16.4.0170.0234  Linux64)
Copyright (c) 2018. OpenText Corporation
HOST TRANSLATION TABLE:
    [1] From(1.1.1.100), to(2.2.2.200)
PORT TRANSLATION TABLE:
    [1] From(1489), to(1489)
2019-12-15T10:25:22.307379 [DM_DOCBROKER_I_START]info:  "Docbroker has started.  Process id: 18219"
[dmadmin@cs-0 dba]$

 

Once that is done, back on the DFC Client side, trying to connect to the Repository:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234567
Docbase description : dbi-ns01 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO1
Docbase id          : 1234568
Docbase description : dbi-ns01 dev k8s repo1
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$

 

As you can see above, dmqdocbroker still prints the internal IP (1.1.1.100); that's fine/normal. However, the Repository connection should now work:

[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6888000175b started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@vm ~]$

 

So as you can see above, using the docbroker translation mechanisms is indeed a solution to be able to connect to a Repository that is inside a K8s pod. There are drawbacks as mentioned above but at least, that’s a valid workaround.

 

3. Using different ports externally

Above, I have always used the same ports internally and externally. However, in a real case you will probably end up with hundreds or even thousands of CS pods. So how do you manage that? Well, you saw above that the docbroker translation can be used to translate an external port into an internal port, but it's not just for the docbroker port! You can actually use it for the Repository ports as well.

Let’s say for this example that I have a second namespace (dbi-ns02) with the following:

  • DFC Client Host: vm
  • K8s pod short name (hostname): cs-0
  • K8s pod full name (headless service / full hostname): cs-0.cs.dbi-ns02.svc.cluster.local
  • K8s pod IP: 1.1.1.200
  • K8s pod docbroker port: 1489/1490
  • K8s pod Repositories port: gr_repo=49400/49401    //    REPO2=49402/49403
  • K8s external hostname/lb: k8s-cs-dbi-ns02.domain.com
  • K8s external IP: 2.2.2.200
  • K8s external docbroker port: 1491/1492
  • K8s external Repositories port: gr_repo=49404/49405    //    REPO2=49406/49407

 

The external IP is still the same because it's the same K8s Cluster, but the external ports are now different. The internal IP is also different because it's another namespace. So with the default docbroker configuration (no translation), we obviously have the same issue: the iAPI session hangs and never responds because of the IP that doesn't exist.

So if we set up the basic docbroker translation just like we did above, then on the K8s pod we will have the following:

[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ ifconfig | grep inet | grep -v 127.0.0.1
        inet 1.1.1.200  netmask 255.255.255.255  broadcast 0.0.0.0
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[TRANSLATION]
port=1491=1489
host=2.2.2.200=1.1.1.200
[dmadmin@cs-0 dba]$

 

With this configuration, if you try to connect from an external DFC Client, it will be able to talk to the docbroker (assuming you have all the K8s pieces in place to redirect the ports properly) but it won't be able to talk to the Repository:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns02.domain.com -p 1491 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1491
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 3d4 02020286 cs-0 1.1.1.200
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234569
Docbase description : dbi-ns02 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO2
Docbase id          : 1234570
Docbase description : dbi-ns02 dev k8s repo2
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$
[dmadmin@vm ~]$ echo "quit" | iapi REPO2 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO2
:Wrong docbase id: (1234570) expecting: (1234568)

Could not connect
[dmadmin@vm ~]$

 

The reason is that I have been talking to the docbroker on the external port 1491, i.e. the docbroker listening on 1489 in the second namespace (“dbi-ns02“). This docbroker replied to the DFC Client that the Repository uses the ports 49402/49403, which is true, but only internally… Therefore, my DFC Client tried to connect to the Repository REPO2 (from the second namespace) using an external port that actually belongs to REPO1 (from the first namespace), hence the mismatch in the Repository ID.

To fix that, you can update the docbroker translation to include the Repository ports as well:

[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[TRANSLATION]
port=1491=1489,49404=49400,49405=49401,49406=49402,49407=49403
host=2.2.2.200=1.1.1.200
[dmadmin@cs-0 dba]$

 

With this new docbroker translation configuration, the external DFC Client should be able to communicate properly with the repository:

[dmadmin@vm ~]$ echo "quit" | iapi REPO2 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO2
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68a80001403 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@vm ~]$

 

As an alternative to all that, you might want to take a look at Traefik or Istio, which might also help you configure the correct communications from outside K8s to the inside. I opened a case with OpenText Support so that they can correct the documentation for all versions.

 

The article Documentum – Connection to docbrokers and Repositories inside K8s from an external DFC Client appeared first on Blog dbi services.

push_having_to_gby() – 2

Jonathan Lewis - Fri, 2020-01-03 05:31

The problem with finding something new and fiddling with it and checking to see how you can best use it to advantage is that you sometimes manage to “break” it very quickly. In yesterday’s blog note I introduced the /*+ push_having_to_gby(@qbname) */ hint and explained why it was a useful little enhancement. I also showed a funny little glitch with a missing predicate in the execution plan.

Today I thought I’d do something a little more complex with the example I produced yesterday, and I’ve ended up with a little note that’s not actually about the hint, it’s about something that appeared in my initial testing of the hint, and then broke when I pushed it a little further. Here’s a script to create data for the new test:

rem
rem     Script:         push_having_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',50,'x')                padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

insert into t1 values (2, lpad(2,10,'0'), lpad('x',50,'x'));
commit;

alter table t1 modify id not null;
create index t1_i1 on t1(id) nologging;

create table t2 as select * from t1;
create index t2_i1 on t2(id) nologging;

I’ve created two tables here, one a clone of the other, with one id value out of 1 million having two rows. As we saw yesterday it’s quite simple to write some SQL that uses an index full scan on the t1_i1 index to check for duplicate id values without doing a massive sort or hash aggregation:


set serveroutput off
alter session set statistics_level = all;

select
        /*+
                qb_name(driver)
                index(@driver t1@driver)
        */
        id 
from
        t1
where   id is not null
group by 
        id 
having  
        count(1) > 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));


-------------------------------------------------------------------------------------------------
| Id  | Operation            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |      1 |        |      1 |00:00:00.87 |    2229 |   2228 |
|   1 |  SORT GROUP BY NOSORT|       |      1 |  50000 |      1 |00:00:00.87 |    2229 |   2228 |
|   2 |   INDEX FULL SCAN    | T1_I1 |      1 |   1000K|   1000K|00:00:00.40 |    2229 |   2228 |
-------------------------------------------------------------------------------------------------

As we saw yesterday this plan simply walks the index in order keeping track of a “running count” and doesn’t allocate a large PGA to sort a million rows of data, but there’s no asterisk by any operation telling us that there’s a predicate being checked, and no Predicate Information section to report the “count(1) > 1” predicate that we know exists (and is used, since the query produces the right answer).

Having ascertained that there is one duplicated id in the table, let’s join to the (clone) t2 table to list the rows for that id – and lets use the initial query as an inline view:

select
        /*+ 
                qb_name(main)
        */
        t2.v1
from    (
        select
                /*+
                        qb_name(driver)
                        index(@driver t1@driver)
                        no_use_hash_aggregation(@driver)
                */
                id 
        from
                t1
        where   id is not null
        group by 
                id 
        having  
                count(1) > 1
        )                       v1,
        t2
where
        t2.id = v1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |      2 |00:00:00.76 |    2234 |     87 |       |       |          |
|   1 |  NESTED LOOPS                |       |      1 |  50000 |      2 |00:00:00.76 |    2234 |     87 |       |       |          |
|   2 |   NESTED LOOPS               |       |      1 |        |      2 |00:00:00.75 |    2232 |     28 |       |       |          |
|   3 |    VIEW                      |       |      1 |  50000 |      1 |00:00:00.75 |    2228 |      0 |       |       |          |
|*  4 |     SORT GROUP BY            |       |      1 |  50000 |      1 |00:00:00.75 |    2228 |      0 |    53M|  2539K|   47M (0)|
|   5 |      INDEX FULL SCAN         | T1_I1 |      1 |   1000K|   1000K|00:00:00.26 |    2228 |      0 |       |       |          |
|*  6 |    INDEX RANGE SCAN          | T2_I1 |      1 |        |      2 |00:00:00.01 |       4 |     28 |       |       |          |
|   7 |   TABLE ACCESS BY INDEX ROWID| T2    |      2 |      1 |      2 |00:00:00.01 |       2 |     59 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(COUNT(*)>1)
   6 - access("T2"."ID"="V1"."ID")

As you can see from this plan, I didn’t get the “sort group by nosort” that I wanted – even though the inline view was not merged. In fact, you’ll notice the /*+ no_use_hash_aggregation() */ hint I had to include to get a sort group by rather than a hash group by. The logic behind resolving this query block changed significantly when it went into a more complex query.

Having tried adding several other hints (blocking nlj_prefetch, nlj_batching, batched index access, setting cardinality to 1, first_rows(1) optimisation) I finally came down to using a materialized CTE (common table expression / “with” subquery):

with v1 as (
        select
                /*+
                        qb_name(driver)
                        index(@driver t1@driver)
                        materialize
                */
                id 
        from
                t1
        where
                id is not null
        group by 
                id 
        having  
                count(1) > 1
)
select
        /*+ 
                qb_name(main)
        */
        t2.v1
from    
        v1,
        t2
where
        t2.id = v1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |      2 |00:00:00.86 |    2236 |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |      2 |00:00:00.86 |    2236 |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D66F8_E3B235A |      1 |        |      0 |00:00:00.86 |    2229 |
|   3 |    SORT GROUP BY NOSORT                  |                            |      1 |  50000 |      1 |00:00:00.86 |    2228 |
|   4 |     INDEX FULL SCAN                      | T1_I1                      |      1 |   1000K|   1000K|00:00:00.39 |    2228 |
|   5 |   NESTED LOOPS                           |                            |      1 |  50000 |      2 |00:00:00.01 |       6 |
|   6 |    NESTED LOOPS                          |                            |      1 |        |      2 |00:00:00.01 |       4 |
|   7 |     VIEW                                 |                            |      1 |  50000 |      1 |00:00:00.01 |       0 |
|   8 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D66F8_E3B235A |      1 |  50000 |      1 |00:00:00.01 |       0 |
|*  9 |     INDEX RANGE SCAN                     | T2_I1                      |      1 |        |      2 |00:00:00.01 |       4 |
|  10 |    TABLE ACCESS BY INDEX ROWID           | T2                         |      2 |      1 |      2 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - access("T2"."ID"="V1"."ID")

You’ll notice that the hinting is back to the bare minimum – with only the addition of the /*+ materialize */ hint in the CTE. You’ll also notice that the “count(1) > 1” predicate is still missing. But critically we do have the index full scan leading into a sort group by nosort and no huge memory allocation.

The price we have to pay is that we do direct path writes to the temporary tablespace to materialize the CTE and db file scattered reads to read the data back. But since this example is aimed at a large data set returning a small result set this may be a highly appropriate trade off.

It’s possible that a detailed examination of the 10053 trace file would give us a clue about why Oracle can find the sort group by nosort when the query block is a materialized CTE but not when it’s an inline view – but I’m happy to leave that investigation to someone else and just leave this here as a warning that sometimes (even in 19c) there’s a difference between a non-merged view path and a materialized subquery path.
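
For anyone who wants to pick up that investigation, a minimal sketch for capturing the 10053 trace of the two variants (the connect string is a placeholder; the two queries go where the comment sits):

sqlplus -S user/password@db <<'EOF'
alter session set tracefile_identifier = 'push_having';
alter session set events '10053 trace name context forever, level 1';
-- run the inline view version and the materialized CTE version here
alter session set events '10053 trace name context off';
select value from v$diag_info where name = 'Default Trace File';
EOF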

 

WebLogic Server – Automatic/Silent setup of a SAML2 SSO

Yann Neuhaus - Thu, 2020-01-02 15:31

In a previous blog, I explained how it is possible to create an LDAP/LDAPs Authentication Provider on WebLogic. My initial goal wasn't just to set up an LDAP/LDAPs on WebLogic Server; that was only a first step needed to automate the setup of a SAML2 Single Sign-On linked with authentication from an LDAPs. Therefore, in this blog, we will take a look at that second part. Just like for the LDAP Authentication Provider, there are plenty of examples on the internet to do just that, but they all use the GUI. When I searched, I didn't find a single one explaining how it could be done without. Maybe there are some, but if so, they are pretty well hidden. You might also think about just recording the steps in the WebLogic Administration Console so that it creates the needed WLST scripts for you (just like for the LDAPs provider creation). Unfortunately, it's not that simple: it doesn't work for everything, and most of the needed steps are outside of an edit session and therefore can't be recorded.

In this blog, I will use SAML 2.0; I will assume that an Identity Provider (“Server side“) has already been configured, and I will configure a WebLogic Server (“Client side“ = Service Provider) to use this Identity Provider through a WebSSO partner. In the WebLogic examples provided with the full OFM installation, there is a complete example for SAML2 on both the Server and Client sides. For the Client side, however, they use a manual creation of the IdP Partner, importing the SSL Certificate, defining the URLs, and so on… A simpler and faster approach is to use a metadata file that can be extracted/exported from the Server side, which contains all this information, and then imported on the Client side. That's what I will show below, so it is quite different from what is done in the example.

Alright, so the first thing to be done is to create a new Authentication Provider using the SAML2IdentityAsserter type. Because this change requires a full restart of the WebLogic Server, I usually do it together with the LDAP Authentication Provider, but for this example I will split things and only talk about the SAML2 part. Just like in the previous blog, I will use a properties file and a WLST script. You can disregard the LDAP Authentication Provider parameters; they are only used for the LDAP part in the other blog, except ATN_NAME, which I still use below, but only in case you have an LDAP/LDAPs Authentication Provider in addition to the SAML2 one you want to create:

[weblogic@weblogic-server-0 ~]$ cat domain.properties
# AdminServer parameters
CONFIG_FILE=/home/weblogic/secure/configfile.secure
KEY_FILE=/home/weblogic/secure/keyfile.secure
ADMIN_URL=t3s://weblogic-server-0.domain.com:8443
# LDAP Authentication Providers parameters
ATN_NAME=Internal_LDAP
ATN_FLAG=SUFFICIENT
ATN_HOST=ldap.domain.com
ATN_PORT=636
ATN_PRINCIPAL=ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com
ATN_CREDENTIAL=T3stP4ssw0rd
ATN_SSL=true
ATN_BASE_DN=ou=people,ou=intranet,dc=dbi services,dc=com
ATN_USER_FILTER=(&(uid=%u)(objectclass=person))
ATN_USER_CLASS=person
ATN_USER_AS_PRINCIPAL=true
ATN_GROUP_FILTER=(&(cn=%g)(objectclass=groupofuniquenames))
ATN_TIMEOUT=30
# IdP Partner parameters
IDA_NAME=APP_SAML2_IDAsserter
IDP_NAME=APP_SAML2_IDPartner
IDP_METADATA=/home/weblogic/idp_metadata.xml
IDP_ENABLED=true
IDP_REDIRECT_URIS=['/D2-01/*','/D2-02/*']
# Managed Servers SSO parameters
SSO_MS=msD2-01,msD2-02
SSO_URLS=https://lb_url1/saml2,https://lb_url2/saml2
SSO_ENTITY_IDS=APP_SAML2_Entity_ID_01,APP_SAML2_Entity_ID_02
SSO_SP_ENABLED=true
SSO_SP_BINDING=HTTP/POST
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ cat createSAML2AuthenticationProviders.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: createSAML2AuthenticationProviders.wlst
# Purpose: Script to create SAML2 Authentication Providers
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."

  # Start Edit Session
  edit()
  startEdit()
  stopRedirect()
  print ">>> Edit Session started."

  # Get default Realm
  realm=cmo.getSecurityConfiguration().getDefaultRealm()

  # Create Authentication Providers
  saml2IdA=realm.lookupAuthenticationProvider(IDA_NAME)
  if saml2IdA != None:
    realm.destroyAuthenticationProvider(saml2IdA)
  saml2IdA=realm.createAuthenticationProvider(IDA_NAME,'com.bea.security.saml2.providers.SAML2IdentityAsserter')
  print ">>> Authentication Provider created."

  # Reorder Authentication Providers
  defaultAtn=realm.lookupAuthenticationProvider('DefaultAuthenticator')
  defaultIdA=realm.lookupAuthenticationProvider('DefaultIdentityAsserter')
  iplanetAtn=realm.lookupAuthenticationProvider(ATN_NAME)
  realm.setAuthenticationProviders(jarray.array([saml2IdA,iplanetAtn,defaultAtn,defaultIdA],weblogic.management.security.authentication.AuthenticationProviderMBean))
  print ">>> Authentication Providers re-ordered."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
save()
activate()
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

So let’s execute this script then:

[weblogic@weblogic-server-0 ~]$ ls
configServiceProviders.wlst  createSAML2AuthenticationProviders.wlst  createWebSSOIdPPartners.wlst  domain.properties  idp_metadata.xml
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createSAML2AuthenticationProviders.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Edit Session started.
>>> Authentication Provider created.
>>> Authentication Providers re-ordered.
[weblogic@weblogic-server-0 ~]$

 

As mentioned previously, you will need to restart the WebLogic Domain at this point; a restart sketch is shown below.
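
A minimal restart sketch, using the standard domain scripts (the server name and admin URL are the ones from this example's properties file; adapt to your own start/stop wrappers or NodeManager setup):

# stop/start the AdminServer
$DOMAIN_HOME/bin/stopWebLogic.sh
nohup $DOMAIN_HOME/bin/startWebLogic.sh > $DOMAIN_HOME/nohup_admin.out 2>&1 &
# then each Managed Server, e.g.:
nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh msD2-01 t3s://weblogic-server-0.domain.com:8443 > $DOMAIN_HOME/nohup_msD2-01.out 2>&1 &

With the domain back up, you can continue with the next part, which is to create the IdP Partner, using the same properties file and another WLST script: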

[weblogic@weblogic-server-0 ~]$ cat createWebSSOIdPPartners.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: createWebSSOIdPPartners.wlst
# Purpose: Script to create a WebSSO IdP Partner
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."
  stopRedirect()

  # Get default Realm
  realm=cmo.getSecurityConfiguration().getDefaultRealm()

  # Config Web SSO IdP Partner
  saml2IdA=realm.lookupAuthenticationProvider(IDA_NAME)
  if saml2IdA != None:
    if saml2IdA.idPPartnerExists(IDP_NAME):
      saml2IdA.removeIdPPartner(IDP_NAME)
    idpPartner=saml2IdA.consumeIdPPartnerMetadata(IDP_METADATA)
    idpPartner.setName(IDP_NAME)
    idpPartner.setEnabled(Boolean(IDP_ENABLED))
    idpPartner.setRedirectURIs(array(eval(IDP_REDIRECT_URIS),java.lang.String))
    saml2IdA.addIdPPartner(idpPartner)
  print ">>> Web SSO IdP Partner created."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

As you can see above, this one doesn't require an edit session and therefore can't be recorded. The key part is the “consumeIdPPartnerMetadata(…)” method, which loads the metadata file that was generated by the Identity Provider (“Server side“). It takes care of setting up the SSL Certificate for the Identity Provider as well as all the usable URLs, and so on. The path and name of this input metadata file can be found in the properties file. The execution of the WLST script is simple and smooth:

[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createWebSSOIdPPartners.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Web SSO IdP Partner created.
[weblogic@weblogic-server-0 ~]$
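
If you want to double-check the result without opening the Administration Console, here is a minimal read-only WLST sketch (assuming the same domain.properties values are loaded with loadProperties(), as in the scripts above):

# Sketch only: verify the IdP Partner is registered (no edit session needed)
connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
realm=cmo.getSecurityConfiguration().getDefaultRealm()
saml2IdA=realm.lookupAuthenticationProvider(IDA_NAME)
print saml2IdA.idPPartnerExists(IDP_NAME)
disconnect()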

 

The next step is then to configure your Managed Servers by creating the Service Providers, defining the published URL, the Entity ID and other parameters, and then generating an output metadata file for each of your Managed Servers. These output metadata files will need to be imported into the Identity Provider to close the SAML2 SSO chain. Again, a new WLST script for this last part:

[weblogic@weblogic-server-0 ~]$ cat configServiceProviders.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: configServiceProviders.wlst
# Purpose: Script to configure SSO Service Providers
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."

  # Start Edit Session
  edit()
  startEdit()
  stopRedirect()
  print ">>> Edit Session started."

  # Config SSO Service Providers
  publishedSiteURLs=SSO_URLS.split(',')
  entityIDs=SSO_ENTITY_IDS.split(',')
  id=0
  for ssoServerName in SSO_MS.split(','):
    ssoServer=cmo.lookupServer(ssoServerName)
    ssoService=ssoServer.getSingleSignOnServices()
    ssoService.setPublishedSiteURL(publishedSiteURLs[id])
    ssoService.setEntityID(entityIDs[id])
    ssoService.setServiceProviderEnabled(Boolean(SSO_SP_ENABLED))
    ssoService.setServiceProviderPreferredBinding(SSO_SP_BINDING)
    id=id+1
  print ">>> SSO Service Providers configured."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
save()
activate()

try:
  # Start Runtime Session
  domainRuntime()
  stopRedirect()
  print ">>> Runtime Session started."

  # Export Service Providers metadata
  for ssoServerName in SSO_MS.split(','):
    cd('/ServerRuntimes/'+ssoServerName)
    cmo.getSingleSignOnServicesRuntime().publish('/tmp/'+ssoServerName+'_sp_metadata.xml',false)
  print ">>> Service Providers metadata files exported."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

So as mentioned above, the first section loops over the Managed Servers list from the parameters to configure the SAML2 SSO for all of them. This part requires an edit session. The second section exports the Service Providers metadata files under /tmp; this doesn't need an edit session, it needs a runtime session instead. Again, the execution:

[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh configServiceProviders.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Edit Session started.
>>> SSO Service Providers configured.
>>> Runtime Session started.
>>> Service Providers metadata files exported.
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ ls /tmp/*metadata.xml
/tmp/msD2-01_sp_metadata.xml  /tmp/msD2-02_sp_metadata.xml
[weblogic@weblogic-server-0 ~]$

 

At that point, the WebLogic Server acting as Service Provider is fully configured. You can now transfer these metadata files to the Identity Provider side and import them there.

There is one last thing that I didn't talk about: the configuration of the Application itself, if needed, to use the SAML2 SSO. In the case of Documentum D2, it does support LDAP + SAML2 SSO; you just need some basic configuration in the web.xml and weblogic.xml. There is an example I wrote a little bit more than two years ago: here.

 

The article WebLogic Server – Automatic/Silent setup of a SAML2 SSO appeared first on Blog dbi services.

WebLogic Server – Automatic/Silent creation of an LDAP Authentication Provider

Yann Neuhaus - Thu, 2020-01-02 15:20

In a previous blog, I explained how it is possible to create an LDAP/LDAPs connection from a Documentum Content Server automatically/silently (without any need for a GUI). So I thought I would do the same thing but from a WebLogic Server, to have the full chain from the Application to the backend all connected to the LDAP/LDAPs. This blog isn't linked to Documentum; it is really just WebLogic Server specific, so if you want to do the same for another application, these are the steps you need to follow. There are plenty of blogs on the internet about how to configure WebLogic, but they are (almost?) always using the GUI… which is good because it's simple, but also annoying because you cannot really automate it.

As mentioned in this subsequent blog, my goal was a little bit more than just an LDAP setup, so I first searched for hints on what would be needed to set up everything. The only thing I found that was a little bit helpful was actually the examples that are shipped with the OFM (if you included them). We usually install only the minimal requirements so we don't have the examples, but you can choose to include them when you install the binaries: in the silent properties file, you can just set the install type to include “… With Examples“. Inside these examples, there is a SAML2 SSO one which seems pretty complex. There is a plethora of files for the purpose of the example, obviously, but most of that is completely useless outside of this scope. Also, from what I could see, it was designed for WebLogic Server 9, so it seemed to be pretty old… Since I was using WLS 12c, I obviously expected a lot of things to go wrong. It was nonetheless a good starting point to get some details about where you can find the needed elements in WLST, but you will still need a lot of knowledge in WLS and WLST to be able to make something out of it. That's where this blog comes in.

For the LDAP Authentication Provider creation, you can also record the execution from the Administration Console; it will give you good information about what needs to be done (at least for this part).

The first thing to do to set up LDAPs (it doesn't apply to a plain LDAP) is to add the LDAPs SSL Certificate chain into the WebLogic Server's trust store:

[weblogic@weblogic-server-0 ~]$ cert_location="/tmp/certs"
[weblogic@weblogic-server-0 ~]$ ssl_ldap_root_ca_file="LDAP_Root_CA.cer"
[weblogic@weblogic-server-0 ~]$ ssl_ldap_int_ca_file="LDAP_Int_CA.cer"
[weblogic@weblogic-server-0 ~]$ tks_file="$DOMAIN_HOME/certs/trust.jks"
[weblogic@weblogic-server-0 ~]$ tks_pwd="MyP4ssw0rd"
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ $JAVA_HOME/bin/keytool -import -trustcacerts -alias ssl_ldap_root_ca -file ${cert_location}/${ssl_ldap_root_ca_file} -keystore ${tks_file} -storepass ${tks_pwd} -noprompt
Certificate was added to keystore
[weblogic@weblogic-server-0 ~]$ $JAVA_HOME/bin/keytool -import -trustcacerts -alias ssl_ldap_int_ca -file ${cert_location}/${ssl_ldap_int_ca_file} -keystore ${tks_file} -storepass ${tks_pwd} -noprompt
Certificate was added to keystore
[weblogic@weblogic-server-0 ~]$
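
To quickly verify that both CAs really landed in the trust store, you can list its content with keytool (same variables as above); the two aliases should show up as trustedCertEntry:

[weblogic@weblogic-server-0 ~]$ $JAVA_HOME/bin/keytool -list -keystore ${tks_file} -storepass ${tks_pwd} | grep ssl_ldap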

 

Once that is done, you can start the creation of the LDAP Authentication Provider. To be able to automate that, the best option for me is a WLST script. Make sure the AdminServer is up, running and reachable before trying to execute a WLST script. I put all my parameters in a properties file and I load this file in the WLST script so that it creates the correct object with all the needed parameters. Here are the properties and the WLST script to create the LDAP (you can disregard the IdP and Managed Servers parameters, they are only used for the SAML2 SSO part in the other blog):

[weblogic@weblogic-server-0 ~]$ cat domain.properties
# AdminServer parameters
CONFIG_FILE=/home/weblogic/secure/configfile.secure
KEY_FILE=/home/weblogic/secure/keyfile.secure
ADMIN_URL=t3s://weblogic-server-0.domain.com:8443
# LDAP Authentication Providers parameters
ATN_NAME=Internal_LDAP
ATN_FLAG=SUFFICIENT
ATN_HOST=ldap.domain.com
ATN_PORT=636
ATN_PRINCIPAL=ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com
ATN_CREDENTIAL=T3stP4ssw0rd
ATN_SSL=true
ATN_BASE_DN=ou=people,ou=intranet,dc=dbi services,dc=com
ATN_USER_FILTER=(&(uid=%u)(objectclass=person))
ATN_USER_CLASS=person
ATN_USER_AS_PRINCIPAL=true
ATN_GROUP_FILTER=(&(cn=%g)(objectclass=groupofuniquenames))
ATN_TIMEOUT=30
# IdP Partner parameters
IDA_NAME=APP_SAML2_IDAsserter
IDP_NAME=APP_SAML2_IDPartner
IDP_METADATA=/home/weblogic/idp_metadata.xml
IDP_ENABLED=true
IDP_REDIRECT_URIS=['/D2-01/*','/D2-02/*']
# Managed Servers SSO parameters
SSO_MS=msD2-01,msD2-02
SSO_URLS=https://lb_url1/saml2,https://lb_url2/saml2
SSO_ENTITY_IDS=APP_SAML2_Entity_ID_01,APP_SAML2_Entity_ID_02
SSO_SP_ENABLED=true
SSO_SP_BINDING=HTTP/POST
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ cat createLDAPAuthenticationProviders.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: createLDAPAuthenticationProviders.wlst
# Purpose: Script to create LDAP/LDAPs Authentication Providers
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."

  # Start Edit Session
  edit()
  startEdit()
  stopRedirect()
  print ">>> Edit Session started."

  # Get default Realm
  realm=cmo.getSecurityConfiguration().getDefaultRealm()

  # Create Authentication Providers
  iplanetAtn=realm.lookupAuthenticationProvider(ATN_NAME)
  if iplanetAtn != None:
    realm.destroyAuthenticationProvider(iplanetAtn)
  iplanetAtn=realm.createAuthenticationProvider(ATN_NAME,'weblogic.security.providers.authentication.IPlanetAuthenticator')
  print ">>> Authentication Provider created."

  # Config Authentication Providers
  iplanetAtn.setControlFlag(ATN_FLAG)
  iplanetAtn.setHost(ATN_HOST)
  iplanetAtn.setPort(int(ATN_PORT))
  iplanetAtn.setPrincipal(ATN_PRINCIPAL)
  iplanetAtn.setCredential(ATN_CREDENTIAL)
  iplanetAtn.setSSLEnabled(Boolean(ATN_SSL))
  iplanetAtn.setUserBaseDN(ATN_BASE_DN)
  iplanetAtn.setUserFromNameFilter(ATN_USER_FILTER)
  iplanetAtn.setUserObjectClass(ATN_USER_CLASS)
  iplanetAtn.setUseRetrievedUserNameAsPrincipal(Boolean(ATN_USER_AS_PRINCIPAL))
  iplanetAtn.setGroupBaseDN(ATN_PRINCIPAL)
  iplanetAtn.setGroupFromNameFilter(ATN_GROUP_FILTER)
  iplanetAtn.setConnectTimeout(int(ATN_TIMEOUT))
  print ">>> Authentication Provider configured."

  # Reorder Authentication Providers
  defaultAtn=realm.lookupAuthenticationProvider('DefaultAuthenticator')
  defaultIdA=realm.lookupAuthenticationProvider('DefaultIdentityAsserter')
  realm.setAuthenticationProviders(jarray.array([iplanetAtn,defaultAtn,defaultIdA],weblogic.management.security.authentication.AuthenticationProviderMBean))
  print ">>> Authentication Providers re-ordered."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
save()
activate()
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

With the above, you have everything needed to create an LDAP Authentication Provider. I won't describe in detail what the WLST script is doing; I believe it is pretty self-explanatory and there is a commented line before each section which describes the purpose of the commands. If you have any questions, please feel free to ask them in the comments below! I used an IPlanet Authenticator but you can obviously choose something else. I also set the group base dn to my principal because I don't need the groups, but you can set whatever you want/need. There are other properties as well that you can set; just check them in WLST to find the correct method name (or use the record method as mentioned previously). In the above WLST script, the last thing done is to re-order the Authentication Providers so that the newly created LDAP one is the first to be checked. The control flag is set to “SUFFICIENT“, meaning that if the authentication is successful for the LDAP, then WebLogic can proceed. For the LDAP user's principal and password, you can also use an encrypted file containing the username and password with the “setEncrypted(…)“ method instead.
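
As a side note, the CONFIG_FILE/KEY_FILE pair referenced in the properties file can be generated once from any WLST session with the standard storeUserConfig() command; a minimal sketch (the admin credentials and paths below are examples, adjust them to your environment):

connect('weblogic','MyAdminP4ssw0rd','t3s://weblogic-server-0.domain.com:8443')
storeUserConfig('/home/weblogic/secure/configfile.secure','/home/weblogic/secure/keyfile.secure')
disconnect()

All subsequent connect() calls can then use these two files instead of a clear-text username/password, as done in all the scripts of this blog.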

To execute the WLST script and therefore create the LDAP Authentication Provider, just execute the script:

[weblogic@weblogic-server-0 ~]$ ls
createLDAPAuthenticationProviders.wlst  domain.properties
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createLDAPAuthenticationProviders.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Edit Session started.
>>> Authentication Provider created.
>>> Authentication Provider configured.
>>> Authentication Providers re-ordered.
[weblogic@weblogic-server-0 ~]$
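
To double-check the new provider order before restarting the Domain, a small read-only WLST sketch (again assuming the same properties file is loaded) will print the providers in their current order:

# Sketch only: list the Authentication Providers in their configured order
connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
realm=cmo.getSecurityConfiguration().getDefaultRealm()
for atn in realm.getAuthenticationProviders():
  print atn.getName()
disconnect()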

 

As shown above, you can pass a parameter to the script with the full path and name of the properties file to be loaded. Alternatively, if you do not provide any parameter, it will assume that the properties file is located right beside the WLST script with a certain name (“domain.properties” by default). In all cases, once the LDAP Authentication Provider has been created, you will need to restart the full Domain. That's all there is to do to create an LDAP/LDAPs connection on WebLogic Server.

 

The article WebLogic Server – Automatic/Silent creation of an LDAP Authentication Provider appeared first on Blog dbi services.

push_having_to_gby()

Jonathan Lewis - Thu, 2020-01-02 09:36

I came across an interesting new hint recently when checking the Outline Data for an execution plan: /*+ push_having_to_gby() */. It's an example of a “small” change designed to reduce CPU usage by reducing the volume of data that passes through the layers of calls that an execution plan represents. The hint appeared in 18.3 but I've run the following on 19.3 as a demonstration of what it does and why it's a good thing:

rem
rem     Script:         push_having.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem
rem     Notes:
rem     New (18c) push_having_to_gby() hint
rem     Avoids one pipeline (group by to filter) in
rem     execution plans.
rem

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',50,'x')                padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

alter table t1 modify id not null;
create index t1_i1 on t1(id) nologging;

set serveroutput off
alter session set statistics_level = all;

select
        /*+
                qb_name(driver)
        */
        id 
from
        t1
where   id is not null
group by 
        id 
having  
        count(1) > 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

There aren't very many options for the execution path for this query, and the default path taken on my database was an index fast full scan with hash aggregation:


-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |      1 |        |      0 |00:00:00.68 |    2238 |   2230 |       |       |          |
|*  1 |  HASH GROUP BY        |       |      1 |  50000 |      0 |00:00:00.68 |    2238 |   2230 |    55M|  7913K|   57M (0)|
|   2 |   INDEX FAST FULL SCAN| T1_I1 |      1 |   1000K|   1000K|00:00:00.20 |    2238 |   2230 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(COUNT(*)>1)

You'll notice that the query should return no rows – the way I've generated the id means it's unique even though I haven't declared a unique constraint/index. Did you also notice the common guess (5%) that the optimizer has used for the selectivity of the having clause? But have you spotted the 18c enhancement yet? If not, we'll get to it in a moment.

It just so happens that I know there is a better execution path than this for this specific query with my specific data set, so I’m going to put in a minimalist hint to tell the optimizer about it, just to see what happens. The data is very well organized, so using an index scan with running total will be significantly more efficient than a big hash group by:


select
        /*+
                qb_name(driver)
                index(@driver t1@driver)
        */
        id 
from
        t1
where   id is not null
group by 
        id 
having  
        count(1) > 1
;

----------------------------------------------------------------------------------------
| Id  | Operation            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |      1 |        |      0 |00:00:00.80 |    2228 |
|   1 |  SORT GROUP BY NOSORT|       |      1 |  50000 |      0 |00:00:00.80 |    2228 |
|   2 |   INDEX FULL SCAN    | T1_I1 |      1 |   1000K|   1000K|00:00:00.40 |    2228 |
----------------------------------------------------------------------------------------

Notice how the optimizer has obeyed my /*+ index(t1) */ hint and used an index full scan to walk through the t1_i1 index in order, doing a “sort group by” which doesn't need to do any sorting, so it's effectively using a simple running total to count repetitions. The timing (A-time) difference isn't really something to trust closely when dealing with brief time periods and rowsource_execution_statistics, but eliminating 57M of PGA allocation for the hash group by SQL workarea might be a significant benefit. But there's something else to be seen in this plan – if you can manage to see the things that aren't there.

So let’s push this query back to its 12c plan:

select
        /*+
                qb_name(driver)
                index(@driver t1@driver)
                optimizer_features_enable('12.2.0.1')
        */
        id 
from
        t1
where   id is not null
group by 
        id 
having  
        count(1) > 1
;

-----------------------------------------------------------------------------------------
| Id  | Operation             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |      1 |        |      0 |00:00:01.02 |    2228 |
|*  1 |  FILTER               |       |      1 |        |      0 |00:00:01.02 |    2228 |
|   2 |   SORT GROUP BY NOSORT|       |      1 |  50000 |   1000K|00:00:00.93 |    2228 |
|   3 |    INDEX FULL SCAN    | T1_I1 |      1 |   1000K|   1000K|00:00:00.45 |    2228 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(COUNT(*)>1)

Notice the FILTER that appears as operation 1. Oracle generates the aggregate data (which happens to total 1M rows) at the Sort Group By (whether it’s the Nosort option from the index full scan, or the “proper” hash group by from the index fast full scan) and passes the 1M row result set (estimated at 50K rows) up to the parent operation where the filter takes place. In 18c onwards the separate filter operation disappears and the filtering takes place as part of the aggregation. This is probably a good thing but if you ever want to disable it without switching everything back to the 12c optimizer features then there’s a dedicated hint: (no_)push_having_to_gby():


select
        /*+
                qb_name(driver)
                index(@driver t1@driver)
                no_push_having_to_gby(@driver)
        */
        id 
from
        t1
where   id is not null
group by 
        id 
having  
        count(1) > 1
;

-----------------------------------------------------------------------------------------
| Id  | Operation             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |      1 |        |      0 |00:00:01.05 |    2228 |
|*  1 |  FILTER               |       |      1 |        |      0 |00:00:01.05 |    2228 |
|   2 |   SORT GROUP BY NOSORT|       |      1 |  50000 |   1000K|00:00:00.91 |    2228 |
|   3 |    INDEX FULL SCAN    | T1_I1 |      1 |   1000K|   1000K|00:00:00.36 |    2228 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(COUNT(*)>1)

If you were looking very carefully (especially after my comment about “seeing things that aren't there”) you may have noticed that there's an odd detail in the example where I've hinted the index without blocking the push. There's no Predicate Information section in that execution plan – and that wasn't a mistake on my part – it simply doesn't exist – and you'll note that there's no asterisk by any of the Operation lines to remind you that there should be some Predicate Information! The “count(1) > 1” just doesn't get reported (even though it does get reported if you use dbms_xplan.display() after a call to explain plan). Fortunately, however, when I modified the model to duplicate one of the rows I did get the correct result – so even though the predicate is not reported it is still applied. [Side Note: did you notice that my original count(1) changes to count(*) when it gets to the Predicate Information? People still ask which is faster, count(1) or count(*) – the argument should have died back in Oracle 7 days.]

Summary

18c introduced a new optimisation that pushes a “having” predicate down into a group by operation. This reduces CPU usage by eliminating the need to pass a potentially large result from a child operation up to a parent filter operation. Unfortunately you may find that the predicate becomes invisible when you pull the execution plan from memory.

In the unlikely event that you manage to find a case where this optimisation is used when it would have been better to bypass it then there is a hint /*+ no_push_having_to_gby(@qbname) */ to block it, and if it doesn’t appear when you think it should then the opposite hint /*+ push_having_to_gby(@qbname) */ is available.

 

 

VirtualBox 6.1 : No compatible version of Vagrant yet! (or is there?)

Tim Hall - Wed, 2020-01-01 06:10

VirtualBox 6.1 was released on the 11th of December and I totally missed it.

The downloads and changelog are in the usual places.

I spotted it this morning, downloaded it and installed it straight away. I had no installation dramas on Windows 10, macOS Catalina and Oracle Linux 7 hosts.

The problem *for me* was that the current version of Vagrant (2.2.6) doesn't support VirtualBox 6.1 yet. I can't live without Vagrant these days, so I installed VirtualBox 6.0.14 again and normal life resumed. See Update.

I’m sure there will be a new release of Vagrant soon that supports VirtualBox 6.1, but for now if you use Vagrant, don’t upgrade to VirtualBox 6.1 yet. I’m sure you won’t have to wait long… See Update.

Cheers

Tim…

Update 1 : A couple of people, Peter Wahl and Andrea Cremonesi, pointed me at this post by Simon Coter, which contains config changes to allow Vagrant 2.2.6 to run with VirtualBox 6.1.

Update 2 : I’ve followed Simon’s post and it worked fine. If you are using Windows 10 as the host and have done a default installation of Vagrant, the files he’s discussing are in these directories.

C:\HashiCorp\Vagrant\embedded\gems\2.2.6\gems\vagrant-2.2.6\plugins\providers\virtualbox\driver\

C:\HashiCorp\Vagrant\embedded\gems\2.2.6\gems\vagrant-2.2.6\plugins\providers\virtualbox\

Update 3 : I updated my work PC also. It required a couple of reboots to get things working. I think it may be something to do with the way we do security here. It's working fine now.


Happy New Year 2020

Senthil Rajendran - Tue, 2019-12-31 22:02

How To Connect Autonomous Database With SQL Developer Web

Online Apps DBA - Tue, 2019-12-31 01:39

How to connect Oracle Autonomous Database using SQL Developer Web. Once you create an autonomous database on Oracle Cloud, the easiest way to connect is using Oracle SQL Developer Web. Check out: https://k21academy.com/clouddba49 The blog post discusses the: ✦ Oracle SQL Developer Web Overview ✦ Two ways to Connect SQL Developer Web in Autonomous Database as ADMIN […]

The post How To Connect Autonomous Database With SQL Developer Web appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Configure Distribution Service between two Secure GoldenGate Microservices Architectures

DBASolved - Mon, 2019-12-30 14:46

Once you configure an Oracle GoldenGate Microservices environment to be secure behind the Nginx reverse proxy, the next thing you have to do is tackle how to connect one environment to the other using the Distribution Server.  In using the Distribution Server, you will be creating what is called a Distribution Path. Distribution Paths are […]

The post Configure Distribution Service between two Secure GoldenGate Microservices Architectures appeared first on DBASolved.

Categories: DBA Blogs

Oracle Visual Builder Cloud Service (VBCS) Overview & Features

Online Apps DBA - Mon, 2019-12-30 05:23

Oracle Visual Builder Cloud Service (VBCS) Overview & Features Oracle Visual Builder provides an easy way to create and host web and mobile applications in a secure Cloud environment. If you are working on Oracle Integration Cloud then must have heard about the term VBCS, check at http://k21academy.com/oic19 which covers: ▪What is VBCS? ▪Why Oracle […]

The post Oracle Visual Builder Cloud Service (VBCS) Overview & Features appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Scalar Subq Bug

Jonathan Lewis - Mon, 2019-12-30 03:30

This is an observation that came up on the Oracle Developer Forum a couple of days ago, starting life as the fairly common problem:

I have a “select” that runs quickly but when I use it in a “create as select” it runs very slowly.

In many cases this simply means that the query was a distributed query and the plan changed because the driving site changed from the remote to the local server. There are a couple of other reasons, but distributed DML is the one most commonly seen.

In this example, though, the query was not a distributed query, it was a fully local query. There were three features to the query that were possibly suspect, though:

  • “ANSI” syntax
  • scalar subqueries in the select list
  • redundant “order by” clauses in inline views

The OP had supplied the (horrible) SQL in a text format along with images from the Enterprise Manager SQL Monitor screen showing the two execution plans, and two things were obvious from the plans – first that the simple select had eliminated the scalar subqueries (which were redundant) while the CTAS had kept them in the plan, and secondly that most of the elapsed time for the CTAS was spent in lots of executions of the scalar subqueries.

My first thought was that the problem was probably a quirk of how the optimizer translates “ANSI” SQL to Oracle-standard SQL, so I created a model that captured the key features of the problem – starting with 3 tables:

rem
rem     Script:         ctas_scalar_subq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1 as
select * from all_objects
where rownum <= 10000 -- > comment to avoid wordpress format issue
;

alter table t1 add constraint t1_pk primary key(object_id);

create table t2 as
select * from t1
;

alter table t2 add constraint t2_pk primary key(object_id);

create table t3 as
select * from all_objects
where rownum <= 500 -- > comment to avoid wordpress format issue
;

alter table t3 add constraint t3_pk primary key(object_id);

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T3',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’m going to use the small t3 table as the target for a simple scalar subquery in the select list of a query that selects some columns from t2; then I’m going to use that query as an inline view in a join to t1 and select some columns from the result. Here’s the starting query that’s going to become an inline view:


select 
        t2.*,
        (
        select  t3.object_type 
        from    t3 
        where   t3.object_id = t2.object_id
        )       t3_type
from
        t2
order by
        t2.object_id
;

And here’s how I join the result to t1:


explain plan for
        select
                v2.*
        from    (
                select
                        t1.object_id,
                        t1.object_name  t1_name,
                        v1.object_name  t2_name,
                        t1.object_type  t1_type,
                        v1.object_type  t2_type
                from
                        t1
                join (
                        select 
                                t2.*,
                                (
                                select  t3.object_type 
                                from    t3 
                                where   t3.object_id = t2.object_id
                                )       t3_type
                        from
                                t2
                        order by
                                t2.object_id
                )       v1
                on
                        v1.object_id = t1.object_id
                and     v1.object_type = 'TABLE'
                )       v2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

The initial t2 query becomes an inline view called v1, and that becomes the second table in a join with t1. I’ve got the table and view in this order because initially the OP had an outer (left) join preserving t1 and I thought that that might be significant, but it turned out that it wasn’t.

Having joined t1 and v1 I’ve selected a small number of columns from the t1 and t2 tables and ignored the column that was generated by the inline scalar subquery. (This may seem a little stupid – but the same problem appears when the inline view is replaced with a stored view, which is a more realistic possibility.) Here’s the resulting execution plan (taken from 11.2.0.4 in this case):


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   476 | 31416 |    45  (12)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   476 | 31416 |    45  (12)| 00:00:01 |
|   2 |   VIEW               |      |   476 | 15708 |    23  (14)| 00:00:01 |
|   3 |    SORT ORDER BY     |      |   476 | 41888 |    23  (14)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T2   |   476 | 41888 |    22  (10)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T1   | 10000 |   322K|    21   (5)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   4 - filter("T2"."OBJECT_TYPE"='TABLE')

I was a little surprised by this plan as I had expected the optimizer to eliminate the in-line “order by” in view v1 – but even when I changed the code to traditional Oracle join syntax the redundant and wasteful sort at operation 3 still took place. (You might note that the data will be reported in an order dictated by the order of the data arriving from the t1 tablescan thanks to the mechanism of the hash join, so the sort is a total waste of effort.)

The plus point, of course, is that the optimizer had been smart enough to eliminate the scalar subquery referencing t3. The value returned from t3 is not needed anywhere in the course of the execution, so it simply disappears.

Now we change from a simple select to a Create as Select which I’ve run, with rowsource execution stats enabled, using Oracle 19.3 for this output:

set serveroutput off
set linesize 156
set trimspool on
set pagesize 60

alter session set statistics_level = all;

create table t4 as
        select  
                v2.*
        from    (
                select
                        t1.object_id,
                        t1.object_name  t1_name,
                        v1.object_name  t2_name,
                        t1.object_type  t1_type,
                        v1.object_type  t2_type
                from
                        t1
                join (
                        select 
                                t2.*,
                                (
                                select  t3.object_type 
                                from    t3 
                                where   t3.object_id = t2.object_id
                                )       t3_type
                        from
                                t2
                        order by 
                                t2.object_id
                )       v1
                on
                        v1.object_id = t1.object_id
                and     v1.object_type = 'TABLE'
                )       v2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;

And here’s the run-time execution plan – showing the critical error and statistics to prove that it really happened:

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Writes |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |       |      1 |        |      0 |00:00:00.01 |     471 |      3 |       |       |          |
|   1 |  LOAD AS SELECT                  | T4    |      1 |        |      0 |00:00:00.01 |     471 |      3 |  1042K|  1042K| 1042K (0)|
|   2 |   OPTIMIZER STATISTICS GATHERING |       |      1 |    435 |    294 |00:00:00.01 |     414 |      0 |   256K|   256K|  640K (0)|
|*  3 |    HASH JOIN                     |       |      1 |    435 |    294 |00:00:00.01 |     414 |      0 |  1265K|  1265K| 1375K (0)|
|   4 |     VIEW                         |       |      1 |    435 |    294 |00:00:00.01 |     234 |      0 |       |       |          |
|   5 |      TABLE ACCESS BY INDEX ROWID | T3    |    294 |      1 |     50 |00:00:00.01 |      54 |      0 |       |       |          |
|*  6 |       INDEX UNIQUE SCAN          | T3_PK |    294 |      1 |     50 |00:00:00.01 |       4 |      0 |       |       |          |
|   7 |      SORT ORDER BY               |       |      1 |    435 |    294 |00:00:00.01 |     234 |      0 | 80896 | 80896 |71680  (0)|
|*  8 |       TABLE ACCESS FULL          | T2    |      1 |    435 |    294 |00:00:00.01 |     180 |      0 |       |       |          |
|   9 |     TABLE ACCESS FULL            | T1    |      1 |  10000 |  10000 |00:00:00.01 |     180 |      0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   6 - access("T3"."OBJECT_ID"=:B1)
   8 - filter("T2"."OBJECT_TYPE"='TABLE')

You'll notice that the VIEW at operation 4 reports the inline scalar subquery as operations 5 and 6, and the Starts column shows that the scalar subquery executes 294 times – which is the number of rows returned by the scan of table t2. Although my first thought was that this was an artefact of the transformation from ANSI to Oracle syntax, it turned out that when I modified the two statements to use traditional Oracle syntax the same difference appeared. Finally, I re-ran the CTAS after removing the order by clause in the in-line view and the redundant subquery disappeared from the execution plan.

Tiny Geek bit

It's not immediately obvious why there should be such a difference between the select and the CTAS in this case, but the 10053 trace files do give a couple of tiny clues. The CTAS trace file includes the lines:

ORE: bypassed - Top query block of a DML.
TE: Bypassed: Top query block of a DML.
SQT:    SQT bypassed: in a transaction.

The first two suggest that we should expect some cases where DML statements optimise differently from simple queries. The last one is a further indication that differences may appear. (SQT – might this be subquery transformation? It doesn't appear in the list of abbreviations in the trace file.)

Unfortunately the SELECT trace file also included the line:


SQT:     SQT bypassed: Disabled by parameter.

So “SQT” – whatever that is – being in or out of a transaction may not have anything to do with the difference.

Summary

There are cases where optimising a select statement is not sufficient as a strategy for optimising a CTAS statement. In this case it looks as if an inline view which was non-mergable (thanks to a redundant order by clause) produced the unexpected side-effect that a completely redundant scalar subquery in the select list of the inline view was executed during the CTAS even though it was transformed out of existence for the simple select.

There are some unexpected performance threats in “cut-and-paste” coding and in re-using stored views if you haven't checked carefully what they do and how they're supposed to be used.

 

 

Installing Nginx

DBASolved - Sat, 2019-12-28 22:03

With Oracle GoldenGate Microservices, you have the option of using a reverse proxy or not.  In reality, it is a best practice to install the recommended reverse proxy for the architecture.  The main benefit here is the security aspect of using it.  In Oracle GoldenGate Microservices, depending on the number of deployments you have per […]

The post Installing Nginx appeared first on DBASolved.

Categories: DBA Blogs

[Troubleshoot] instance ocid1.instance.oc1.iad.XX Not Found While Deploying EBS Cloud Manager: config.pl

Online Apps DBA - Sat, 2019-12-28 07:39

[Troubleshoot] instance ocid1.instance.oc1.iad.XX Not Found While Deploying EBS Cloud Manager: config.pl While running the “config.pl” to configure the EBS Cloud Manager did you encounter the ⚠”instance ocid1.instance.oc1.iad.XX Not Found” issue? If yes, check the blog post at https://k21academy.com/ebscloud35 that covers the root cause & fixes of the issue encountered while running the “config.pl” and things […]

The post [Troubleshoot] instance ocid1.instance.oc1.iad.XX Not Found While Deploying EBS Cloud Manager: config.pl appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[New Feature] Share Block Volume with Multiple Instances In Oracle Cloud (OCI)

Online Apps DBA - Sat, 2019-12-28 04:10

[New Feature] Share Block Volume with Multiple Instances In Oracle Cloud (OCI) Oracle released Block Volume sharing in Reading/Write Mode that will definitely ease design and lower the storage cost too. To read more about it, check out our blog post at https://k21academy.com/1z0107213 that discusses: ✦ Storage options in ☁Oracle Cloud ✦New Feature Updated By […]

The post [New Feature] Share Block Volume with Multiple Instances In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Use Conda to Generate Requirements.txt for Docker Containers

Pakistan's First Oracle Blog - Fri, 2019-12-27 00:01
pip is the standard Python package manager. A requirements.txt file can be generated in one environment and installed by pip in a new environment. Conda, by contrast, replicates its own installation. pip produces a list of the packages that were installed on top of the standard library to make the package you wrote work.

Following are the steps to generate a requirements.txt file to be used inside a Dockerfile for docker containers:



conda activate <env_name>            # go to your project environment (the environment name is yours)

conda list                           # gives you the list of packages installed in the environment

conda list -e > requirements.txt     # saves all the package info to your folder; note: this format is meant for "conda create --file", pip cannot install it

conda env export > environment.yml   # exports the full environment specification for conda

pip freeze > requirements.txt        # produces the pip-compatible list of pinned packages, which is what "pip install -r" expects
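
To close the loop, here is a minimal Dockerfile sketch that consumes a requirements.txt produced by pip freeze (the base image, paths and entry point are assumptions, adjust them to your project):

FROM python:3.8-slim
WORKDIR /app
# Install the pinned dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then copy the application code
COPY . .
CMD ["python", "app.py"]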

Hope that helps.
Categories: DBA Blogs

[AZ-103] Microsoft Azure Administrator Certification Exam: Everything You Need To Know

Online Apps DBA - Thu, 2019-12-26 04:33

AZ-103 | Microsoft Azure Administrator Associate The AZ-100 and AZ-101 certifications are being replaced by the new AZ-103 Microsoft Azure Administrator certification exam. Check out our blog post at https://k21academy.com/az10311 which covers: ▪ What is the AZ-103 Certification? ▪ Who This Certification Is For? ▪ Why Should You Go For It? ▪ Exam Details ▪ […]

The post [AZ-103] Microsoft Azure Administrator Certification Exam: Everything You Need To Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

1Z0-932 V/S 1Z0-1072: Oracle Cloud Infra Architect Associate Certification

Online Apps DBA - Thu, 2019-12-26 01:56

1Z0-932 V/S 1Z0-1072: Oracle Cloud Infra Architect Associate Certification Oracle has recently introduced a new certification for Oracle Cloud Infrastructure Architect Associate i.e. 1Z0-1072 In our FREE Masterclass, https://k21academy.com/1z0107202, we got a lot of questions regarding what is 1Z0-1072 & how is it different from 1Z0-932? Check at https://k21academy.com/1z0107212 which covers: 1. What is 1Z0-1072 […]

The post 1Z0-932 V/S 1Z0-1072: Oracle Cloud Infra Architect Associate Certification appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
