Feed aggregator

A Ruthless Repository Shutdown Utility, Part II

Yann Neuhaus - Wed, 2019-12-18 15:51
Stopping the unreachable repositories

Suppose that the docbroker has been stopped prematurely and that we want to shut down the repositories, but the out-of-the-box dm_shutdown_repository script is not effective. Why is that, by the way? If we look closely inside the shutdown script, we quickly notice the reason:

#!/bin/sh
################## DOCUMENTUM SERVER SHUTDOWN FILE ######################
#
# 1994-2018 OpenText Corporation. All rights reserved
# Version 16.4 of the Documentum Server.
#
# A generated server shutdown script for a repository.
# This file was generated on Fri Aug 30 12:15:10 CEST 2019 by user dmadmin.
#
check_connect_status() {
status=$1
if [ ! $status = 0 ] ; then
  cat <<-END
  ***** $0: ERROR
  ***** Unable to complete shutdown - unable to connect
  ***** to the server to issue $2 request.
END
  exit 1
fi
}
...
# Stop the server
echo Stopping Documentum server for repository: [dmtestgr02]
echo ''
DM_DMADMIN_USER=dmadmin
#
# Get the pid for the root process
#
DM_PID=`./iapi dmtestgr02 -U$DM_DMADMIN_USER -P -e << EOF  | grep 'root_pid' | sed -e 's/ .*[: A-Za-z]//'
apply,s0,NULL,LIST_SESSIONS
next,s0,q0
dump,s0,q0
exit
EOF`
status=$?
check_connect_status $status LIST_SESSIONS
...
            kill -9 $child_pid
...
  kill -9 $DM_PID
...
         kill -9 $child_pid
...

The shutdown script first attempts to connect to the repository in order to retrieve the root pid of the server processes (the iapi call that sets DM_PID above). The result of that attempt is then checked by the function check_connect_status() defined at the top of the script. If something went wrong during the connection, iapi's return status will be != 0 and check_connect_status() will simply exit the script. So, if a repository has gone berserk, or no free sessions are available, or the docbroker is unreachable, the script will not be able to stop it. That logic is quite restrictive, and we must fall back to killing the repository's processes ourselves anyway.
Strangely enough, the script is not afraid of killing processes, which it does in several places; it is rather a bit shy about identifying the right ones and therefore relies on the server itself or, ultimately, on the user for help in this area.
Admittedly, it is not always easy to pinpoint the right processes in the list returned by the ps command, especially if the repository is running in HA on the same machine, or if several repositories share the same machine, so extra care must be taken not to kill the wrong ones. The dm_shutdown_docbase script avoids this difficulty altogether by asking the content server (aka CS) for its root pid, and that is why it aborts when it cannot contact it.
Historically, the “kill” command could only “kill -9” (SIGKILL, the forceful, coercive kill), but nowadays it has been generalized to send signals and could just as well have been renamed “signal” or “send”. So, can a signal be sent to the main executable ${DM_HOME}/bin/documentum to ask it to cleanly shut down the repository? We wish, but this has not been implemented. Signals such as SIGQUIT, SIGTRAP, SIGINT and SIGABRT are indeed trapped, but they will only kill the server after printing the last executed SQL or the call stack trace to the server’s log, e.g. after a SIGINT was sent:

2019-10-11T13:14:14.045467 24429[24429] 0100c35080004101 Error: dm_bear_trap: Unexpected exception, (SIGINT: interrupt: (2) at (Connection Failure)), during new session creation in module dmapply.cxx after line 542. Process exiting.
Last SQL statement executed by DB was:
 
 
Last SQL statement executed by DB was:
Last SQL statement executed by DB was:
 
 
 
 
Last SQL statement executed by DB was:
 
 
(23962) Outer Exception handler caught exception: SIGINT: interrupt: (2) at (RPC MAIN)

Thus, a corruption is theoretically possible while using any of those signals, just as it is when a SIGKILL signal is issued.
According to OTX Support, a trap handler that cleanly shuts down the repository has not been implemented because it would need a session to invoke the shutdown server method. OK, but what if a hidden session were opened at startup time and kept around just for such administrative cases? How about a handler that immediately forces a projection to the available docbrokers instead of waiting for the next checkpoint cycle? As you can see, there are ways to make the shutdown more resilient, but my overall feeling is that there is a lack of willingness to improve the content server.
Therefore, if waiting about 5 minutes for the repository to project to a docbroker is not acceptable, there is no alternative but to kill -9 the repository’s processes, start the docbroker(s) and then the repository. Other signals can work, but not always, and they are not any safer.
In order to use that command, one needs to know the content server’s root pid and, since the CS does not accept any connection at this point, one must get it from another source. Once the root pid is available, it can be given to the kill command with a slight subtlety: in order to include its child processes, the root pid must be negated, e.g.:

# use the standalone /bin/kill command;
$ /bin/kill --signal SIGKILL -12345
# or use bash's kill builtin:
$ command kill -s SIGKILL -12345

This will transitively kill the process with pid 12345 and all the others in the same process group, i.e. the ones it started itself, directly or indirectly.
If a numeric signal is preferred, the equivalent command is:

$ /bin/kill -9 -12345

I leave it to you to decide which one is more readable.
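If in doubt, the target process group can be inspected before sending the signal; here is a quick sanity check, assuming the pid 12345 from the example above:

$ ps -e -o pid,pgid,cmd | gawk '$2 == 12345'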
So now, we need to identify the repository’s root process. Once it is found, we can send the SIGKILL signal to its negated value, which will propagate to all the child processes. Let’s see how to identify this root process.

Identifying the content server’s root process

Ordinarily, the LIST_SESSIONS server method returns a collection containing the root_pid attribute among other valuable information, e.g.:

API> apply,c,NULL,LIST_SESSIONS
...
q0
API> next,c,q0
...
OK
API> dump,c,q0
...
USER ATTRIBUTES
 
  root_start                      : 12/11/2019 22:53:19
  root_pid                        : 25329
  shared_mem_id                   : 2588691
  semaphore_id                    : 0
  session                      [0]: 0100c3508000a11c
                               [1]: 0100c3508000a102
                               [2]: 0100c3508000a101
  db_session_id                [0]: 272
                               [1]: 37
                               [2]: 33
  typelockdb_session_id        [0]: -1
                               [1]: -1
                               [2]: -1
  tempdb_session_ids           [0]: -1
                               [1]: 45
                               [2]: 36
  pid                          [0]: 17686
                               [1]: 26512
                               [2]: 26465
  user_name                    [0]: dmadmin
                               [1]: dmadmin
                               [2]: dmadmin
  user_authentication          [0]: Trusted Client
                               [1]: Password
                               [2]: Trusted Client
  client_host                  [0]: docker
                               [1]: 172.19.0.3
                               [2]: docker
  client_lib_ver               [0]: 16.4.0070.0035
                               [1]: 16.4.0070.0035
                               [2]: 16.4.0070.0035
...

But in our case, the CS is not reachable so it cannot be queried.
An easy alternative is to simply look into the CS’s log:

dmadmin@docker:/app/dctm$ less /app/dctm/dba/log/dmtest.log
 
    OpenText Documentum Content Server (version 16.4.0080.0129  Linux64.Oracle)
    Copyright (c) 2018. OpenText Corporation
    All rights reserved.
 
2019-12-11T22:53:19.757264      25329[25329]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase dmtest attempting to open"
 
2019-12-11T22:53:19.757358      25329[25329]    0000000000000000        [DM_SERVER_I_START_KEY_STORAGE_MODE]info:  "Docbase dmtest is using database for cryptographic key storage"
...

The number 25329 is the root_pid. It can be extracted from the log file as shown below:

$ grep "\[DM_SERVER_I_START_SERVER\]info" /app/dctm/dba/log/dmtest.log | gawk '{if (match($2, /\[[0-9]+\]/)) {print substr($2, RSTART + 1, RLENGTH - 2); exit}}'
25329
# or, more compactly:
gawk '{if (match($0, /\[([0-9]+)\].+\[DM_SERVER_I_START_SERVER\]info/, root_pid)) {print root_pid[1]; exit}}' /app/dctm/dba/log/dmtest.log
25329

The extracted root_pid can be confirmed by the ps command with the options ajxf, which display a nice tree-like view of the running processes, e.g.:

dmadmin@docker:/app/dctm$ ps_pgid 25329
 PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND
    1 25329 25329 25329 ?           -1 Ss    1001   0:01 ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25370 25329 25329 ?           -1 S     1001   0:00  \_ /app/dctm/product/16.4/bin/mthdsvr master 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 25329 dmtest /app/dctm/dba/log
25370 25371 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 0 dmtest /app/dctm/dba/log
25370 25430 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 1 dmtest /app/dctm/dba/log
25370 25451 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 2 dmtest /app/dctm/dba/log
25370 25464 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 3 dmtest /app/dctm/dba/log
25370 25482 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 4 dmtest /app/dctm/dba/log
25329 25431 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25432 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25453 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25465 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25489 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 26439 25329 25329 ?           -1 Sl    1001   0:11  \_ ./dm_agent_exec -docbase_name dmtest.dmtest -docbase_owner dmadmin -sleep_duration 0
25329 26465 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
    1 10112 25329 25329 ?           -1 Rl    1001   0:03 ./dm_agent_exec -docbase_name dmtest.dmtest -docbase_owner dmadmin -trace_level 0 -job_id 0800c3508000218b -log_directory /app/dctm/dba/log -docbase_id 50000

In the output above, the CS for docbase dmtest was started with pid 25329, which is also the value of its pgid. This process then started a few child processes, all with the pgid 25329.
ps_pgid, used in the command above, is a bash function defined in ~/.bashrc as follows:

# returns the lines from ps -ajxf with the given pgid;
# the ps command's header line is printed only if at least 1 entry is found;
function ps_pgid {
   pgid=$1
   ps -ajxf | gawk -v pgid=$pgid 'BEGIN {getline; header = $0; h_not_printed = 1} {if ($3 == pgid) {if (h_not_printed) {print header; h_not_printed = 0}; print}}'
}

The command does not show the method server or the docbroker, as they were started separately from the CS.
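Should they be needed, those other components can be listed separately, e.g. with a rough sketch like the one below (the 'java' pattern is an assumption: it matches the method server's JBoss process, possibly among other java processes):

$ ps -ef | egrep 'dmdocbroker|java' | grep -v grep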
Thus, if we execute the command below:

$ kill --signal SIGKILL -25329

the CS will be killed along with all its child processes, which is exactly what we want.

Putting both commands together, we get:

kill --signal SIGKILL -$(grep "\[DM_SERVER_I_START_SERVER\]info" /app/dctm/dba/log/dmtest.log | gawk '{if (match($2, /\[[0-9]+\]/)) {print substr($2, RSTART + 1, RLENGTH - 2); exit}}')

It may be worth defining a bash function for it too:

function kill_cs {
   repo=$1
   kill --signal SIGKILL -$(grep "\[DM_SERVER_I_START_SERVER\]info" /app/dctm/dba/log/${repo}.log | gawk '{if (match($2, /\[[0-9]+\]/)) {print substr($2, RSTART + 1, RLENGTH - 2); exit}}')
}
 
# source it:
. ~/.bashrc
 
# call it:
kill_cs dmtest

where dmtest is the content server to kill.
The naive way of looking for the running content server via the command “ps -ef | grep docbase_name” can be too ambiguous in case of multiple content servers for the same repository (e.g. in a high-availability installation) or when docbase_name is the stem of a family of docbases (e.g. dmtest_1, dmtest_2, …, dmtest_10, etc.). Besides, even if no ambiguity were possible, it would return too many processes to be killed individually. xargs could do it at once, sure, but why risk killing the wrong ones? The ps_pgid function above looks directly for the given group id, which is the root_pid of the content server of interest taken straight out of its log file; no ambiguity there.

Hardening start-stop.sh

This ruthless kill functionality could be added to the start-stop script presented in Part I, either as a command-line option of the stop parameter (say, -k, as in the dm_shutdown_repository script) or as a full parameter on a par with the stop | start | status ones, i.e.:

start-stop.sh stop | start | status | kill ...

or, simply, by deciding that a stop should always succeed and forcing a kill if needed. In that variant, the stop_docbase() function becomes:

stop_docbase() {
   docbase=$1
   echo "stopping $docbase"
   ./dm_shutdown_${docbase}
   if [[ $? -eq 1 ]]; then
      echo "killing docbase $docbase"
      kill_cs $docbase
   fi
   echo "docbase $docbase stopped"
}
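Alternatively, if the kill action is exposed as its own verb on a par with stop | start | status, a minimal sketch could reuse the kill_cs() function from above (assuming it is defined in, or sourced by, the script) and plug into the existing dispatcher:

kill_docbase() {
   docbase=$1
   echo "killing docbase $docbase"
   kill_cs $docbase
   echo "docbase $docbase killed"
}

kill_all_docbases() {
   echo "killing the repositories ..."
   DOCBASES=`ls -1 config 2>/dev/null`
   nb_items=0
   for docbase in $DOCBASES; do
      kill_docbase $docbase
      (( nb_items++ ))
   done
   echo "$nb_items repositories killed"
}

# in the main case statement, extend the accepted verbs, e.g.:
#    start|stop|status|kill)
#       cmd=$1
#       ...

It would then be invoked like the other verbs, e.g. start-stop.sh kill docbase=dmtest.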

Conclusion

If the content server were open source, we wouldn’t have this article’s title. Instead, it would be “Forcing impromptu projections to docbrokers through signal handling in the content server: an implementation” or “Shutting down a content server by sending a signal: a proposal”. We could send this request to the maintainers and probably receive a positive answer. Or we could implement the changes ourselves and submit them as an RFC. This model does not work so well with closed, commercial source code, which evolves following its own marketing agenda. Nonetheless, this situation gives us the opportunity to rant about it and to find work-arounds. Imagine a world where all software were flawless: would it be as much fun?

The post A Ruthless Repository Shutdown Utility, Part II appeared first on Blog dbi services.

A Ruthless Repository Shutdown Utility, Part I

Yann Neuhaus - Wed, 2019-12-18 13:16

You have finally completed that migration and need to restart all the Documentum processes. So, you shut down the docbroker and move on to the repositories, but then you receive an error message about them not being reachable any more. Or, conversely, you want to start all the Documentum processes and you start the repositories first and the docbrokers later. Next, you want to connect to one repository and you receive the same error message. Of course, you finally remember that, since the docbroker is a requirement for the repositories, it must be started first and shut down last, but it is too late now. How to get out of this annoying situation? You could just (re)start the docbroker and wait for the next repositories’ checkpoint, at most 5 minutes by default. If this is not acceptable, at first sight there is no other solution than to “kill -9” the repositories’ processes, start the docbroker and only then the repositories. Let’s see if we can find a better way. Spoiler alert: to cut this insufferable suspense short, I must say up front that there is no other way, sorry, but there are a few ways to alleviate this inconvenience.

A quick clarification

Let’s first clarify a point of terminology: there is a difference between docbases/repositories and content servers. A docbase encompasses the actual content with its persistent data and technical information, whereas the content server is the set of running processes that give access to and manage one docbase. It is very similar to Oracle’s databases and instances, where one database can be served by several instances, providing parallelism and high availability. A docbase can be served by more than one content server, generally spread over different machines, each with its own set of dm_start_docbase and dm_shutdown_docbase scripts and its own server.ini. A docbase knows how many content servers use it because each of them has its own dm_server_config object. If there is just one content server, “docbase” and “content server” can be used interchangeably, but when there are several content servers for the same docbase, saying “stopping the docbase” really means “stopping one particular content server”, and this is the meaning used in the rest of the article. If the docbase has more than one content server, just extend the presented manipulations to each of them.

Connecting to the repositories without a docbroker

If one could connect to a repository without a running docbroker, the situation that triggered this article would be much easier to handle. In the ancient, simpler times, the dmcl.ini parameters below could help to work around an unavailable docbroker:

[DOCBROKER_DEBUG]
docbase_id = <id of docbase as specified in its server.ini file>
host =  <host's name the docbase server is running on>
port = <docbase's port as specified in /etc/services>
service = <docbase's service name as specified in /etc/services>

and they used to work.
After the switch to the dfc.properties file, those parameters were renamed as follows:

dfc.docbroker.debug.docbase_id=<id of docbase as specified in its server.ini file>
dfc.docbroker.debug.host=<host's name the docbase server is running on>
dfc.docbroker.debug.port=<docbase's port as specified in /etc/services>
dfc.docbroker.debug.service=<docbase's service name as specified in /etc/services>

Unfortunately, they don’t work any more. Actually, although they are still documented in dfcfull.properties, they have not been implemented and never will be, according to OTX. Moreover, they will be removed in the future. Too bad; that would have been such a cheap way to extricate oneself from an uncomfortable situation.

Preventing the situation

The best solution is obviously to prevent the situation from happening in the first place. This can easily be achieved by using a central script for stopping and starting the Documentum stack and, while we are at it, for querying its status.
Documentum already provides such a script (see, for example, Linux scripts for automatic startup and shutdown of Documentum Content Server). Here is another, more sophisticated implementation:

#!/bin/bash
#
# See Usage() function below for explanations; 
# cec - dbi-services - April 2019
#

general_status=0

Usage() {
   cat <<EoU
Usage:
    start-stop.sh [(help) | start | stop | status] [(all) | docbases | docbrokers | docbase={,} | docbroker={,} | method_server]
 E.g.:
    display this help screen:
       start-stop.sh
    start all:
       start-stop.sh start [all]
    stop all:
       start-stop.sh stop [all]
    status all:
       start-stop.sh status [all]
    start docbroker01:
       start-stop.sh start docbroker=docbroker01
    start docbases global_registry and dmtest01:
       start-stop.sh start docbase=global_registry,dmtest01
    start all the docbases:
       start-stop.sh start docbases
    start all the docbrokers:
       start-stop.sh start docbrokers
EoU
}

start_docbroker() {
   docbroker=$1
   echo "starting up docbroker $docbroker ..."
   ./dm_launch_${docbroker}
}

start_all_docbrokers() {
   echo "starting the docbrokers ..."
   DOCBROKERS=`ls -1 dm_launch_* 2>/dev/null | cut -f3 -d_`
   nb_items=0
   for docbroker in $DOCBROKERS; do
      start_docbroker $docbroker
      (( nb_items++ ))
   done
   echo "$nb_items docbrokers started"

}

start_docbase() {
   docbase=$1
   echo "starting $docbase"
   ./dm_start_${docbase}
}

start_all_docbases() {
   echo "starting the repositories ..."
   DOCBASES=`ls -1 config 2>/dev/null `
   nb_items=0
   for docbase in $DOCBASES; do
      start_docbase $docbase
      (( nb_items++ ))
   done
   echo "$nb_items repositories started"
}

start_method_server() {
   echo "starting the method server ..."
   cd ${DOCUMENTUM}/${JBOSS}/server
   nohup ${DOCUMENTUM}/${JBOSS}/server/startMethodServer.sh > /tmp/nohup.log 2>&1 &
   echo "method server started"
}

start_all() {
   echo "starting all the documentum processes ..."
   start_all_docbrokers
   start_all_docbases
   start_method_server
}

status_docbroker() {
   docbroker_name=$1
   docbroker_host=$(grep "^host=" /app/dctm/dba/dm_launch_${docbroker_name} | cut -d= -f2)
   docbroker_port=$(grep "dmdocbroker -port " /app/dctm/dba/dm_launch_${docbroker_name} | cut -d\  -f3)
   dmqdocbroker -t $docbroker_host -p $docbroker_port -c ping 2> /dev/null 1> /dev/null
   local_status=$?
   if [ $local_status -eq 0 ]; then
      echo "$(date +"%Y/%m/%d %H:%M:%S"): successfully pinged docbroker $docbroker_name listening on port $docbroker_port on host $docbroker_host"
   else
      echo "$(date +"%Y/%m/%d %H:%M:%S"): docbroker $docbroker_name listening on port $docbroker_port on host $docbroker_host is unhealthy"
      general_status=1
   fi
   echo "status for docbroker $docbroker_name:$docbroker_port: $local_status, i.e. $(if [[ $local_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_all_docbrokers() {
   DOCBROKERS=`ls -1 dm_launch_* 2>/dev/null | cut -f3 -d_`
   DOCBROKERS_PORTS=`grep -h "./dmdocbroker" dm_launch_* | cut -f3 -d\ `
   for f in `ls -1 dm_launch_* 2>/dev/null `; do
      docbroker_name=`echo $f | cut -f3 -d_`
      docbroker_port=`grep "./dmdocbroker" $f | cut -f3 -d\ `
      status_docbroker $docbroker_name $docbroker_port
   done
   echo "general status for all docbrokers: $general_status, i.e. $(if [[ $general_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_docbase() {
   docbase=$1
   timeout --preserve-status 30s idql $docbase -Udmadmin -Pxx 2> /dev/null 1> /dev/null <<eoq
     quit
eoq
   local_status=$?
   if [[ $local_status -eq 0 ]]; then
      echo "$(date +"%Y/%m/%d %H:%M:%S"): successful connection to repository $docbase"
   else
      echo "$(date +"%Y/%m/%d %H:%M:%S"): repository $docbase is unhealthy"
      general_status=1
   fi
   echo "status for docbase $docbase: $local_status, i.e. $(if [[ $local_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_all_docbases() {
   DOCBASES=`ls -1 config 2>/dev/null `
   for docbase in $DOCBASES; do
      status_docbase $docbase
   done
   echo "general status for all docbases: $general_status, i.e. $(if [[ $general_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_method_server() {
   # check the method server;
   curl --silent --fail -k http://${HOSTNAME}:9080/DmMethods/servlet/DoMethod 2>&1 > /dev/null
   local_status=$?
   if [ $local_status -eq 0 ]; then
      echo "$(date +"%Y/%m/%d %H:%M:%S"): method server successfully contacted"
   else
      echo "$(date +"%Y/%m/%d %H:%M:%S"): method server is unhealthy"
      general_status=1
   fi
   echo "status for method_server: $local_status, i.e. $(if [[ $local_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_all() {
   status_all_docbrokers
   status_all_docbases
   status_method_server
   echo "General status: $general_status, i.e. $(if [[ $general_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

stop_docbase() {
   docbase=$1
   echo "stopping $docbase"
   ./dm_shutdown_${docbase}
   echo "docbase $docbase stopped"
}

stop_all_docbases() {
   echo "stopping the repositories ..."
   DOCBASES=`ls -1 config 2>/dev/null `
   nb_items=0
   for docbase in $DOCBASES; do
      stop_docbase $docbase
      (( nb_items++ ))
   done
   echo "$nb_items repositories stopped"
}

stop_docbroker() {
   docbroker=$1
   echo "stopping docbroker $docbroker ..."
   ./dm_stop_${docbroker}
   echo "docbroker $docbroker stopped"
}

stop_all_docbrokers() {
   echo "stopping the docbrokers ..."
   DOCBROKERS=`ls -1 dm_stop_* 2>/dev/null | cut -f3 -d_`
   nb_items=0
   for docbroker in $DOCBROKERS; do
      stop_docbroker $docbroker
      (( nb_items++ ))
   done
   echo "$nb_items docbrokers stopped"
}

stop_method_server() {
   echo "stopping the method server ..."
   ${DOCUMENTUM}/${JBOSS}/server/stopMethodServer.sh
   echo "method server stopped"
}

stop_all() {
   echo "stopping all the documentum processes ..."
   stop_all_docbases
   stop_method_server
   stop_all_docbrokers
   echo "all documentum processes stopped"
   ps -ajxf | egrep '(PPID|doc|java)' | grep -v grep | sort -n -k2,2
}

# -----------
# main;
# -----------
   [[ -f ${DM_HOME}/bin/dm_set_server_env.sh ]] && . ${DM_HOME}/bin/dm_set_server_env.sh
   cd ${DOCUMENTUM}/dba
   if [[ $# -eq 0 ]]; then
      Usage
      exit 0
   else
      while [[ $# -ge 1 ]]; do
         case $1 in
	    help)
	       Usage
	       exit 0
	    ;;
            start|stop|status)
	       cmd=$1
	       shift
	       if [[ -z $1 || $1 = "all" ]]; then
	          ${cmd}_all
	       elif [[ $1 = "docbases" ]]; then
	          ${cmd}_all_docbases
	       elif [[ $1 = "docbrokers" ]]; then
	          ${cmd}_all_docbrokers
	       elif [[ ${1%%=*} = "docbase" ]]; then
	          docbases=`echo ${1##*=} | gawk '{gsub(/,/, " "); print}'`
                  for docbase in $docbases; do
	             ${cmd}_docbase $docbase
	          done
	       elif [[ ${1%%=*} = "docbroker" ]]; then
	          docbrokers=`echo ${1##*=} | gawk '{gsub(/,/, " "); print}'`
                  for docbroker in $docbrokers; do
	             ${cmd}_docbroker $docbroker
	          done
	       elif [[ $1 = "method_server" ]]; then
                  ${cmd}_method_server
               fi
               exit $general_status
            ;;
            *)
               echo "syntax error"
	       Usage
	       exit 1
	    ;;
         esac
         shift
      done
   fi

See the Usage() function at the top of the script for how to invoke it.
Note, in status_docbase(), the timeout command used when attempting to connect to a docbase to check its status; see the article Adding a timeout in monitoring probes for an explanation.
We couldn’t help adding the option to address each component individually, or just a few of them, in addition to all of them at once. So the script lets us stop, start and query the status of one particular docbroker, docbase or method server, of a list of docbrokers or docbases, or of everything at once.
After a maintenance task, to stop all the Documentum processes, the command below could be used:

$ start-stop.sh stop all

Similarly, to start everything:

$ start-stop.sh start all

Thus, the proper order is guaranteed and human error is prevented. By standardizing on such a script and using it as shown, the aforementioned problem won’t occur anymore.

That is fine, but if we didn’t use the script and find ourselves in the situation where no docbroker is running and we must shut down the repositories, is there a way to do it easily and cleanly? Well, easily, certainly, but cleanly, no. Please continue reading in Part II.

The post A Ruthless Repository Shutdown Utility, Part I appeared first on Blog dbi services.

Datapump Import Partitioned Tables ORA-00600 qesmaGetPamR-NullCtx

Bobby Durrett's DBA Blog - Wed, 2019-12-18 10:28

I have not yet had time to build a test case and prove this out, but I wanted to document one last bug that we found so far in our 11.2.0.4 to 19c upgrade. We tried copying a bunch of partitioned tables on our source database to the new one using Datapump Import (impdp) over a database link. We got a boatload of errors like this:

ORA-00600: internal error code, arguments: [qesmaGetPamR-NullCtx], 

There are many Oracle bugs like this, but they seem to have been fixed in 11.2.0.4. For example:

Bug 12591399 – ORA-600[qesmagetpamr-nullctx] / ORA-14091 with distributed query with local partition table (Doc ID 12591399.8)

Puzzling. We ended up just exporting to disk and that has worked well, so no big deal, but I wonder if this is some sort of regression of a fixed bug.

Anyway, I am off for the rest of the year. This should be my last post unless I mess with Nethack over vacation and post something about that. I hope everyone out there has a good new year.

Bobby

P.S. Created a simple partitioned table with 2 partitions and 100 rows in each one. I got the error importing over a link from 11.2.0.4 to 19c. It worked perfectly going from 11.2.0.4 to 11.2.0.4. Same source table. Parfile:

$ cat bobby_link_test.par
userid=MYUSER/MYPASSWORD
JOB_NAME=BOBBY_TEST
DIRECTORY=BOBBY_DIR
NETWORK_LINK=MYLINK
LOGFILE=bobby_link_test.log
tables=TEST

Table:

CREATE TABLE test
(
  PART_COL              NUMBER,
  data                  NUMBER
)
PARTITION BY RANGE (PART_COL)
(  
  PARTITION PART_COL_1 VALUES LESS THAN (100),  
  PARTITION PART_COL_2 VALUES LESS THAN (200)
)
;
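For completeness, the network import over the link is then driven entirely by the parfile above, e.g.:

$ impdp parfile=bobby_link_test.par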

PPS. Works fine going from 11.2.0.4 to 18c. Going to try a different 19c database just to be sure it isn’t the one that has the problem.

PPPS. Definitely a 19c bug. It fails on two different 19c databases but not on 18c. In every case source is same 11.2.0.4 database and same small partitioned table. Does anyone have time to file the bug report?

Categories: DBA Blogs

Wait for Java

Jonathan Lewis - Wed, 2019-12-18 03:59

This is a note courtesy of Jack van Zanen on the Oracle-L list server, who asked a question about “wait for CPU” and then produced the answer a couple of days later. It’s a simple demonstration of how Java in the database can be very deceptive in terms of indicating CPU usage that isn’t really CPU usage.

Bottom line – when you call Java Oracle knows you’re about to start doing some work on the CPU, but once you’re inside the java engine Oracle has no way of knowing whether the java code is on the CPU or waiting. So if the java starts to wait (e.g. for some slow file I/O) Oracle will still be reporting your session as using CPU.

To demonstrate the principle, I’m going to create a little java procedure that simply goes to sleep, and see what I find in the active session history (ASH) after I’ve been sleeping in java for 10 seconds.

rem
rem     Script:         java_wait_for_cpu.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Based on an email from Jack van Zanen to Oracle-L
rem

set time on

create or replace procedure milli_sleep(i_milliseconds in number) 
as 
        language java
        name 'java.lang.Thread.sleep(int)';
/

set pagesize 60
set linesize 132
set trimspool on

column sample_time format a32
column event       format a32
column sql_text    format a60
column sql_id      new_value m_sql_id

set echo on
execute milli_sleep(1e4)

select 
        sample_time, sample_id, session_state, sql_id, event 
from 
        v$active_session_history
where 
        session_id = sys_context('userenv','sid')
and     sample_time > sysdate - 1/1440 
order by 
        sample_time
;

select sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text from v$sql where sql_id = '&m_sql_id';

I’ve set timing on and set echo on so that you can see when my code starts and finishes and correlate it with the report from v$active_session_history for my session. Since I’ve reported the last minute you may find some other stuff reported before the call to milli_sleep() but you should find that you get a report of about 10 seconds “ON CPU” even though your session is really not consuming any CPU at all. I’ve included a report of the SQL that’s “running” while the session is “ON CPU”.

Here (with a little edit to remove the echoed query against v$active_session_history) are the results from a run on 12.2.0.1 (and the run on 19.3.0.0 was very similar):


Procedure created.

18:51:17 SQL> execute milli_sleep(1e4)

PL/SQL procedure successfully completed.

SAMPLE_TIME                       SAMPLE_ID SESSION SQL_ID        EVENT
-------------------------------- ---------- ------- ------------- --------------------------------
16-DEC-19 06.51.11.983 PM          15577837 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.12.984 PM          15577838 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.13.985 PM          15577839 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.14.985 PM          15577840 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.15.986 PM          15577841 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.16.996 PM          15577842 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.17.995 PM          15577843 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.18.999 PM          15577844 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.20.012 PM          15577845 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.21.018 PM          15577846 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.22.019 PM          15577847 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.23.019 PM          15577848 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.24.033 PM          15577849 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.25.039 PM          15577850 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.26.047 PM          15577851 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.27.058 PM          15577852 ON CPU  4jt6zf4nybawp

16 rows selected.

18:51:27 SQL>
18:51:27 SQL> select sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text from v$sql where sql_id = '&m_sql_id';

SQL_ID          CPU_TIME    ELAPSED SQL_TEXT
------------- ---------- ---------- ------------------------------------------------------------
4jt6zf4nybawp       .004     10.029 BEGIN milli_sleep(1e4); END;


As you can see I had a statement executing for a few seconds before the call to milli_sleep(), but then we see milli_sleep() “on” the CPU for 10 consecutive samples; but when the sleep ends the query for actual usage shows us that the elapsed time was 10 seconds but the CPU usage was only 4 milliseconds.

 

OT Footnote

I’ve decided this year to donate to a charity that works to reduce child mortality rates in Nepal with a two-pronged attack on malnutrition: feeding starving children, then educating their parents on how to make the best use of local resources to grow the most appropriate crops and use the best preparation methods to produce nourishing meals in the future. (They also run other projects to improve the lives of young people in Nepal; here’s a link to their home page, and a direct link to a 4 minute video that gives you a quick insight into what they do and how they do it.)

If you’re thinking of making any small donations to charity over the next few weeks, please think of this one. To make your donation more valuable I’ve set up a justgiving page and will match any donations made, up to a total of £1,000.

Thank you.

Spring Boot JPA project riff function demo

Pas Apicella - Tue, 2019-12-17 22:09
riff is an Open Source platform for building and running Functions, Applications, and Containers on Kubernetes. For more information visit the project riff home page https://projectriff.io/

riff supports running containers using Knative serving which in turn provides support for
  •     0-N autoscaling
  •     Revisions
  •     HTTP routing using Istio ingress
Want to try an example? If so, head over to the following GitHub project, which shows, step by step, how to run a Spring Data JPA function with riff on a GKE cluster when required:

https://github.com/papicella/SpringDataJPAFunction


More Information

1. Project riff home page
https://projectriff.io/

2. Getting started with riff
https://projectriff.io/docs/v0.5/getting-started

Categories: Fusion Middleware

Oracle Database 19c Automatic Indexing – Indexed Column Reorder (What Shall We Do Now?)

Richard Foote - Tue, 2019-12-17 18:49
  I previously discussed how the default column order of an Automatic Index (in the absence of other factors) is based on the Column ID, the order in which the columns are defined in the table. But what if there are “other factors” based on new workloads and the original index column order is no […]
Categories: DBA Blogs

db_securefile PREFERRED results in ORA-60019 with small uniform extents

Bobby Durrett's DBA Blog - Tue, 2019-12-17 17:35

Last 19c upgrade issue. Working on our new 19c database, several things died off with errors like this:

SQL> execute DBMS_STATS.CREATE_STAT_TABLE ('MYSCHEMA','MYSTATTAB','MYTS');
BEGIN DBMS_STATS.CREATE_STAT_TABLE ('MYSCHEMA','MYSTATTAB','MYTS'); END;

*
ERROR at line 1:
ORA-60019: Creating initial extent of size 14 in tablespace of extent size 8
ORA-06512: at "SYS.DBMS_STATS", line 20827
ORA-06512: at "SYS.DBMS_STATS", line 20770
ORA-06512: at "SYS.DBMS_STATS", line 20765
ORA-06512: at line 1

Our tablespaces had small uniform extents and our 19c database had defaulted the parameter db_securefile to PREFERRED. We bumped our uniform extent sizes up to 1 megabyte and the problem went away. Setting db_securefile to PERMITTED also resolved the issue.
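As a sketch of those two workarounds (the tablespace name, datafile path and sizes are made up; adapt them to your environment):

-- option 1: use a tablespace whose uniform extents are at least 1 MB
CREATE TABLESPACE MYTS DATAFILE '/u01/oradata/MYDB/myts01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- option 2: revert db_securefile to its pre-19c default behaviour
ALTER SYSTEM SET db_securefile = 'PERMITTED' SCOPE=BOTH;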

Oracle’s support site has a bunch of good information about this. This might be a relevant bug:

Bug 9477178 : ORA-60019: CREATING INITIAL EXTENT OF SIZE X IN TABLESPACE FOR SECUREFILES

Bobby

Categories: DBA Blogs

Datapump Import Fails on Tables With Extended Statistics

Bobby Durrett's DBA Blog - Tue, 2019-12-17 17:11

Quick post before I leave on vacation. We used Datapump to import a schema from an 11.2 HP-UX database to a 19c Linux database and got errors on a few tables like these:

ORA-39083: Object type TABLE:"MYSCHEMA"."TEST" failed to create with error:
ORA-00904: "SYS_STU0S46GP2UUQY#45F$7UBFFCM": invalid identifier

Failing sql is:
ALTER TABLE "MYSCHEMA"."TEST"  MODIFY ("SYS_STU0S46GP2UUQY#45F$7UBFFCM" NUMBER GENERATED
ALWAYS AS (SYS_OP_COMBINED_HASH("COL1","COL2","COL3")) VIRTUAL )

The workaround was to create the table first, empty, with no indexes, constraints, etc., and then import. Today I was trying to figure out why this happened. Apparently, the table has extended statistics on the three primary key columns. I found a post by Jonathan Lewis that shows a virtual column like the one this table has as a result of its extended statistics. The error occurs on the Datapump import (impdp) of the table that has extended statistics. This error is similar to some documented Oracle issues such as:

DataPump Import (IMPDP) Raises The Errors ORA-39083 ORA-904 Due To Virtual Columns Dependent On A Function (Doc ID 1271176.1)

But I could not immediately find something that says that extended statistics cause a table to not be importable using Datapump impdp.

If you want to recreate the problem, try adding extended stats like this (which I derived from Jonathan Lewis’s post):

select dbms_stats.create_extended_stats(NULL,'TEST','(COL1, COL2, COL3)') name from dual;

select * from user_tab_cols where table_name='TEST';

Then export table from 11.2 and import to 19c database using datapump. Anyway, posting here for my own memory and in case others find it useful. Maybe this is a bug?
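For completeness, the export/import step mentioned above could look like this (a sketch only; the directory object, credentials and dump file name are hypothetical):

# on the 11.2 source
$ expdp myschema/mypassword directory=DP_DIR dumpfile=test_extstats.dmp tables=TEST

# on the 19c target, where the ORA-39083 / ORA-00904 errors show up
$ impdp myschema/mypassword directory=DP_DIR dumpfile=test_extstats.dmp tables=TEST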

Bobby

Categories: DBA Blogs

Merge Always Updates Sequence Number

Bobby Durrett's DBA Blog - Tue, 2019-12-17 11:55

This is nothing new, but I wanted to throw out a quick post to document it. If you have a sequence.nextval in the insert part of a merge statement, the merge calls nextval for all the updated rows as well.

Oracle has a bug report about this from a 9.2 issue, so this is nothing new:

Bug 6827003 : SEQUENCE # IN MERGE BEING UPDATED FOR BOTH INSERT AND UPDATE

I created a couple of testcases if you want to try them: sequencewithmerge.zip

Oracle’s bug report says you can work around the issue by encasing the sequence.nextval call in a function, so I tried it and it works.
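A minimal sketch of that workaround (table, sequence and function names are invented; my actual test cases are in the zip above):

CREATE OR REPLACE FUNCTION next_seq_val RETURN NUMBER IS
BEGIN
   RETURN test_seq.NEXTVAL;
END;
/

MERGE INTO target t
USING source s
ON (t.id = s.id)
WHEN MATCHED THEN
   UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN
   -- nextval is now fetched only when a row is actually inserted
   INSERT (id, seq_no, val)
   VALUES (s.id, next_seq_val, s.val);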

Anyway, you can’t count on the sequence only being advanced on inserted rows with merge statements if you include sequence.nextval in the insert part of the merge statement.

Bobby

Categories: DBA Blogs

Oracle Recognized as Leader in Risk Management

Oracle Press Releases - Tue, 2019-12-17 07:00
Press Release
Oracle Recognized as Leader in Risk Management Chartis Research cites Oracle for core technology and overall strategy in managing risk and compliance

Redwood Shores, Calif.—Dec 17, 2019

For the fourth consecutive year, Oracle Financial Services has been ranked in the top three of the annual Chartis RiskTech100®. Compiled by industry research group Chartis Research, RiskTech100® ranks the world’s top 100 providers of risk management and compliance technology solutions that meet the needs of both financial and non-financial organizations.

Oracle ranked first in the categories of Core Technology, which looks at a vendor’s overall technology stack by benchmarking it against the latest best practices, and Data Integrity and Control, which reviews a vendor’s ability to maintain data quality, a key differentiator for Oracle. Oracle also received an honorable mention in the categories of asset and liability management (ALM), financial crime/anti-money laundering (AML), risk and finance integration, and risk data aggregation and reporting.

This announcement comes after Oracle was ranked as a category leader in the Chartis RiskTech Quadrant® for AML/watchlist monitoring solutions earlier this year.

“Each year our research highlights how vendors’ strategies are changing to address ongoing developments in RiskTech,” said Rob Stubbs, Head of Research at Chartis. “Oracle’s top three placing, and its two award wins this year, reflect its continued commitment to the market.”

A pioneer in modern risk and finance, Oracle Financial Services Analytical Applications product suite is used by many Global Systemically Important Financial Institutions (SIFI).

In September 2019, AsiaRisk (owned by Infopro Digital SAS) ranked Oracle’s ALM solution as Product of the Year for its value add to end users from an innovation and risk perspective. AsiaRisk specifically noted its ability to help bank officers across the organization gain a better understanding of the risks they have assumed and sensitivity in economic conditions.

“Today’s financial institutions are grappling with how to leverage new technology, manage an enormous amount of data, and meet increasingly complex regulatory and compliance requirements,” said Sonny Singh, senior vice president and general manager, Oracle Financial Services. “Our ranking re-affirms our strategy and vision to help organizations adapt, and we remain committed to continuously strengthening our capabilities.”

Now in its 14th year, RiskTech100® is a comprehensive independent study of the world’s major players in risk and compliance technology, and it serves as a valuable assessment and benchmarking tool for all participants in risk technology markets. The rankings in the report reflect Chartis analysts’ expert opinions, along with research into market trends, participants, expenditure patterns and best practices. The analysis is validated through several phases of independent verification.

More information and a copy of the full report can be found on Chartis’ website here.

Contact Info
Judi Palmer
Oracle
+1 650 607 6598
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Chartis

Chartis is a research and advisory firm that provides technology and business advice to the global financial services industry. Chartis provides independent market intelligence on market dynamics, regulatory trends, technology trends, best practices, competitive landscapes, market sizes, expenditure priorities, and mergers and acquisitions. Chartis’ RiskTech Quadrant® reports are written by experienced analysts with hands-on experience of selecting, developing and implementing financial technology solutions for a variety of international companies in a range of industries including banking, insurance and capital markets.

About Oracle Financial Services 

Oracle Financial Services Global Business Unit provides clients in more than 140 countries with an integrated, best-in-class, end-to-end solution of intelligent software and powerful hardware designed to meet every financial service need. Our market leading platforms provide the foundation for banks and insurers’ digital and core transformations and we deliver a modern suite of Analytical Applications for Risk, Finance Compliance and Customer Insight. For more information, visit our website at https://www.oracle.com/industries/financial-services/index.html.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 607 6598

Brian Pitts

  • +1 312 475 5921

Oracle Achieves DISA Impact Level 5 Provisional Authorization for Oracle Cloud Infrastructure

Oracle Press Releases - Tue, 2019-12-17 07:00
Press Release
Oracle Achieves DISA Impact Level 5 Provisional Authorization for Oracle Cloud Infrastructure U.S. Federal and Department of Defense customers now can benefit from the power of Oracle’s Generation 2 Cloud infrastructure

Redwood Shores, Calif.—Dec 17, 2019

Following closely on the heels of Oracle achieving FedRAMP authorization, Oracle today announced three new government regions: Ashburn, Virginia; Phoenix, Arizona; and Chicago, Illinois. These regions have achieved DISA Impact Level 5 provisional authorization (IL5 PATO), providing a cloud environment where U.S. Department of Defense (DoD) and other Federal customers can harness the power of Oracle Cloud to unlock innovation, improve mission performance, and enhance service delivery.  

This is an important milestone in Oracle’s journey to deliver innovative cloud services with consistent high performance and exceptional security to the entire U.S. government. In 2020, Oracle plans to bring additional full-scale Gen 2 Cloud Regions online to support the classified missions of the US Government.

“U.S. DoD and other Federal customers are continually looking for new, secure ways to improve citizen services and keep our nation safe,” said Don Johnson, executive vice president, Oracle Cloud Infrastructure. “Oracle’s Generation 2 Cloud was engineered to deliver highly secure, high-performance, cost effective infrastructure that helps government organizations address the needs of the nation today and tomorrow.” 

Oracle has been a long-standing strategic technology partner of the U.S. government. Today, more than 500 government organizations take advantage of Oracle’s industry-leading technologies and superior performance. State, local, and federal government customers using Oracle to modernize their technology include Defense Manpower Data Center (DMDC) and the U.S. Air Force.

The Department of Defense recently awarded a contract to Oracle for its Oracle Cloud Infrastructure to support a large portion of the enterprise human resource portfolio. The award modernizes existing infrastructure and will assist the Defense Manpower Data Center in providing necessary human resource services and capabilities to its military members, veterans and their families.

With Oracle Cloud Infrastructure, customers benefit from best-in-class security, consistent high performance, simple predictable pricing, and the tools and expertise needed to bring enterprise workloads to cloud quickly and efficiently. In addition, Oracle now provides organizations with a complete set of solutions for any high performance computing (HPC) workload, enabling businesses to capitalize on the benefits of modern cloud computing while enjoying performance comparable to on-premises at a lower cost.

“Oracle Cloud Infrastructure brings incredible performance, flexibility, security, and cost-savings benefits to our federal civilian, commercial and higher education customers,” said Paul Seifert, Federal Sector President, Mythics, Inc. “Mythics’ DoD customers will now be able to leverage Oracle Cloud to better serve the unique requirements of the DoD at home and abroad.”

Oracle Cloud Infrastructure has achieved certifications and attestations for key security standards and compliance mandates. These independent third-party assurance programs demonstrate Oracle’s commitment to security and to meeting the needs of the public sector. These IL5 PATO government regions will launch with initial Oracle services including VM and Bare Metal Compute (CPU and GPU), Storage (including archive, block, and object storage), Database, Identity and Access Management, Key Management Service, Load Balancer, and Exadata Cloud Service.

“We’re excited to see the Oracle Cloud Infrastructure achieve DISA Impact Level 5 provisional authorization, as it provides additional options that our federal government clients—especially those seeking to migrate large and complex Oracle-based solutions to the cloud—can leverage,” said Anthony Flake, managing director of Accenture Federal Services’ Oracle practice.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Nicole Maloney

  • +1.650.506.0806

Driver Assistance Leader Hits the Road with Oracle

Oracle Press Releases - Tue, 2019-12-17 07:00
Press Release
Driver Assistance Leader Hits the Road with Oracle Agero supports nearly half of passenger vehicles on the road with Oracle SD-WAN

Redwood Shores, Calif.—Dec 17, 2019

Agero, a leader in the digitalization of white-labeled driver assistance services, is using Oracle enterprise communications technologies to safeguard drivers in more than 115 million vehicles in the U.S. When a driver is stranded roadside, time is of the essence. With Oracle, the company has achieved continuous contact center uptime, so customers can get the assistance they need and are back on the road as quickly as possible.
 
For 45 years, Agero has provided smart solutions for its clients and their drivers. Today, Agero’s  roadside assistance, accident management and consumer affairs services are leveraged in the U.S. by drivers in two-thirds of new passenger vehicles, policyholders from nine of the top 15 auto insurance carriers and customers of a variety of other diversified clients. To maximize the quality of its customers’ experiences and to eliminate communication failures or downtime, Agero needed predictable enterprise communications performance and real-time application support. Agero selected Oracle for the failsafe reliability, security and interoperability.
 
“When drivers are stranded on the road in the winter, creating not only an uncomfortable situation but also a health and safety issue, it is crucial for our customers to get directly in contact with an agent,” said Robert Sullivan, vice president, technology and shared services, Agero. “Oracle has the best of breed technology in the Oracle SD-WAN and Enterprise Session Border Controllers, which have helped us to deliver high availability, reliability and quality of experience for our customers.”
 
Since working with Oracle and Presidio to implement the Oracle SD-WAN solution, Agero has drastically reduced downtime and created special routing situations for sites without substantial circuit diversity. This advanced, “always-on performance” is invaluable to Agero as the company processes more than 12 million roadside and emergency support requests per year.
 
“Agero is reinventing how driver assistance is delivered, while elevating the consumer experience in often dire scenarios. As enterprises demand flexible WAN solutions supporting shifting business requirements, Oracle is increasing and leveraging bandwidth for affordable and trusted WAN connectivity, anywhere and whenever it’s needed,” said Andrew Morawski, senior vice president and general manager, Oracle Communications - Networks.
 
In addition to the Oracle SD-WAN, Agero deployed the Oracle 1100 Enterprise Session Border Controller (E-SBC) Oracle 3900 E-SBC and the Oracle Communications Converged Application Server (OCCAS) to protect its network and contact center from external threats.
 
To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.
Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Brent Curry
Hill+Knowlton Strategies
+1.312.255.3086
brent.curry@hkstrategies.com
About Agero

Agero’s mission is to safeguard consumers on the road through a unique combination of platform intelligence and human powered solutions, strengthening our clients’ relationships with their drivers. We are a leading provider of driving solutions, including roadside assistance, accident management, consumer affairs and telematics. The company protects 115 million vehicles in partnership with leading automobile manufacturers, insurance carriers and other diversified clients. Managing one of the largest national networks of service providers, Agero responds to more than 12 million requests annually for assistance. Agero, a member company of The Cross Country Group, is headquartered in Medford, Mass., with operations throughout North America. To learn more, visit www.agero.com and follow on Twitter @AgeroNews.

About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Brent Curry

  • +1.312.255.3086

Oracle EBS Cloud Manager: New Release (19.3.1) Is Now Available

Online Apps DBA - Tue, 2019-12-17 02:31

[New Update] Oracle EBS Cloud Manager (EBSCM) version 19.3.1 Now Available: How To Upgrade. EBS Cloud Manager is used to build, manage & migrate EBS (R12) on Oracle Cloud. The latest version of EBS Cloud Manager, 19.3.1, is now available. Check out: https://k21academy.com/ebscloud34 The blog post discusses: ✦ What is EBS Cloud Manager ✦ What’s new in […]

The post Oracle EBS Cloud Manager: New Release (19.3.1) Is Now Available appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Power BI Report Server – Kerberos Advanced configuration

Yann Neuhaus - Mon, 2019-12-16 12:21
Introduction

Following the basic configuration explained in a previous blog (Link), I describe here some more advanced configurations for specific cases that are nevertheless often met at customers. This post only complements what has been described in the previous blog, for specific situations needing additional or alternative configurations.

Configuration using HTTPS protocol with DNS alias

If you are asked to secure access to your Power BI Report Server, the solution is of course to use the HTTPS protocol, but first you will need a server certificate installed on your Power BI Report Server host. The aim of this blog is not to explain how to create a certificate, but just to give you the trick to make it compliant with Kerberos delegation when using a DNS alias for your server.
In that case, where the URL used to access your Power BI Report Server portal is based on a DNS alias, you have to generate a certificate with a CN matching your URL, but do not forget to also specify a subject alternative name of type DNS that matches your DNS alias.
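
As a minimal sketch for a test environment (in production the certificate would normally be issued by your certification authority), a certificate whose CN and DNS subject alternative name both carry the alias used in the SPN example below can be created in PowerShell:

# Test-only sketch: -DnsName sets the subject CN to the given name and also adds it as a DNS SAN.
New-SelfSignedCertificate -DnsName "PowerBIRS.dbi-test.local" -CertStoreLocation Cert:\LocalMachine\My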

After having your certificate issued and installed on your server, you can bind it to your Web Service URL and your Web Portal URL using the Report Server Configuration Manager.

Finally, do not forget to create the HTTP service SPN for the Power BI Report Server service account using your certificate URL:

SetSpn -a http/PowerBIRS.dbi-test.local PBIRSServiceAccount
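
As a quick sanity check (not in the original post), setspn can also be used from a PowerShell or command prompt to confirm the SPN is registered exactly once and on the right account:

# Search the forest for the SPN (also reveals duplicate registrations)
SetSpn -Q http/PowerBIRS.dbi-test.local

# List every SPN currently registered on the report server service account
SetSpn -L PBIRSServiceAccount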

 

Using Data Sources on SQL Server AlwaysOn with read-only ApplicationIntent

If your report data sources point to SQL Server databases participating in availability groups, with a replica set as read-only, you probably want your reporting system to read from that replica in order to minimize the load on your primary node.
To force the reports to query the data from the read-only replica, the parameter ApplicationIntent=ReadOnly is specified in the connection string of the data source (Data Source=;Initial Catalog=;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;Encrypt=True).
In this case, a redirection is made by the SQL Server listener to the dedicated read-only node.
In this context, if you use integrated security with Kerberos, you have to deal with the SPN of the read-only node, since the read is redirected to it.
In this case, additional SPNs must be created for each SQL Server Database Engine instance name (or DNS alias) participating in the targeted availability group. I recommend creating all the involved SPNs to cover every case where the roles of your replicas change.
To illustrate this case, see below the SPNs created for the SQL Server service account:

SetSPN -a MSSQLSvc/LiDBSrc001:1433 svc_srvsql
SetSPN -a MSSQLSvc/LiDBSrc001.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r1 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r1.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r2 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r2.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r3 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r3.dbi-test.local:1433 svc_srvsql
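
As a filled-in illustration of the connection string shown earlier (the listener and database names below are hypothetical), a report data source targeting the availability-group listener could look like this:

Data Source=DBSrcIn01Listener.dbi-test.local;Initial Catalog=ReportSourceDB;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;Encrypt=True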

If you are using constrained delegation, do not forget to add all these services to your Power BI Report Server service account, trusting it for delegation to these published services.
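
As a hedged sketch (not from the original post; the account and SPN names simply reuse the examples above), the msDS-AllowedToDelegateTo list of the Power BI Report Server service account could be maintained with the ActiveDirectory PowerShell module instead of the AD Users and Computers GUI:

# Sketch only: maintain the constrained-delegation list of the report server
# service account; names reuse the examples above.
Import-Module ActiveDirectory

Set-ADUser -Identity PBIRSServiceAccount -Add @{
    'msDS-AllowedToDelegateTo' = @(
        'MSSQLSvc/DBSrcIn01r1.dbi-test.local:1433',
        'MSSQLSvc/DBSrcIn01r2.dbi-test.local:1433',
        'MSSQLSvc/DBSrcIn01r3.dbi-test.local:1433'
    )
}

# Only if "use any authentication protocol" (protocol transition) is required:
Set-ADAccountControl -Identity PBIRSServiceAccount -TrustedToAuthForDelegation $true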

The article Power BI Report Server – Kerberos Advanced configuration appeared first on Blog dbi services.

IOT Bug

Jonathan Lewis - Mon, 2019-12-16 09:58

Here’s a worrying bug that showed up a couple of days ago on the Oracle-L mailing list. It’s a problem that I’ve tested against 12.2.0.1 and 19.3.0.0 – it may be present on earlier versions of Oracle. One of the nastiest things about it is that you might not notice it until you get an “out of space” error from the operating system. You won’t get any wrong results from it, but it may well be adding an undesirable performance overhead.

Basically it seems that (under some circumstances, at least) Oracle is setting the “block guess” component of the secondary index on Index Organized Tables (IOTs) to point to blocks in the overflow segment instead of blocks in the primary key segment. As a result, when you execute a query that accesses the IOT through the secondary index and has to do reads from disc to satisfy the query – your session goes through the following steps:

  • Identify index entry from secondary index – acquire “block guess”
  • Read indicated block and discover the object number on the block is wrong, and the block type is wrong
  • Write a (silent) ORA-01410 error and do a block dump into the trace file
  • Use the “logical rowid” from the secondary index (i.e. the stored primary key value) to access the primary key index by key value

So your query runs to completion and you get the right result because Oracle eventually gets there using the primary key component stored in the secondary index, but it always starts with the guess [see sidebar] and for every block you read into the cache because of the guess you get a dump to the trace file.

Here’s a little code to demonstrate. The problem with this code is that everything appears to work perfectly; you have to be able to find the trace file for your session to see what’s gone wrong. First we create some data – this code is largely copied from the original posting on Oracle-L, with a few minor changes:


rem
rem     Script:         iot_bug_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Notes
rem     The OP had tested on 19.5.0.0 to get the same effect, see:
rem     //www.freelists.org/post/oracle-l/IOT-cannot-get-valid-consensus-bug-or-unexplained-behavio
rem

drop table randomload purge;

create table randomload(
        roll number,
        name varchar2(40),
        mark1 number,
        mark2 number,
        mark3 number,
        mark4 number,
        mark5 number,
        mark6 number,
        primary key (roll)
) 
organization index 
including mark3 overflow
;

create index randomload_idx on randomload(mark6);

insert into randomload 
select 
        rownum, 
        dbms_random.string(0,40) name, 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,10000)) 
from 
        dual 
connect by 
        level < 1e5 -- > comment to avoid wordpress format issue
;

commit;

exec dbms_stats.gather_table_stats(null,'randomload', cascade=>true);

prompt  ==================================================
prompt  pct_direct_access should be 100 for randomload_idx
prompt  ==================================================

select 
        table_name, index_name, num_rows, pct_direct_access, iot_redundant_pkey_elim  
from 
        user_indexes
where
        table_name = 'RANDOMLOAD'
;

It should take just a few seconds to build the data set and you should check that the pct_direct_access is 100 for the index called randomload_idx.

The next step is to run a query that will do an index range scan on the secondary index.

 
column mark6 new_value m_6

select 
        mark6, count(*) 
from
        randomload 
group by 
        mark6
order by 
        count(*)
fetch first 5 rows only
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';
set serveroutput off

select avg(mark3) 
from 
        randomload 
where 
        mark6 = &m_6
;

select * from table(dbms_xplan.display_cursor);

alter session set events '10046 trace name context off';
set serveroutput on

I’ve started by selecting one of the least frequently occurring values of mark6 (a column I know to be in the overflow); then I’ve flushed the buffer cache so that any access I make to the data will have to start with disk reads (the original poster suggested restarting the database at this point, but that’s not necessary).

Then I’ve enabled sql_trace to show wait states (to capture details of which blocks were read and which object they belong to), and I’ve run a query for mark3 (a column that is in the primary key (TOP) segment of the IOT) and pulled its execution plan from memory to check that the query did use a range scan of the secondary index. Here’s the plan:

----------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |       |       |    11 (100)|          |
|   1 |  SORT AGGREGATE    |                   |     1 |     7 |            |          |
|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_77298 |    10 |    70 |    11   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN| RANDOMLOAD_IDX    |    10 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MARK6"=1316)
   3 - access("MARK6"=1316)

As you can see the plan shows what we are hoping to see – an index range scan of the secondary index that lets it follow up with a unique scan of the primary key segment. It’s just a little odd that the access predicate reported for operation 2 (unique scan of TOP) suggests that the access is on a column that isn’t in the primary key and isn’t even in the TOP section.

So the query works and gives the right answer. But what do we find in the trace directory ? If you’re running 12c (possibly only 12.2), each time the error occurs the following pattern of information will be written to the alert log (it didn’t appear in 19.3)


ORCL(3):Hex dump of (file 22, block 16747) in trace file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_7888.trc
ORCL(3):
ORCL(3):Corrupt block relative dba: 0x0580416b (file 22, block 16747)
ORCL(3):Bad header found during multiblock buffer read (logical check)
ORCL(3):Data in bad block:
ORCL(3): type: 6 format: 2 rdba: 0x0580416b
ORCL(3): last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
ORCL(3): spare3: 0x0
ORCL(3): consistency value in tail: 0x484c0601
ORCL(3): check value in block header: 0x4408
ORCL(3): computed block checksum: 0x0
ORCL(3):

And the following pattern of information is written to the trace file [Update: a follow-up test on 11.2.0.4 suggests that the basic “wrong block address” error also happens in that version of Oracle, but doesn’t result in a dump to the trace file]:


kcbzibmlt:: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 makecr 0 line 15461 ds_blk (22, 16747) bh_blk (22, 16747)
kcbds 0x7ff1ca8c0b30: pdb 3, tsn 8, rdba 0x0580416b, afn 22, objd 135348, cls 1, tidflg 0x8 0x80 0x0
    dsflg 0x108000, dsflg2 0x0, lobid 0x0:0, cnt 0, addr 0x0, exf 0x10a60af0, dx 0x0, ctx 0
    whr: 'qeilwh03: qeilbk'
env [0x7ff1ca8e3e54]: (scn: 0x00000b860e364893   xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid:  0x0000.000.00000000  st-scn: 0x0000000000000000  hi-scn: 0x0000000000000000  ma-scn: 0x00000b860e364879  flg: 0x00000660)
BH (0xb1fd6278) file#: 22 rdba: 0x0580416b (22/16747) class: 1 ba: 0xb1c34000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 763,14
  dbwrid: 0 obj: 135348 objn: 135348 tsn: [3/8] afn: 22 hint: f
  hash: [0x9eff0528,0x77cff808] lru: [0xb1fd2578,0x9ff84658]
  ckptq: [NULL] fileq: [NULL]
  objq: [0xb6f654c0,0x9ff84680] objaq: [0xb6f654d0,0x9ff84690]
  use: [0x77b78128,0x77b78128] wait: [NULL]
  st: READING md: EXCL tch: 0
  flags: only_sequential_access
  Printing buffer operation history (latest change first):
  cnt: 5
  01. sid:10 L122:zgb:set:st          02. sid:10 L830:olq1:clr:WRT+CKT
  03. sid:10 L951:zgb:lnk:objq        04. sid:10 L372:zgb:set:MEXCL
  05. sid:10 L123:zgb:no:FEN          06. sid:10 L083:zgb:ent:fn
  07. sid:08 L192:kcbbic2:bic:FBD     08. sid:08 L191:kcbbic2:bic:FBW
  09. sid:08 L604:bic2:bis:REU        10. sid:08 L190:kcbbic2:bic:FAW
  11. sid:08 L602:bic1_int:bis:FWC    12. sid:08 L822:bic1_int:ent:rtn
  13. sid:08 L832:oswmqbg1:clr:WRT    14. sid:08 L930:kubc:sw:mq
  15. sid:08 L913:bxsv:sw:objq        16. sid:08 L608:bxsv:bis:FBW
Hex dump of (file 22, block 16747)

   ... etc.

Corrupt block relative dba: 0x0580416b (file 22, block 16747)
Bad header found during multiblock buffer read (logical check)
Data in bad block:
 type: 6 format: 2 rdba: 0x0580416b
 last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
 spare3: 0x0
 consistency value in tail: 0x484c0601
 check value in block header: 0x4408
 computed block checksum: 0x0
TRCMIR:kcf_reread     :start:  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf
TRCMIR:kcf_reread     :done :  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf

The nasty bit, of course, is the bit I’ve removed and replaced with just “etc.”: it’s a complete block dump (raw and symbolic) which in my example was something like 500 lines and 35KB in size.

It’s not immediately obvious exactly what’s going on and why, but the 10046 trace helps a little. From another run of the test (on 19.3.0.0) I got the following combination of details – which is an extract showing the bit of the wait state trace leading into the start of the first block dump:

WAIT #140478118667016: nam='db file scattered read' ela= 108 file#=13 block#=256 blocks=32 obj#=77313 tim=103574529210
WAIT #140478118667016: nam='db file scattered read' ela= 2236 file#=13 block#=640 blocks=32 obj#=77313 tim=103574531549
WAIT #140478118667016: nam='db file scattered read' ela= 534 file#=13 block#=212 blocks=32 obj#=77312 tim=103574532257
kcbzibmlt: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 warm_up abort 0 makecr 0 line 16082 ds_blk (13, 212) bh_blk (13, 212)

Object 77313 is the secondary index, object 77312 is the primary key index (IOT_TOP). It may seem a little odd that Oracle is using db file scattered reads of 32 blocks to read the indexes, but this is a side effect of flushing the buffer cache – Oracle may decide to prefetch many extra blocks of an object to “warm up” the cache just after instance startup or a flush of the buffer cache. The thing I want to check, though, is what’s wrong with the blocks that Oracle read from object 77312:


alter system dump datafile 13 block min 212 block max 243;

BH (0xc8f68e68) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xc8266000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa7fd6c38) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xa7c2a000
  set: 12 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa5f75780) file#: 13 rdba: 0x034000d5 (13/213) class: 0 ba: 0xa5384000
  set: 11 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xdafe9220) file#: 13 rdba: 0x034000d5 (13/213) class: 1 ba: 0xdadcc000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

I’ve reported the first few lines of the symbolic dump for the first few blocks of the resulting trace file. Look at the third line of each group of three BH lines: it’s reporting object 77311 (the overflow segment), not 77312 (the TOP segment). And every single block reported in the db file scattered read of 32 blocks for object 77312 reports itself, when dumped, as being part of object 77311. And that’s possibly the immediate cause of the ORA-01410.

We can take the investigation a little further by dumping a leaf block or two from the secondary index.


alter session set events 'immediate trace name treedump level 77313';

----- begin tree dump
branch: 0x3400104 54526212 (0: nrow: 542, level: 1)
   leaf: 0x340010d 54526221 (-1: row:278.278 avs:2479)
   leaf: 0x340075e 54527838 (0: row:132.132 avs:5372)
   leaf: 0x34005fb 54527483 (1: row:41.41 avs:7185)

alter system dump datafile 13 block 1886   -- leaf: 0x340075e

BH (0xd5f5d090) file#: 13 rdba: 0x0340075e (13/1886) class: 1 ba: 0xd5158000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77313 objn: 77313 tsn: [3/6] afn: 13 hint: f
...
row#6[5796] flag: K------, lock: 0, len=18
col 0; len 2; (2):  c1 1d
col 1; len 4; (4):  c3 07 41 5c
tl: 8 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 4]  03 40 05 7c

I’ve done a treedump of the secondary index and picked a leaf block address from the treedump and dumped that leaf block, and from that leaf block I’ve extracted one index entry to show you the three components: the key value (c1 1d), the primary key for the row (c3 07 41 5c), and the block guess (03 40 05 75). Read the block guess as a 4 byte hex number, and it translates to file 13, block 1397 – which should belong to the TOP segment. So the exciting question is – what object does block (13, 1397) think it belongs to ?


alter system dump datafile 13 block 1397;

Block header dump:  0x03400575
 Object id on Block? Y
 seg/obj: 0x12dff  csc:  0x00000b860e308c46  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x3400501 ver: 0x01 opc: 0
     inc: 0  exflg: 0

Converting from hex to decimal: obj: 0x12dff = 77311, which is the overflow segment. The secondary index block guess is pointing at a block in the overflow segment.
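
If you prefer not to do the hex arithmetic by hand, here is a small sketch (not part of the original test script) that decodes the object id and the block guess from SQL*Plus using dbms_utility:

-- Decode the seg/obj value and the 4-byte block guess seen in the dumps.
-- dbms_utility.data_block_address_file/_block split a DBA into file# and block#.
select
        to_number('12dff','xxxxx')                                               obj_id,
        dbms_utility.data_block_address_file (to_number('03400575','xxxxxxxx'))  guess_file,
        dbms_utility.data_block_address_block(to_number('03400575','xxxxxxxx'))  guess_block
from
        dual
;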

There are two ways to handle this problem – you could simply rebuild the index (alter index rebuild) or, as the original poster did, use the “update block references” command to correct all the block guesses: “alter index randomload_idx update block references;”. Neither is desirable, but if you’re seeing a lot of large trace files following the pattern above then it may be necessary.
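
Putting the two options side by side (a sketch that simply reuses the objects and the stats call from the test script above – you would run one option or the other, not both):

-- Option 1: rebuild the secondary index (recreates the block guesses)
alter index randomload_idx rebuild;

-- Option 2 (what the original poster did): correct only the block guesses
alter index randomload_idx update block references;

-- then re-gather stats and confirm pct_direct_access is back at 100
exec dbms_stats.gather_table_stats(null,'randomload', cascade=>true);

select  index_name, pct_direct_access
from    user_indexes
where   table_name = 'RANDOMLOAD'
;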

There was one particular inconsistency in the tests – which I ran many times – occasionally the pct_direct_access for the secondary index would be reported as zero (which, technically, should always happen given the error).  If it did, of course, Oracle wouldn’t follow the guess but would go straight to the step where it used the primary key “logical rowid” – thus bypassing the error and block dump.

tl;dr

In some circumstances the block guesses in the secondary indexes of IOTs may be pointing to the overflow segment instead of the primary key (TOP) segment. If this happens then queries will still run and give the right answers, but whenever they read a “guessed” block from disc they will report an ORA-01410 error and dump a block trace. This will affect performance and may cause space problems at the O/S level.

Sidebar

An entry in the secondary index of an Index Organized Table (IOT) consists of three parts, which initially we can think of in the form:

({key-value}, {logical rowid}, {block guess})

Since IOTs don’t have real rowids, the “logical rowid” is actually the primary key of the row where the {key value} will be found. As a shortcut for efficient execution, Oracle includes the block address (4 bytes) where that primary key value was stored when the row was inserted. Because an IOT is an index, “rows” in the IOT can move as new data is inserted and leaf blocks split, so eventually any primary key may move to a different block – this is why we refer to the block address as a guess – a few days, hours, or minutes after you’ve inserted the row the block address may no longer be correct.

To help the runtime engine do the right thing Oracle collects a statistic called pct_direct_access for secondary indexes of IOTs. This is a measure of what percentage of the block guesses are still correct at the time that the statistics are gathered. If this value is high enough the run-time engine will choose to try using the block guesses while executing a query (falling back to using the logical rowid if it turns out that the guess is invalid), but if the value drops too low the optimizer will ignore the block guesses and only use the logical rowid.

Not relevant to this note – but a final point about secondary indexes and logical rowids – if the definition of the index includes some of the columns from the primary key, Oracle won’t store those columns twice (in more recent versions, that is) – the code is clever enough to use the values stored in the {key value} component when it needs to use the {logical rowid} component.

 

OT Footnote

I’ve decided this year to donate to a charity that works to reduce child mortality rates in Nepal with a two-pronged attack on malnutrition: feeding starving children, then educating their parents on how to make the best use of local resources to grow the most appropriate crops and use the best preparation methods to produce nourishing meals in the future. (They also run other projects to improve the lives of young people in Nepal – here’s a link to their home page, and a direct link to a 4-minute video that gives you a quick insight into what they do and how they do it.)

If you’re thinking of making any small donations to charity over the next few weeks, please think of this one. To make your donation more valuable I’ve set up a justgiving page and will match any donations made, up to a total of £1,000.

Thank you.


Oracle Recognized as a Leader in Gartner Magic Quadrant for Manufacturing Execution Systems for Oracle Manufacturing Cloud

Oracle Press Releases - Mon, 2019-12-16 07:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Manufacturing Execution Systems for Oracle Manufacturing Cloud

Redwood Shores, Calif.—Dec 16, 2019

Oracle SCM Cloud provides end-to-end technology that takes customers beyond supply chain operations and into integrated business planning.

Oracle has been named a Leader in Gartner’s 2019 “Magic Quadrant for Manufacturing Execution Systems1” report. Out of 16 companies evaluated, Oracle is positioned as a Leader based on its completeness of vision and ability to execute for its Oracle Manufacturing Cloud–part of Oracle Supply Chain Management (SCM) Cloud. A complimentary copy of the report is available here.

“Manufacturing organizations are increasingly seeking solutions that extract additional value from manufacturing operations through increased efficiency and reduced costs,” said Andy Binsley, vice president, Manufacturing and ALM Strategy, Oracle. “They would also like to do that on the Cloud. To help our customers achieve these twin goals, we have integrated artificial intelligence, machine learning and Internet of Things capabilities within Oracle Supply Chain Management Cloud. We are pleased to be acknowledged as a Leader for manufacturing execution systems (MES) by Gartner, and see this recognition as a testament to our success in delivering business value to manufacturing organizations, and doing so on the Cloud.”

Gartner estimates that “By 2024, 50% of MES solutions will include industrial IoT (IIoT) platforms synchronized with microservices-based manufacturing operations management (MOM) apps, providing near-real-time transaction management, control, data collection and analytics.”

With Oracle SCM Cloud, Oracle provides a suite of supply chain cloud applications that enable businesses, including manufacturers, to manage their supply chains with the scale, security, innovation, and agility that today’s markets require. Oracle SCM Cloud provides end-to-end technology that takes customers beyond supply chain operations and into integrated business planning.

Oracle SCM Cloud has garnered consistent industry recognition. Oracle was recently named a Leader in both Gartner’s “Magic Quadrant for Warehouse Management Systems2,” and Gartner’s “Magic Quadrant for Transportation Management Systems3.”

1 Gartner, Magic Quadrant for Manufacturing Execution Systems, Rick Franzosa, 29 October 2019
2 Gartner, Magic Quadrant for Warehouse Management Systems, C. Klappich, Simon Tunstall, 8 May 2019
3 Gartner, Magic Quadrant for Transportation Management Systems, Bart De Muynck, Brock Johns, Oscar Sanchez Duran, 27 March 2019

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information
For additional information on Oracle Supply Chain Management (SCM) Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Contact Info
Drew Smith
Oracle
+1.415.336.1103
drew.j.smith@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Drew Smith

  • +1.415.336.1103

Clob vs Binary XML storage

Tom Kyte - Sun, 2019-12-15 17:54
Hello Team, While doing poc for storing XML in ClOB storage and Binary XML storage ,I could see storing XML in Binary XML takes less table space as compared to CLOB .As far as I know both store XML in LOB storage.so why there is difference betwee...
Categories: DBA Blogs

ora-24247 when making an https call

Tom Kyte - Sun, 2019-12-15 17:54
Hi, I have a problem when making an https call inside a package. It doesn't appear to recognise the privileges granted to access the acl. When I call utl_http.begin_request in an anonymous plsql block or in a procedure with authid defined as cu...
Categories: DBA Blogs

HA and Failover in Oracle RAC

Tom Kyte - Sun, 2019-12-15 17:54
Hello, Ask Tom Team. I have some many questions about Oracle RAC HA and Failover. I was reading the info in below link and it help me a lot. But I still have some questions. https://asktom.oracle.com/pls/apex/asktom.search?tag=failover-in-rac...
Categories: DBA Blogs

Complex Query

Tom Kyte - Sun, 2019-12-15 17:54
I have a large number of orders (200) involving around 2000 diferent products and need to group the in batches of 6 orders. The task is to identify the best possible groups of orders so performance (human performance) can be maximized. As a start...
Categories: DBA Blogs
