Feed aggregator

Can we call a procedure in select statement with any restriction?

Tom Kyte - Sun, 2019-12-15 17:54
Hi Tom, please explain with a simple example: can we restrict how a function is invoked in a SELECT statement? Can we call a procedure in a SELECT statement, and are there any restrictions?
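For background, the distinction the question is about can be shown in a couple of lines. A minimal sketch (the connection string and names are placeholders, and this is an illustration, not the Ask TOM answer): a function can be invoked in a SELECT as long as it is callable from SQL, while a procedure cannot appear in a SELECT at all.

sqlplus -s user/pwd@db <<'EOF'
-- A function is callable from SQL if it returns a SQL type and performs no DML
create or replace function f_one return number is begin return 1; end;
/
select f_one from dual;
-- A procedure cannot appear in a SELECT list; call it from PL/SQL instead:
-- begin my_proc; end;
EOF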
Categories: DBA Blogs

Documentum – NoSuchMethodError on setResourceBundle with D2-REST 16.4 on WebLogic

Yann Neuhaus - Sat, 2019-12-14 02:00

In the scope of an upgrade project, with some colleagues, we have been deploying some D2-REST applications on Kubernetes pods using WebLogic Server. As explained in a previous blog, we first tried to upgrade our D2-REST 4.x to 16.4 but faced a small error. I don't know if you have already used or deployed D2-REST, but it seems to me that the deployment is always somewhat chaotic: sometimes you need to apply some steps, then for the next version they aren't needed anymore, but later they are needed again, and so on. So in the end, we always try to deploy the OOTB package with some small improvements, and whenever we face an error, we fix it for that specific version and that version only. Never assume that a fix for one version is good for all versions.

Below, I will be using some scripts and properties files that are present in the “dbi_resources” folder: these are utilities we use for automation and to simplify our lives. So we tried to deploy D2-REST 16.4 on our WebLogic Server 12.2.1.x:

[weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ wlst_cmd="$ORACLE_HOME/oracle_common/common/bin/wlst.sh"
[weblogic@wsd2rest-0 dbi]$ wlst_script="${dbi_resources}/manageApplication.wls"
[weblogic@wsd2rest-0 dbi]$ domain_prop="${dbi_resources}/domain.properties"
[weblogic@wsd2rest-0 dbi]$ deploy_prop="${dbi_resources}/D2-REST.deploy"
[weblogic@wsd2rest-0 dbi]$ undeploy_prop="${dbi_resources}/D2-REST.undeploy"
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${deploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.deploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:49:19 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, D2-REST [archive: $APPLICATIONS/D2-REST.war], to msD2-REST-02 msD2-REST-01 .>
ERROR... check error messages for cause.
Error occurred while performing activate : Error while Activating changes. : java.lang.NoSuchMethodError: org.apache.log4j.Logger.setResourceBundle(Ljava/util/ResourceBundle;)V
Use dumpStack() to view the full stacktrace :
Problem invoking WLST - Traceback (innermost last):
  File "${dbi_resources}/manageApplication.wls", line 77, in ?
  File "<iostream>", line 569, in stopEdit
  File "<iostream>", line 553, in raiseWLSTException
WLSTException: Error occurred while performing stopEdit : Cannot call stopEdit without an edit session in progress

[weblogic@wsd2rest-0 dbi]$

 

At this point, the application has been deployed but it cannot be started properly. It will therefore be stuck in the “New” status on the WebLogic side. In the D2-REST log file, the error message looks like this:

2019-12-11 16:49:57,112 UTC [ERROR] ([ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)') - o.springframework.web.context.ContextLoader   : Context initialization failed
java.lang.NoSuchMethodError: org.apache.log4j.Logger.setResourceBundle(Ljava/util/ResourceBundle;)V
        at com.documentum.fc.common.DfLogger.<clinit>(DfLogger.java:622) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.impl.logging.LoggingConfigurator.onPreferencesInitialized(LoggingConfigurator.java:178) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.DfPreferences.initialize(DfPreferences.java:71) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.DfPreferences.getInstance(DfPreferences.java:43) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfSimpleDbor.getDefaultDbor(DfSimpleDbor.java:78) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfSimpleDbor.<init>(DfSimpleDbor.java:66) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfClient$ClientImpl.<init>(DfClient.java:350) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfClient.<clinit>(DfClient.java:766) ~[dfc-16.4.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.getDfcVersion(WebAppContextInitializer.java:104) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.collectInfo(WebAppContextInitializer.java:81) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.preloadAppEnvironment(WebAppContextInitializer.java:67) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.initialize(WebAppContextInitializer.java:38) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.initialize(WebAppContextInitializer.java:31) ~[_wl_cls_gen.jar:na]
        at org.springframework.web.context.ContextLoader.customizeContext(ContextLoader.java:482) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:442) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:705) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328) [com.oracle.weblogic.security.subject.jar:12.2.1.3]
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197) [com.oracle.weblogic.security.subject.jar:12.2.1.3]
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.executeContextListener(EventsManager.java:251) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:204) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:192) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1921) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3106) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1843) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:884) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper$StartStateChange.next(ExtensibleModuleWrapper.java:360) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper$StartStateChange.next(ExtensibleModuleWrapper.java:356) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper.start(ExtensibleModuleWrapper.java:138) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:124) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:233) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:228) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:78) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:52) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:752) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:262) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.SingleModuleDeployment.activate(SingleModuleDeployment.java:52) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:165) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:90) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:631) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:171) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:121) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:348) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:907) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1468) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:459) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:181) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:217) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:14) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:69) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:678) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352) [com.bea.core.utils.full.jar:12.2.1.3]
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337) [com.bea.core.utils.full.jar:12.2.1.3]
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57) [com.oracle.weblogic.work.jar:12.2.1.3]
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]

 

The solution is quite simple: it's just a conflict between the log4j jars that come with the OOTB war file provided by OpenText. You just need to undeploy the application, remove the conflicting jar and redeploy it afterwards. If you are facing the error above, it is linked to the “log4j-over-slf4j” jar file and you can solve it like this:

[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${undeploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.undeploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:54:46 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating undeploy operation for application, D2-REST [archive: null], to msD2-REST-01 msD2-REST-02 .>

Current Status of your Deployment:
Deployment command type: undeploy
Deployment State : completed
Deployment Message : no message
None

>>> Execution completed.
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -tvf D2-REST.war | grep "WEB-INF/lib/log4j"
481535 Tue Jan 05 05:02:00 UTC 2016 WEB-INF/lib/log4j-1.2.16.jar
 12359 Mon Dec 12 03:29:02 UTC 2016 WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ zip -d D2-REST.war WEB-INF/lib/log4j-over-slf4j*
deleting: WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${deploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.deploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:56:05 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, D2-REST [archive: $APPLICATIONS/D2-REST.war], to msD2-REST-02 msD2-REST-01 .>

Current Status of your Deployment:
Deployment command type: deploy
Deployment State : completed
Deployment Message : no message
None

>>> Execution completed.
[weblogic@wsd2rest-0 dbi]$

 

As you can see above, D2-REST 16.4 is now successfully deployed. You can access it and work with it without any issue.

[weblogic@wsd2rest-0 dbi]$ curl -s -k https://lb_url/D2-REST/product-info | python -mjson.tool
{
    "links": [
        {
            "href": "https://lb_url/D2-REST/product-info",
            "rel": "self"
        }
    ],
    "name": "documentum-rest-services-product-info",
    "properties": {
        "build_number": "0511",
        "major": "16.4",
        "minor": "0000",
        "product": "Documentum D2 REST Services",
        "product_version": "16.4.0000.0511",
        "revision_number": "NA"
    }
}
[weblogic@wsd2rest-0 dbi]$
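To avoid hitting this again on the next deployment, the check and cleanup can be scripted upfront. Here is a minimal sketch, assuming the same war file name as above; the script and its logic are mine, not part of the OOTB package:

#!/bin/bash
# Hedged pre-deployment check: remove the conflicting log4j bridge jar,
# if present, before handing the archive to WLST.
war="D2-REST.war"
if jar -tf "${war}" | grep -q "WEB-INF/lib/log4j-over-slf4j"; then
  echo "Removing conflicting jar from ${war}..."
  zip -d "${war}" "WEB-INF/lib/log4j-over-slf4j*"
fi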

 

The article Documentum – NoSuchMethodError on setResourceBundle with D2-REST 16.4 on WebLogic appeared first on the dbi services blog.

Documentum – Cast trouble with D2-REST 16.5.x on WebLogic

Yann Neuhaus - Sat, 2019-12-14 02:00

In the scope of an upgrade project, with some colleagues, we have been deploying some D2-REST applications on Kubernetes pods using WebLogic Server. At the beginning, we used D2-REST 16.4 and it was working properly (once the issue described here was fixed, along with some others linked to FIPS 140-2, and so on). After that, we tried to switch to higher versions (16.5.0 Pxx, 16.5.1 P00 or P04) but it stopped working. We were able to replicate the issue with WebLogic Server 12.2.1.3 and 12.2.1.4, so it is not specific to one small use case: it seems global to the D2-REST 16.5.x versions on WebLogic. It might impact other application servers as well; that would need some testing.

Upon accessing the D2-REST URL (e.g. https://lb_url/D2-REST), the service seemed to be working, but when going further, on the product information page for example (e.g. https://lb_url/D2-REST/product-info), the following error was always displayed:

<error xmlns="http://identifiers.emc.com/vocab/documentum" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <status>500</status>
  <code>E_INTERNAL_SERVER_ERROR</code>
  <message>An internal server error occurs.</message>
  <details>
    org.owasp.esapi.reference.DefaultSecurityConfiguration cannot be cast to com.emc.d2.web.security.D2SecurityConfiguration
  </details>
  <id>51872a76-g47f-4d6e-9d47-e9fa5d8c1291</id>
</error>

 

The error generated in the D2-REST logs at that time was:

java.lang.ClassCastException: org.owasp.esapi.reference.DefaultSecurityConfiguration cannot be cast to com.emc.d2.web.security.D2SecurityConfiguration
	at com.emc.d2.web.security.D2HttpUtilities.getHeader(D2HttpUtilities.java:40)
	at com.emc.documentum.d2.rest.filter.AppInfoFilter.getRemoteAddr(AppInfoFilter.java:82)
	at com.emc.documentum.d2.rest.filter.AppInfoFilter.doFilter(AppInfoFilter.java:36)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.security.filter.RepositoryNamingFilter.doFilter(RepositoryNamingFilter.java:40)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.filter.RestCorsFilter.doFilterInternal(RestCorsFilter.java:47)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.filter.CompressionFilter.doFilter(CompressionFilter.java:73)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.log.MessageLoggingFilter.doFilter(MessageLoggingFilter.java:69)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.security.filter.ExceptionHandlerFilter.doFilterInternal(ExceptionHandlerFilter.java:31)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3797)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3763)
	at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344)
	at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
	at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
	at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
	at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2451)
	at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2299)
	at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2277)
	at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1720)
	at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1680)

 

Since we couldn't find anything obvious, we opened an OpenText Support case (#4322241). There is a KB (KB14050670) around Internal Server Error, but it didn't help us in this case. After some research on the OpenText side, it turned out that this is a known issue and there is a solution for it, but it is not documented at the moment: that's the whole purpose of this blog. The solution is going to be in the next version of the D2FS REST Services Development Guide, so if you look on the OpenText Support Site today, you won't find anything related to this error yet. Don't ask me why it will be in the Development Guide; maybe they didn't find a more suitable location.

So the solution is very simple: you just have to add a small piece to the D2-REST web.xml file:

[weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -xvf D2-REST.war WEB-INF/web.xml
 inflated: WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ cat WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0"
         metadata-complete="true">

  <display-name>D2-REST</display-name>
  <description>D2-REST</description>
  <error-page>
    <error-code>404</error-code>
    <location>/errors/redirect/404</location>
  </error-page>
  <error-page>
    <error-code>500</error-code>
    <location>/errors/redirect/500</location>
  </error-page>
</web-app>
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,  <listener>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,    <listener-class>com.emc.d2.rest.context.WebAppContextListener</listener-class>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,  </listener>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ cat WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0"
         metadata-complete="true">

  <display-name>D2-REST</display-name>
  <description>D2-REST</description>
  <error-page>
    <error-code>404</error-code>
    <location>/errors/redirect/404</location>
  </error-page>
  <error-page>
    <error-code>500</error-code>
    <location>/errors/redirect/500</location>
  </error-page>
  <listener>
    <listener-class>com.emc.d2.rest.context.WebAppContextListener</listener-class>
  </listener>
</web-app>
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -uvf D2-REST.war WEB-INF/web.xml
adding: WEB-INF/web.xml(in = 753) (out= 326)(deflated 56%)
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ rm -rf WEB-INF/
[weblogic@wsd2rest-0 dbi]$

 

As you can see above, it’s all about adding a new listener into the web.xml file for the “WebAppContextListener“. This class – based on its name – has absolutely nothing to do with the error shown above and yet, adding this listener will solve the cast issue. So just redeploy/update your Application in WebLogic and that’s it, the issue should be gone.

 

The article Documentum – Cast trouble with D2-REST 16.5.x on WebLogic appeared first on the dbi services blog.

Database Link to 9.2 Database from 19c

Bobby Durrett's DBA Blog - Fri, 2019-12-13 15:12

I have mentioned in previous posts that I am working on migrating a large 11.2 database on HP-UX to 19c on Linux. I ran across a database link to an older 9.2 database in the current 11.2 database. That link does not work in 19c, so I thought I would blog about my attempts to get it working there. It may not be that useful to other people because it is a special case, but I want to remember it for myself if nothing else.

First, I'll just create a test table in my own schema on a 9.2 development database:

SQL> create table test as select * from v$version;

Table created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
PL/SQL Release 9.2.0.5.0 - Production
CORE	9.2.0.6.0	Production
TNS for HPUX: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

Next, I will create a link to this 9.2 database from a 19c database. I will hide the part of the link creation that has my password and the database details, but they are not needed.

SQL> create database link link_to_92
... removed for security reasons ...

Database link created.

SQL> 
SQL> select * from test@link_to_92;
select * from test@link_to_92
                   *
ERROR at line 1:
ORA-03134: Connections to this server version are no longer supported.

So I looked up ways to get around the ORA-03134 error. I can’t remember all the things I checked but I have a note that I looked at this one link: Resolving 3134 errors. The idea was to create a new database link from an 11.2 database to a 9.2 database. Then create a synonym on the 11.2 database for the table I want on the 9.2 system. Here is what that looks like on my test databases:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
... removed for brevity ...

SQL> create database link link_from_112
... removed for security ...

Database link created.

SQL> create synonym test for test@link_from_112;

Synonym created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

Now that I have the link and synonym on the 11.2 middleman database, I go back to the 19c database and create a link to the 11.2 database and query the synonym to see the original table:

SQL> select * from v$version;

BANNER                                                                           ...
-------------------------------------------------------------------------------- ...
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production           ...
...										    

SQL> create database link link_to_112
...

Database link created.
...
SQL> select * from v$version@link_to_112;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
...

SQL> select * from test@link_to_112;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

So far so good. I am not sure how clear I have been, but the point is that I could not query the table test on the 9.2 database from a 19c database without getting an error. By hopping through an 11.2 database, I can now query it. But, alas, that was not the end of my problems with this remote 9.2 database table.

When I first started looking at these remote 9.2 tables in my real system, I wanted to get an execution plan of a query that used them. The link-through-an-11.2-database trick let me query the tables, but not get a plan for the query.

SQL> truncate table plan_table;

Table truncated.

SQL> 
SQL> explain plan into plan_table for
  2  select * from test@link_to_112
  3  /

Explained.

SQL> 
SQL> set markup html preformat on
SQL> 
SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

SQL> 
SQL> select object_name from plan_table;

OBJECT_NAME
------------------------------------------------------------------------------

TEST

Kind of funky but not the end of the world. Only a small number of queries use these remote 9.2 tables, so I should be able to live without explain plan. Next, I needed to use the remote table in a PL/SQL package. For simplicity I will show it used in a proc:

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test@link_to_112;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

I tried creating a synonym for the remote table but got the same error:

SQL> create synonym test92 for test@link_to_112;

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

Finally, by chance I found that I could use a view for the remote synonym and the proc would compile:

SQL> create view test92 as select * from test@link_to_112;

View created.

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Procedure created.

SQL> SHOW ERRORS;
No errors.
SQL> 
SQL> execute bobbytest;

PL/SQL procedure successfully completed.

SQL> show errors
No errors.

Now one last thing to check. Will the plan work with the view?

SQL> explain plan into plan_table for
  2  select * from test92
  3  /

Explained.

SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

Sadly, the view was not a cure-all. So, here is a summary of what to do if you have a procedure on a 19c database that needs to access a table on a 9.2 database (a consolidated sketch follows the list):

  • Create a link on a 11.2 database to the 9.2 database
  • Create a synonym on the 11.2 database pointing to the table on the 9.2 database
  • Create a link on the 19c database to the 11.2 database
  • Create a view on the 19c database that queries the synonym on the 11.2 database
  • Use the view in your procedure on your 19c database
  • Explain plans may not work with SQL that uses the view
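Here is that sketch in one place. All credentials and TNS aliases are placeholders (I hid my real link definitions above), so treat the CONNECT TO clauses as illustrative only:

# On the 11.2 "middleman" database: link to 9.2 plus a synonym over the table
sqlplus -s user/pwd@db112 <<'EOF'
create database link link_from_112 connect to remote_user identified by pwd using 'DB92';
create synonym test for test@link_from_112;
EOF

# On the 19c database: link to 11.2 plus a view over the remote synonym
sqlplus -s user/pwd@db19c <<'EOF'
create database link link_to_112 connect to remote_user identified by pwd using 'DB112';
create view test92 as select * from test@link_to_112;
EOF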

Bobby

Categories: DBA Blogs

Updating the trail file location for Oracle GoldenGate Microservices

DBASolved - Fri, 2019-12-13 09:21

When you first install Oracle GoldenGate Microservices, you may have taken the standard installation approach, and all the configuration, logging and trail file information will reside in a standard directory structure.  This makes the architecture of your environment really easy.   Let’s say you want to identify what trail files are being used by the […]

The post Updating the trail file location for Oracle GoldenGate Microservices appeared first on DBASolved.

Categories: DBA Blogs

Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90

Oracle Press Releases - Thu, 2019-12-12 15:00
Press Release
Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90
Fusion ERP Cloud Revenue Up 37%; Autonomous Database Cloud Revenue Up >100%

Redwood Shores, Calif.—Dec 12, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2020 Q2 results. Total Revenues were $9.6 billion, up 1% in USD and in constant currency compared to Q2 last year. Cloud Services and License Support revenues were $6.8 billion, while Cloud License and On-Premise License revenues were $1.1 billion.

GAAP Operating Income was up 3% to $3.2 billion, and GAAP Operating Margin was 33%. Non-GAAP Operating Income was $4.0 billion, and non-GAAP Operating Margin was 42%. GAAP Net Income was $2.3 billion, and non-GAAP Net Income was $3.0 billion. GAAP Earnings Per Share was up 14% to $0.69, while non-GAAP Earnings Per Share was up 12% to $0.90.

Short-term deferred revenues were $8.1 billion. Operating Cash Flow was $13.8 billion during the trailing twelve months.

“We had another strong quarter in our Fusion and NetSuite cloud applications businesses with Fusion ERP revenues growing 37% and NetSuite ERP revenues growing 29%,” said Oracle CEO, Safra Catz. “This consistent rapid growth in the now multibillion dollar ERP segment of our cloud applications business has enabled Oracle to deliver a double-digit EPS growth rate year-after-year. I fully expect we will do that again this year.”

“It’s still early days, but the Oracle Autonomous Database already has thousands of customers running in our Gen2 Public Cloud,” said Oracle CTO, Larry Ellison. “Currently, our Autonomous Database running in our Public Cloud business is growing at a rate of over 100%. We expect that growth rate to increase dramatically as we release our Autonomous Database running on our Gen2 Cloud@Customer into our huge on-premise installed base over the next several months.”

The Board of Directors also declared a quarterly cash dividend of $0.24 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on January 9, 2020, with a payment date of January 23, 2020.

Q2 Fiscal 2020 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q2 results and fiscal 2020 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 4597628.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE:ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our earnings per share and our Autonomous Database business, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our success depends upon our ability to develop new products and services, integrate acquired products and services and enhance our existing products and services. (2) Our cloud strategy, including our Oracle Software-as-a-Service and Infrastructure-as-a-Service offerings, may adversely affect our revenues and profitability. (3) We might experience significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings. (4) If the security measures for our products and services are compromised and as a result, our customers' data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged, the IT services we provide to our customers could be disrupted, and customers may stop using our products and services, all of which could reduce our revenue and earnings, increase our expenses and expose us to legal claims and regulatory actions. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) Acquisitions present many risks and we may not achieve the financial and strategic goals that were contemplated at the time of a transaction. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 12, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Documentum – MigrationUtil – 5 – Change Installation Owner

Yann Neuhaus - Thu, 2019-12-12 12:59

The Documentum installation owner is the operating system user that owns the server executable and other related files, along with the OS process when the server is running. It is determined when the server is installed: in fact, it is the logged-in user that performed the Documentum installation. Of course, it is preferable to install Documentum and never change the installation owner. However, sometimes company policy changes and dictates that the original installation owner must be changed, for example because the user name does not conform to a new naming policy.

This blog is the last one of the MigrationUtil blog series (please find links to the other blogs below). I will show how to change the installation owner using the MigrationUtil. If you want to change only the password, please read this blog.

The installation owner user is important at the operating system and at the Docbase/Content Server level; it is given the following privileges:

  • Operating System:
    – Rights to start Documentum Services such as Docbase, Docbroker, Java Method Server and other installed Documentum products.
    – Permission to change the Content Server configuration (i.e. upgrade, create, and delete docbases).
    – Folder level permission to view data, configuration, and many log files located under the DOCUMENTUM_HOME directory.
  • Docbase and Content Server:
    – Superuser and System Administrator rights.
    – Set as the r_install_owner value in the dm_server_config object.
    – Set as the operating system user to run several Administrative jobs.

As you can deduce, changing the installation owner is not a minor change within Documentum, so it is very critical. That is why you have to prepare this operation very well and determine the best approach to execute it.
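Before changing anything, it is worth checking the value currently recorded in the docbase. A quick sketch using idql, with placeholder credentials:

idql Docbase1 -Udmadmin -Pxxx <<'EOF'
select r_object_id, r_install_owner from dm_server_config
go
EOF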

Two levels of change need to be done:

  • OS Level change:
    – Create the new install owner at the operating system level; it should correspond to the user_os_name of the new docbase user.
  • Docbase level change:
    – Create a new user in the docbase to be the installation owner and reassign the previous installation owner's objects to the new user. The MigrationUtil is able to do this part.
Preparation
Before any change:
  • Clean the environment:
    – Run the Consistency Checker job: the report gives you a list of bad data within your system. Cleaning up inconsistent data before making the change will speed up the process.
    – Purge all old log files: changing the installation owner requires updating permissions on Documentum data and log files. The purge reduces the work on unneeded data and will greatly speed up the process.
  • Back up:
    – Back up the entire impacted environment before performing any major change within Documentum (Content Server files and the database).
Create new user – OS Level

Add the new installation user at the OS level, in the same group as the current installation user:

[root@vmtestdctm01 ~]# useradd -g 1004 dmdocbase1
[root@vmtestdctm01 ~]# passwd dmdocbase1
Changing password for user dmdocbase1.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@vmtestdctm01 ~]# 

To find the group of the current installation user:

[root@vmtestdctm01 ~]# cat /etc/passwd
...
dmadmin:x:1002:1004::/home/dmadmin:/bin/bash
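Alternatively, the id command returns the uid and gid in one line (gid 1004 here is the group to reuse for the new user):

[root@vmtestdctm01 ~]# id dmadmin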
Create new user – Docbase Level

You need to create the user in all docbases, using the DQL query below:

CREATE dm_user OBJECT
SET user_name = 'dmdocbase1',
SET user_password = 'install164',
SET user_login_name = 'dmdocbase1',
SET user_address = 'dmdocbase1@dbi-services.com',
SET description = 'This User is the owner of docbase1',
SET user_os_name = 'dmdocbase1',
SET client_capability = 8,
SET user_privileges = 16,
SET user_xprivileges = 56,
SET user_source = 'inline password'

Result:

object_created  
----------------
1101e24080000500
Configure the MigrationUtil

Adapt the MigrationUtil configuration file as shown below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">docbase1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeInstallOwner">yes</entry>
<entry key="InstallOwner">dmadmin</entry>
<entry key="NewInstallOwner">dmdocbase1</entry>
<entry key="NewInstallOwnerPassword">install164</entry>
...
Migration
Stop Docbase(s) and Docbroker(s)

Before you execute the migration you have to stop the docbase(s) and the docbroker(s).

$DOCUMENTUM/dba/dm_shutdown_Docbase1
$DOCUMENTUM/dba/dm_stop_DocBroker
Execute the migration script

Once everything is stopped, you can execute the migration script:

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...

Skipping Host Name Change...

Changing Install Owner...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange.log
Finished changing Install Owner...Please check log file for more details/errors
Finished changing Install Owner...

Skipping Server Name Change...

Skipping Docbase Name Change...

Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

Have a look at the migration log:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange.log
Start: 2019-12-12 05:14:37.191
Changing Install Owner
=====================
InstallOwner: dmadmin
New InstallOwner: dmdocbase1
Changing InstallOwner for docbase: Docbase1
Retrieving server.ini path for docbase: Docbase1
Found path: /app/dctm/product/16.4/dba/config/Docbase1/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:Docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes for docbase: Docbase1
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange_Docbase1_DatabaseRestore.sql'
Processing _s table...
select r_object_id,object_name from dm_sysobject_s where object_name = 'dmadmin'
update dm_sysobject_s set object_name = 'dmdocbase1' where r_object_id = '0c01e24080000105'
select r_object_id,r_install_owner from dm_server_config_s where r_install_owner = 'dmadmin'
update dm_server_config_s set r_install_owner = 'dmdocbase1' where r_object_id = '3d01e24080000102'
select r_object_id,user_name from dm_user_s where user_name = 'dmadmin'
update dm_user_s set user_name = 'dmdocbase1' where r_object_id = '1101e24080000102'
select r_object_id,user_os_name from dm_user_s where user_os_name = 'dmadmin'
update dm_user_s set user_os_name = 'dmdocbase1' where r_object_id = '1101e24080000102'
...
update dm_workflow_r set r_last_performer = 'dmdocbase1' where r_last_performer = 'dmadmin'
update dm_workflow_s set r_creator_name = 'dmdocbase1' where r_creator_name = 'dmadmin'
update dm_workflow_s set supervisor_name = 'dmdocbase1' where supervisor_name = 'dmadmin'
Successfully updated database values...
Committing all database operations...
Finished processing database changes for docbase: Docbase1

Processing server.ini changes for docbase: Docbase1
Backed up '/app/dctm/product/16.4/dba/config/Docbase1/server.ini' to '/app/dctm/product/16.4/dba/config/Docbase1/server.ini_install_dmadmin.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/Docbase1/server.ini
Updating acs.properties for docbase: Docbase1
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_install_dmadmin.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_Docbase1' to '/app/dctm/product/16.4/dba/dm_shutdown_Docbase1_install_dmadmin.backup'
Updated shutdown script: /app/dctm/product/16.4/dba/dm_shutdown_Docbase1
...
Processing Services File Changes...
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml_install_dmadmin.backup'
Updated web.xml: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/bpm.war/WEB-INF/web.xml doesn't exist
No need to update method server startup script: /app/dctm/product/16.4/wildfly9.0.1/server/startMethodServer.sh
Finished processing File changes...

Finished changing Install Owner...
End: 2019-12-12 05:14:39.815
Change permissions

Change the permissions of all folders and files under the DOCUMENTUM_HOME directory. If your content storage directories are not located under the DOCUMENTUM_HOME directory, change the permissions on each content storage directory as well.

[root@vmtestdctm01 ~]$ chown -R dmdocbase1 /app/dctm
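If, for example, your filestores are on a separate mount, repeat the command there; the path below is purely hypothetical:

[root@vmtestdctm01 ~]$ chown -R dmdocbase1 /data/dctm/content_storage_01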
Start Docbase(s) and Docbroker(s)

Start the Docbroker and the docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_Docbase1
Post Migration

Check the docbase logs:

...
2019-12-12T05:30:09.982774      13301[13301]    0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 13570, session 0101e24080000456) is started sucessfully."
2019-12-12T05:30:20.255917      13569[13569]    0101e24080000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (vmtestdctm01) with port (1490).  Information: (Config(Docbase1), Proximity(1), Status(Open), Dormancy Status(Active))."
Wed Dec 12 05:30:32 2019 [INFORMATION] [AGENTEXEC 13628] Detected during program initialization: Version: 16.4.0000.0248  Linux64
Wed Dec 12 05:30:35 2019 [INFORMATION] [AGENTEXEC 13628] Detected during program initialization: Agent Exec connected to server Docbase1:  [DM_SESSION_I_SESSION_START]info:  "Session 0101e24080000500 started for user dmdocbase1."

Try to connect to Docbase1 with the old installation owner through idql:

...
Connecting to Server using docbase Docbase1
Could not connect
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user dmadmin with docbase Docbase1."

This is the expected behavior: the old installation owner is no longer active.

The environment I used for this test is a very simple one (with only one docbase, no HA, no full-text, and so on) created only for this purpose. It worked fine on my environment, but be careful if you have a more complex one!

The article Documentum – MigrationUtil – 5 – Change Installation Owner appeared first on the dbi services blog.

SQL Server – Collecting last backup information in an AlwaysOn environment

Yann Neuhaus - Thu, 2019-12-12 09:02
Introduction

Sometimes you face interesting challenges with unusual environments. One of my customers needed an automated and flexible backup solution. Put like that, it does not sound very complex, you will say. But if I mention that some databases were 60TB big, with more than 30 filegroups and around 600 database data files each, and moreover synchronized in an AlwaysOn availability group, it is not the same story, and you can easily imagine that a standard backup strategy would not be viable. Therefore I worked on implementing a solution using partial full, partial differential and read-only filegroup backups to minimize the time needed.
Well, this post does not explain the whole solution, but only a way to collect the last backup information for my databases, especially the ones in an AlwaysOn availability group whose filegroup states changed.
If you have already worked with partial backups and read-only filegroup backups, you know that the backup sequence is very important; if you haven't, you will quickly notice it when you need to restore, and you can easily understand why this last backup information is crucial. As the backups always have to run on the primary replica, you have to collect the information from all replicas in case a failover occurred and the primary changed, to ensure that you execute the right backups at the right moment and do not make unnecessary backups (remember the data volumes).
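For readers unfamiliar with these backup types, here is a minimal sketch of the three commands involved, run through sqlcmd; the instance, database, filegroup and paths are placeholders:

# Partial full: read-write filegroups only
sqlcmd -S "myserver" -E -Q "BACKUP DATABASE MyBigDb READ_WRITE_FILEGROUPS TO DISK = N'X:\bak\MyBigDb_partial_full.bak'"
# Partial differential on the same read-write filegroups
sqlcmd -S "myserver" -E -Q "BACKUP DATABASE MyBigDb READ_WRITE_FILEGROUPS TO DISK = N'X:\bak\MyBigDb_partial_diff.bak' WITH DIFFERENTIAL"
# One-off backup of a read-only filegroup, redone only when its state changes
sqlcmd -S "myserver" -E -Q "BACKUP DATABASE MyBigDb FILEGROUP = 'FG_RO_2018' TO DISK = N'X:\bak\MyBigDb_fg_ro_2018.bak'"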

 

Explanation of the solution and code

Another thing to mention: because of security policies, it was forbidden to use linked servers, but fortunately xp_CmdShell was possible. I wanted each replica to work independently, and needed a way to query the remote replicas to collect the last backup information on each SQL Server instance involved. Because the backup history might be cleaned, I needed to store this information in local tables. I created 2 tables, one to store the last database backup information and the other to store the last read-only filegroup backup information. Additionally, I created 2 tables to temporarily collect the information coming from all replicas.
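Since everything hinges on xp_CmdShell being available, a quick smoke test on each replica is worth doing before going further. A minimal sketch from a shell, with a placeholder instance name:

# Placeholder instance name; -E uses Windows authentication
sqlcmd -S "myserver\myinstance" -E -Q "EXEC master..xp_cmdshell 'whoami'"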

Creation of the last backup information tables:

--########################################################
--###Backup generator - backup last date info temporary table
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_backuplastdt_databases_temp]') is not null
	drop table [dbo].[bakgen_backuplastdt_databases_temp]
*/
create table [dbo].[bakgen_backuplastdt_databases_temp] (
	ServerName sysname not null,
	SqlInstanceName sysname  not null,
	SqlServerName sysname  not null,
	ServiceBrokerGuid uniqueidentifier not null,
	DatabaseCreationDate datetime  not null,
	DatabaseName sysname  not null,
	BackupType char(1) not null,
	LastBackupDate datetime  not null,
	LastBackupSize numeric(20,0) not null,
	is_primary bit null,
	insertdate datetime  not null
)
GO
create unique clustered index idx_bakgen_backuplastdt_databases_temp on [dbo].[bakgen_backuplastdt_databases_temp](DatabaseCreationDate,DatabaseName,BackupType,ServerName,SqlInstanceName)



--########################################################
--###Backup generator - backup last date info
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_backuplastdt_databases]') is not null
	drop table [dbo].[bakgen_backuplastdt_databases]
*/
create table [dbo].[bakgen_backuplastdt_databases] (
	ServerName sysname  not null,
	SqlInstanceName sysname  not null,
	SqlServerName sysname  not null,
	ServiceBrokerGuid uniqueidentifier not null,
	DatabaseCreationDate datetime  not null,
	DatabaseName sysname  not null,
	BackupType char(1) not null,
	LastBackupDate datetime  not null,
	LastBackupSize numeric(20,0) not null,
	is_primary bit null,
	insertdate datetime  not null
)
GO
create unique clustered index idx_bakgen_backuplastdt_databases on [dbo].[bakgen_backuplastdt_databases](DatabaseCreationDate,DatabaseName,BackupType,ServerName,SqlInstanceName)

I finally decided to work with a stored procedure calling PowerShell scripts to remotely execute the queries on the replicas.
The stored procedure lists the existing replicas and collects the last database backup information, then the read-only filegroup backup information, building 2 different queries to execute locally on the server and store the data in the temp tables first. It builds similar queries, excluding the databases not involved in availability groups, and executes them on the remote replicas using xp_CmdShell running PowerShell scripts. The PowerShell scripts are dynamically created from the generated TSQL queries. They use one function of the well-known dbatools module, so you will have to install it first.
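For reference, dbatools can be installed from the PowerShell Gallery; a one-liner sketch, assuming internet access and an elevated prompt on each replica:

# Install dbatools for all users (run from an elevated prompt)
powershell -NoProfile -Command "Install-Module dbatools -Scope AllUsers -Force"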
You will notice in the logs that the generated scripts are nicely formatted, to make them easier to read and debug. But before executing your PowerShell script through xp_CmdShell, you need to apply some string formatting, like the 2 lines I added to avoid the execution failing:

set @PSCmd = replace(replace(@PSCmd, nchar(13), N''), nchar(10), N' ')
set @PSCmd = replace(@PSCmd, '>', N'^>')

Do not forget to escape some characters, otherwise the execution will fail; in my case, omitting to escape the '>' sign raised an “Access is denied” message in the output of the xp_CmdShell execution.

After that, the code compares what has been collected in the temp tables with the final tables and updates the information if needed.
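As an illustration of that reconciliation step only (the real code is in the procedure below and may differ), here is a hedged sketch of how such a temp-to-final comparison could look with a MERGE run through sqlcmd; the server and database names are placeholders:

# Upsert the freshest backup info from the temp table into the final table
sqlcmd -S "myserver" -E -d "YourDatabaseName" -Q "
merge dbo.bakgen_backuplastdt_databases as t
using dbo.bakgen_backuplastdt_databases_temp as s
   on t.DatabaseCreationDate = s.DatabaseCreationDate
  and t.DatabaseName = s.DatabaseName
  and t.BackupType = s.BackupType
  and t.ServerName = s.ServerName
  and t.SqlInstanceName = s.SqlInstanceName
when matched and s.LastBackupDate > t.LastBackupDate then
  update set LastBackupDate = s.LastBackupDate,
             LastBackupSize = s.LastBackupSize,
             is_primary = s.is_primary,
             insertdate = s.insertdate
when not matched then
  insert (ServerName, SqlInstanceName, SqlServerName, ServiceBrokerGuid,
          DatabaseCreationDate, DatabaseName, BackupType, LastBackupDate,
          LastBackupSize, is_primary, insertdate)
  values (s.ServerName, s.SqlInstanceName, s.SqlServerName, s.ServiceBrokerGuid,
          s.DatabaseCreationDate, s.DatabaseName, s.BackupType, s.LastBackupDate,
          s.LastBackupSize, s.is_primary, s.insertdate);"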

Here is the complete code of the stored procedure:

use [<YourDatabaseName>]
if OBJECT_ID('dbo.bakgen_p_getbakinfo') is not null
            drop procedure dbo.bakgen_p_getbakinfo 
go

CREATE PROCEDURE dbo.bakgen_p_getbakinfo 
AS
/************************************
*   dbi-services SA, Switzerland    *
*   http://www.dbi-services.com        *
*************************************
    Group/Privileges..: DBA
    Script Name......:       bakgen_p_getbakinfo.sql
    Author...........:          Christophe Cosme
    Date.............:           2019-09-20
    Version..........:          SQL Server 2016 / 2017
    Description......:        Get the backup information locally but also on the replica involved

    Input parameters.: 

            Output parameter: 
                                               
    Called by........:         Stored Procedure : [dbo].[bakgen_p_bakexe]
************************************************************************************************
    Historical
    Date        Version    Who    Whats                  Comments
    ----------  -------    ---    --------    -----------------------------------------------------
    2019-09-30  1.0        CHC    Creation
************************************************************************************************/ 
BEGIN 

BEGIN TRY
            
            set nocount on

            declare 
    @ErrorMessage  NVARCHAR(4000), 
    @ErrorSeverity INT, 
    @ErrorState    INT;

            declare @ModuleName sysname,
                                    @ProcName sysname,
                                    @InfoLog nvarchar(max),
                                    @Execute char(1)
                        
            set @ModuleName = 'BakGen'
            set @ProcName = OBJECT_NAME(@@PROCID)
            set @Execute = 'A'

            set @InfoLog = 'Retrieve backup information'
            execute dbo.bakgen_p_log       
                        @ModuleName = @ModuleName,
                        @ProcedureName = @ProcName,
                        @ExecuteMode = @Execute,
                        @LogType = 'INFO',
                        @DatabaseName = null,
                        @Information = @InfoLog,
                        @Script = null


            --###variable to store error message
            declare @errmsg varchar(4000)
            --###variable with the current datetime
            declare @cdt datetime = getdate()

            --###variabler to store the sql and powershell commands to execute
            declare @sqllocalDB nvarchar(4000),
                                    @sqllocalFG nvarchar(4000),
                                    @sqlremoteDB nvarchar(4000),
                                    @sqlremoteFG nvarchar(4000),
                                    @PSCmd nvarchar(4000)

            --###variable to store the local SQL server name
            declare @LocalSqlServerName sysname
            --###variable to store the list of replicas
            declare @TAgReplica table (AgReplicaName sysname)
            --###variable for the cursors
            declare @AgReplicaName sysname

            --###set the local SQL Server name
            set @LocalSqlServerName = lower(convert(sysname,serverproperty('ServerName')))
                        

            --############################################################################
            --### check if tables exist
            --############################################################################
            if object_id('[dbo].[bakgen_backuplastdt_databases_temp]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_databases_temp]' 
                        raiserror (@errmsg,11,1);
            end
            if object_id('[dbo].[bakgen_backuplastdt_fgreadonly_temp]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_fgreadonly_temp]' 
                        raiserror (@errmsg,11,1);                      
            end

            if object_id('[dbo].[bakgen_backuplastdt_databases]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_databases]' 
                        raiserror (@errmsg,11,1);
            end
            if object_id('[dbo].[bakgen_backuplastdt_fgreadonly]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_fgreadonly]' 
                        raiserror (@errmsg,11,1);                      
            end


            
            --############################################################################
            --### select the replicas involved adding first the local server
            --############################################################################
            insert into @TAgReplica (AgReplicaName ) select @LocalSqlServerName

            --###check if alwayson feature is activated
            if (serverproperty('IsHadrEnabled') = 1)
            begin
                        insert into @TAgReplica (AgReplicaName )
                        select lower(agr.replica_server_name) from sys.availability_replicas agr
                                    where agr.replica_server_name <> @LocalSqlServerName
            end


            --############################################################################
            --### construct the SQL command to execute on the local SQL Server
            --############################################################################
            set @sqllocalDB = ''
            set @sqllocalDB +='

            declare @Tbi table (
                        ServerName sysname,
                        SqlInstanceName sysname,
                        SqlServerName sysname,
                        ServiceBrokerGuid uniqueidentifier,
                        DatabaseCreationDate datetime,
                        DatabaseName sysname,
                        BackupType char(1),
                        LastBackupDate datetime,
                        is_primary bit null,
                        insertdate datetime       
            )


            insert into @Tbi (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [is_primary],
                        [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        max(bs.backup_finish_date) as LastBackupDate,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary,
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                        inner join sys.databases db on db.name = bs.database_name
                        where bs.type in (''D'',''I'',''P'',''Q'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (-1,0,1)
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name,
                                    bs.type, 
                                    sys.fn_hadr_is_primary_replica(bs.database_name)

            insert into [dbo].[bakgen_backuplastdt_databases_temp] (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [LastBackupSize],
                        [is_primary],
                        [insertdate])
            select  
                        t.[ServerName],
                        t.[SqlInstanceName],
                        t.[SqlServerName],
                        t.[ServiceBrokerGuid],
                        t.[DatabaseCreationDate],
                        t.[DatabaseName],
                        t.[BackupType],
                        t.[LastBackupDate],
                        bs.[backup_size],
                        t.[is_primary],
                        t.[insertdate]
            from @Tbi t
                        inner join msdb.dbo.backupset bs on 
                                    bs.backup_finish_date = t.LastBackupDate  
                                    and bs.database_name collate database_default = t.DatabaseName collate database_default
                                    and bs.type collate database_default = t.BackupType collate database_default
'




            set @sqllocalFG = ''
            set @sqllocalFG +='

            insert into [dbo].[bakgen_backuplastdt_fgreadonly_temp]
           ([ServerName],
           [SqlInstanceName],
           [SqlServerName],
                           [ServiceBrokerGuid],
                           [DatabaseCreationDate],
           [DatabaseName],
           [BackupType],
           [filegroup_name],
           [file_logicalname],
           [filegroup_guid],
           [file_guid],
           [LastBackupDate],
                           [LastBackupReadOnlyLsn],
           [is_primary],
                           [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        bf.filegroup_name,
                        bf.logical_name as file_logicalname,
                        bf.filegroup_guid,
                        bf.file_guid,
                        max(bs.backup_finish_date) as LastBackupDate,
                        max(bf.read_only_lsn) as LastBackupReadOnlyLsn,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                                    inner join msdb.dbo.backupfile bf on  bf.backup_set_id = bs.backup_set_id
                                    inner join sys.databases db on db.name = bs.database_name 
                        where 
                                    bs.backup_finish_date >= db.create_date 
                                    and bs.type in (''F'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (-1,0,1)
                                    and bf.is_present = 1
                                    and bf.is_readonly = 1
                                    and bf.file_type = ''D''
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name, 
                                    bs.type,
                                    bf.filegroup_name,
                                    bf.logical_name, 
                                    bf.filegroup_guid,
                                    bf.file_guid,
                                    sys.fn_hadr_is_primary_replica(bs.database_name)
'


            
            --############################################################################
            --### construct the SQL command to execute on the remote SQL Server
            --############################################################################
            set @sqlremoteDB = ''
            set @sqlremoteDB +='

            declare @Tbi table (
                        ServerName sysname,
                        SqlInstanceName sysname,
                        SqlServerName sysname,
                        ServiceBrokerGuid uniqueidentifier,
                        DatabaseCreationDate datetime, 
                        DatabaseName sysname,
                        BackupType char(1),
                        LastBackupDate datetime,
                        is_primary bit null,
                        insertdate datetime       
            )

            insert into @Tbi (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [is_primary],
                        [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        max(bs.backup_finish_date) as LastBackupDate,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''     
            from msdb.dbo.backupset bs
                        inner join sys.databases db on db.name = bs.database_name 
                        where bs.type in (''D'',''I'',''P'',''Q'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (0,1)
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name,
                                    bs.type,
                                    sys.fn_hadr_is_primary_replica(bs.database_name) 

            select  
                        t.[ServerName],
                        t.[SqlInstanceName],
                        t.[SqlServerName],
                        t.[ServiceBrokerGuid],
                        t.[DatabaseCreationDate],
                        t.[DatabaseName],
                        t.[BackupType],
                        t.[LastBackupDate],
                        bs.[backup_size],
                        t.[is_primary],
                        t.[insertdate]
            from @Tbi t
                        inner join msdb.dbo.backupset bs on 
                                    bs.backup_finish_date = t.LastBackupDate 
                                    and bs.database_name collate database_default = t.DatabaseName collate database_default
                                    and bs.type collate database_default = t.BackupType collate database_default

'

            set @sqlremoteFG = ''
            set @sqlremoteFG +='

            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        bf.filegroup_name,
                        bf.logical_name as file_logicalname,
                        bf.filegroup_guid,
                        bf.file_guid,
                        max(bs.backup_finish_date) as LastBackupDate,
                        max(bf.read_only_lsn) as LastReadOnlyLsn,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                                    inner join msdb.dbo.backupfile bf on  bf.backup_set_id = bs.backup_set_id
                                    inner join sys.databases db on db.name = bs.database_name 
                        where 
                                    bs.backup_finish_date >= db.create_date 
                                    and bs.type in (''F'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (0,1)
                                    and bf.is_present = 1
                                    and bf.is_readonly = 1
                                    and bf.file_type = ''D''
                        group by
                                    db.service_broker_guid,
                                    db.create_date, 
                                    bs.database_name, 
                                    bs.type,
                                    bf.filegroup_name,
                                    bf.logical_name, 
                                    bf.filegroup_guid,
                                    bf.file_guid,
                                    sys.fn_hadr_is_primary_replica(bs.database_name) 
'

            --############################################################################
            --### delete all records in the backup info tables
            --############################################################################
            delete from [dbo].[bakgen_backuplastdt_databases_temp]
            delete from [dbo].[bakgen_backuplastdt_fgreadonly_temp]

            --############################################################################
            --### loop for all replicas involved
            --############################################################################
            declare cur_replica cursor
            static local forward_only
            for 
                        select AgReplicaName
                        from @TAgReplica
                 
            open cur_replica
            fetch next from cur_replica into 
                        @AgReplicaName                    


            while @@fetch_status = 0
            begin 
                                    
                        if @LocalSqlServerName = @AgReplicaName
                        begin 

                                    set @InfoLog = 'Get database backup information on local SQL Server instance ' + QUOTENAME(@AgReplicaName)
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                                @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @sqllocalDB
                                    execute sp_executesql @sqllocalDB

                                    set @InfoLog = 'Get read-only filegroup backup information on local SQL Server instance ' + QUOTENAME(@AgReplicaName)
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                               @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @sqllocalFG
                                    execute sp_executesql @sqllocalFG

                        end 
                        else
                        begin
                                    --############################################################################
                                    --### construct the PowerShell command to execute on the remote SQL Server
                                    --############################################################################
                                    set @PSCmd  = ''
                                    set @PSCmd += 'PowerShell.exe '
                                    set @PSCmd += '-Command "'
                                    set @PSCmd += '$qrydb = \"' + @sqlremoteDB + '\"; ' 
                                    set @PSCmd += '$qryfg = \"' + @sqlremoteFG + '\"; ' 
                                    set @PSCmd += '$rdb = Invoke-DbaQuery -SqlInstance ' + @AgReplicaName + ' -Query $qrydb; '
                                    set @PSCmd += '$rfg = Invoke-DbaQuery -SqlInstance ' + @AgReplicaName + ' -Query $qryfg; '
                                    set @PSCmd += 'if ($rdb -ne $null) { '
                                    set @PSCmd += 'Write-DbaDbTableData -SqlInstance ' + @LocalSqlServerName + ' -Database ' + db_name() + ' -Schema dbo -Table bakgen_backuplastdt_databases_temp -InputObject $rdb;'
                                    set @PSCmd += '} '
                                    set @PSCmd += 'if ($rfg -ne $null) { '
                                    set @PSCmd += 'Write-DbaDbTableData -SqlInstance ' + @LocalSqlServerName + ' -Database ' + db_name() + ' -Schema dbo -Table bakgen_backuplastdt_fgreadonly_temp -InputObject $rfg;'
                                    set @PSCmd += '} '
                                    set @PSCmd += '"'

                                    set @InfoLog = 'Get backup information on replica SQL Server instance ' + QUOTENAME(@AgReplicaName) + ' executing master..xp_cmdshell PowerShell script'
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                               @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @PSCmd

                                    --###remove CRLF for xp_cmdshell and PowerShell 
                                    set @PSCmd = replace(replace(@PSCmd, nchar(13), N''), nchar(10), N' ')
                                    set @PSCmd = replace(@PSCmd, '>', N'^>')
                                    --###Execute the powershell command on the replica and store the result in the temporary tables
                                    exec master..xp_cmdshell @PSCmd
                        end
                        
                        fetch next from cur_replica into 
                                    @AgReplicaName                    


            end
            close cur_replica
            deallocate cur_replica


            --############################################################################
            --### Update and insert backup information in final tables
            --############################################################################

            --###Update first the database creation date with the local ones
            Update t
                        set t.DatabaseCreationDate = db.create_date
            from [dbo].[bakgen_backuplastdt_databases_temp] t
                        inner join sys.databases db 
                                    on db.name collate database_default = t.DatabaseName collate database_default 
                                               and db.service_broker_guid = t.ServiceBrokerGuid

            Update t
                        set t.DatabaseCreationDate = db.create_date
            from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t
                        inner join sys.databases db 
                                    on db.name collate database_default = t.DatabaseName collate database_default 
                                               and db.service_broker_guid = t.ServiceBrokerGuid




            BEGIN TRY

                        begin transaction 

                        delete f
                                    from [dbo].[bakgen_backuplastdt_databases_temp] t
                                               inner join [dbo].[bakgen_backuplastdt_databases] f 
                                                           on f.DatabaseCreationDate = t.DatabaseCreationDate
                                                                       and f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.ServerName = t.ServerName 
                                                                       and t.SqlInstanceName = f.SqlInstanceName
                                               where f.LastBackupDate < t.LastBackupDate

                        Insert into [dbo].[bakgen_backuplastdt_databases] (
                                    ServerName,
                                    SqlInstanceName,
                                    SqlServerName,
                                    DatabaseCreationDate,
                                    DatabaseName,
                                    BackupType,
                                    LastBackupDate,
                                    LastBackupSize,
                                    is_primary,
                                    insertdate 
                        )
                        select 
                                    t.ServerName,
                                    t.SqlInstanceName,
                                    t.SqlServerName,
                                    t.DatabaseCreationDate,
                                    t.DatabaseName,
                                    t.BackupType,
                                    t.LastBackupDate,
                                    t.LastBackupSize,
                                    t.is_primary,
                                    t.insertdate 
                                    from [dbo].[bakgen_backuplastdt_databases_temp] t
                                               where not exists (select 1 from [dbo].[bakgen_backuplastdt_databases] f 
                                                                                                                      where f.DatabaseName = t.DatabaseName 
                                                                                                                                  and f.BackupType = t.BackupType 
                                                                                                                                  and f.ServerName = t.ServerName 
                                                                                                                                  and t.SqlInstanceName = f.SqlInstanceName)
                                    
                        
                        commit

                        begin transaction

                        delete f
                                    from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t
                                               inner join [dbo].[bakgen_backuplastdt_fgreadonly] f 
                                                           on f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.filegroup_name = t.filegroup_name
                                                                       and f.ServerName = t.ServerName 
                                                                       and f.SqlInstanceName = t.SqlInstanceName
                                               where f.LastBackupDate < t.LastBackupDate


                        Insert into [dbo].[bakgen_backuplastdt_fgreadonly] (
                                    ServerName,     
                                    SqlInstanceName,
                                    SqlServerName,           
                                    DatabaseCreationDate,
                                    DatabaseName,            
                                    BackupType,
                                    filegroup_name,
                                    file_logicalname,          
                                    filegroup_guid, 
                                    file_guid,          
                                    LastBackupDate,          
                                    LastBackupReadOnlyLsn,
                                    is_primary,
                                    insertdate                     
                        )
                        select 
                                    t.ServerName,   
                                    t.SqlInstanceName,
                                    t.SqlServerName,
                                    t.DatabaseCreationDate,
                                    t.DatabaseName,          
                                    t.BackupType,
                                    t.filegroup_name,
                                    t.file_logicalname,        
                                    t.filegroup_guid,           
                                    t.file_guid,        
                                    t.LastBackupDate,        
                                    t.LastBackupReadOnlyLsn,
                                    t.is_primary,
                                    t.insertdate                   
                        from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t                                        
                                    where not exists (
                                               select 1 from  [dbo].[bakgen_backuplastdt_fgreadonly] f 
                                               where f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.filegroup_name = t.filegroup_name
                                                                       and f.ServerName = t.ServerName 
                                                                       and t.SqlInstanceName = f.SqlInstanceName)

                        
                        commit
            END TRY
            BEGIN CATCH
                SELECT 
                                    @ErrorMessage = ERROR_MESSAGE(), 
                                    @ErrorSeverity = ERROR_SEVERITY(), 
                                    @ErrorState = ERROR_STATE();

                        IF @@TRANCOUNT > 0
                                    ROLLBACK
                        
                        raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);

            END CATCH



RETURN;

END TRY
BEGIN CATCH
    SELECT 
        @ErrorMessage = ERROR_MESSAGE(), 
        @ErrorSeverity = ERROR_SEVERITY(), 
        @ErrorState = ERROR_STATE();

            set @InfoLog = '@ErrorState = ' + convert(nvarchar, @ErrorState) + '/@ErrorSeverity = ' + convert(nvarchar, @ErrorSeverity) + '/@ErrorMessage = ' + @ErrorMessage
            execute dbo.bakgen_p_log       
                        @ModuleName = @ModuleName,
                        @ProcedureName = @ProcName,
                        @ExecuteMode = @Execute,
                        @LogType = 'ERROR',
                        @DatabaseName = null,
                        @Information = @InfoLog,
                        @Script = null

    raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

RETURN
END
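
Once the procedure and its companion objects (the tables above and the log objects below) are created, a minimal smoke test could look like the following. Note that xp_cmdshell is disabled by default, so a sysadmin has to enable it first; this is an assumption of my sketch, adapt it to your security policy:

--###one-time setup (sysadmin): xp_cmdshell is disabled by default
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'xp_cmdshell', 1
reconfigure
go

--###collect the backup information locally and on the replicas
exec dbo.bakgen_p_getbakinfo
go

--###check the consolidated result
select * from [dbo].[bakgen_backuplastdt_databases] order by DatabaseName, BackupType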
Other Objects needed

As mentioned above, I used the dbatools Write-DbaDbTableData function (together with Invoke-DbaQuery), so you need to install the module before being able to run the above stored procedure, typically with Install-Module -Name dbatools from an elevated PowerShell session on every server involved.

I also share the 2 other objects used in the above stored procedure, but of course you can adapt the code to your needs.

Creation of the log table:

--########################################################
--###Backup generator - logs
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_logs]') is not null
	drop table [dbo].[bakgen_logs]
*/
create table [dbo].[bakgen_logs] (
	id bigint identity(1,1) not null,
	LogDate datetime,
	SqlServerName sysname,
	ModuleName sysname,
	ProcedureName sysname,
	ExecuteMode char(1),
	LogType nvarchar(50),
	DatabaseName sysname null,
	Information nvarchar(max) null,
	Scripts nvarchar(max) null,
CONSTRAINT [PK_bakgen_logs] PRIMARY KEY CLUSTERED 
(
	[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
GO

Creation of the stored procedure to write the logs:

use [<YourDatabaseName>]
if OBJECT_ID('dbo.bakgen_p_log') is not null
	drop procedure dbo.bakgen_p_log 
go

CREATE PROCEDURE dbo.bakgen_p_log 
(
	@ModuleName sysname,
	@ProcedureName sysname,
	@ExecuteMode char(1),
	@LogType nvarchar(50),
	@DatabaseName sysname = null,
	@Information nvarchar(max) =  null,
	@Script nvarchar(max)  = null
)

AS
/************************************
*   dbi-services SA, Switzerland    *
*   http://www.dbi-services.com        *
*************************************
    Group/Privileges..: DBA
    Script Name......:	bakgen_p_log.sql
    Author...........:	Christophe Cosme
    Date.............:	2019-09-20
    Version..........:	SQL Server 2016 / 2017
    Description......:	write information to the log table to keep trace of the step executed

    Input parameters.: 

	Output parameter: 
				
************************************************************************************************
    Historical
    Date        Version    Who    Whats		Comments
    ----------  -------    ---    --------	-----------------------------------------------------
    2019-10-14  1.0        CHC    Creation
************************************************************************************************/ 
BEGIN 

BEGIN TRY
	
	--###variable to store error message
	declare @errmsg varchar(4000)

	if OBJECT_ID('[dbo].[bakgen_logs]') is null
	begin
		set @errmsg = 'bakgen_p_log : table not found - be sure the table exists'
		set @errmsg += '	table name = [dbo].[bakgen_logs]' 
		raiserror (@errmsg,11,1);
	end		

	insert into [dbo].[bakgen_logs] (
		LogDate,
		SqlServerName,
		ModuleName,
		ProcedureName,
		ExecuteMode,
		LogType,
		DatabaseName,
		Information,
		Scripts
		)
	values(
		getdate(),
		convert(sysname,SERVERPROPERTY('servername')),
		@ModuleName,
		@ProcedureName,
		@ExecuteMode,
		@LogType,
		@DatabaseName,
		@Information,
		@Script
		)


RETURN;

END TRY
BEGIN CATCH
	declare 
    @ErrorMessage  NVARCHAR(4000), 
    @ErrorSeverity INT, 
    @ErrorState    INT;
    SELECT 
        @ErrorMessage = ERROR_MESSAGE(), 
        @ErrorSeverity = ERROR_SEVERITY(), 
        @ErrorState = ERROR_STATE();
 
    -- return the error inside the CATCH block
    raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

RETURN
END
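
For completeness, here is what a manual call to the logging procedure could look like; the parameter values are only examples:

exec dbo.bakgen_p_log
	@ModuleName = 'BakGen',
	@ProcedureName = 'manual_test',
	@ExecuteMode = 'A',
	@LogType = 'INFO',
	@DatabaseName = null,
	@Information = 'Manual test of the logging procedure',
	@Script = null

select top 10 * from [dbo].[bakgen_logs] order by id desc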
Conclusion

Triggering PowerShell from a stored procedure did the trick for my special case and is very practical. But finding the right syntax to make the script run through xp_CmdShell was not so trivial; I admit I spent some time figuring out what was causing the issue.
But I definitely enjoyed the solution for retrieving information outside the local SQL Server instance.

The article SQL Server – Collecting last backup information in an AlwaysOn environment appeared first on Blog dbi services.

Design Thinking in IT Development life cycle

OracleApps Epicenter - Thu, 2019-12-12 04:07
Here is a question from a reader: does integrating design thinking help in the day-to-day operations of IT and Service Management? Are there any frameworks or matured models of design thinking embedded into the development cycle? Design thinking at its very core is about understanding your customer’s needs better than they can ever articulate it […]
Categories: APPS Blogs

Fixing 19c runcluvfy.sh – PRCZ-2004 : File “/usr/local/bin/sudo” was not found

Michael Dinh - Wed, 2019-12-11 20:43

Running runcluvfy.sh using -method sudo -user oracle failed, and I am not sure why the path for sudo is hard-coded.

[oracle@ol7-121-rac1 ~]$ /u01/app/19.3.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
> -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/19.3.0.0/grid \
> -dest_version 19.0.0.0.0 -method sudo -user oracle
Enter "SUDO" password:

PRCZ-2004 : File "/usr/local/bin/sudo" was not found
PRKC-1002 : Not all the submitted commands completed successfully.

Pre-check for cluster services setup was unsuccessful on all the nodes.

CVU operation performed:      stage -pre crsinst
Date:                         Dec 12, 2019 2:08:59 AM
CVU home:                     /u01/app/19.3.0.0/grid/
User:                         oracle
[oracle@ol7-121-rac1 ~]$ which sudo
/bin/sudo
[oracle@ol7-121-rac1 ~]$

To fix the issue, create a symlink for /usr/local/bin/sudo on ALL NODES.

[root@ol7-121-rac1 ~]# ln -s /bin/sudo /usr/local/bin/sudo
[root@ol7-121-rac1 ~]# which sudo
/usr/local/bin/sudo
[root@ol7-121-rac1 ~]# ls -l /usr/local/bin/sudo
lrwxrwxrwx. 1 root root 9 Dec 12 02:15 /usr/local/bin/sudo -> /bin/sudo
[root@ol7-121-rac1 ~]#

[root@ol7-121-rac2 ~]# ln -s /bin/sudo /usr/local/bin/sudo
[root@ol7-121-rac2 ~]# which sudo
/usr/local/bin/sudo
[root@ol7-121-rac2 ~]# ls -l /usr/local/bin/sudo
lrwxrwxrwx. 1 root root 9 Dec 12 02:17 /usr/local/bin/sudo -> /bin/sudo
[root@ol7-121-rac2 ~]#

runcluvfy using -method sudo -user oracle

[oracle@ol7-121-rac1 ~]$ /u01/app/19.3.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
> -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/19.3.0.0/grid \
> -dest_version 19.0.0.0.0 -method sudo -user oracle
Enter "SUDO" password:


Pre-check for cluster services setup was successful.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying ACFS Driver Checks ...FAILED
PRVG-6096 : Oracle ACFS driver is not supported on the current operating system
version for Oracle Clusterware release version "19.0.0.0.0".


CVU operation performed:      stage -pre crsinst
Date:                         Dec 12, 2019 2:17:26 AM
CVU home:                     /u01/app/19.3.0.0/grid/
User:                         oracle
[oracle@ol7-121-rac1 ~]$

What happens when not using -method sudo -user oracle?

[oracle@ol7-121-rac1 ~]$ /u01/app/19.3.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
> -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/19.3.0.0/grid \
> -dest_version 19.0.0.0.0

Pre-check for cluster services setup was successful.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying ACFS Driver Checks ...FAILED
PRVG-6096 : Oracle ACFS driver is not supported on the current operating system
version for Oracle Clusterware release version "19.0.0.0.0".

Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.


CVU operation performed:      stage -pre crsinst
Date:                         Dec 12, 2019 2:26:29 AM
CVU home:                     /u01/app/19.3.0.0/grid/
User:                         oracle
[oracle@ol7-121-rac1 ~]$

The choice is yours.

Upgrade your Power BI Report Server

Yann Neuhaus - Wed, 2019-12-11 10:08
Introduction

Even if upgrading your Power BI Report Server is straightforward, I have been asked so many times where to find the installation files and how to run the upgrade that I thought a blog post was worth it.

Before you start
Before upgrading your Power BI Report Server, it is recommended to perform some backup steps.
  • Back up the encryption keys
  • Back up the report server databases (a minimal example follows this list)
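
For the report server databases, a plain native backup does the job. A minimal sketch, assuming the default database names and a hypothetical target path:

BACKUP DATABASE [ReportServer] TO DISK = N'D:\Backup\ReportServer.bak' WITH INIT, COMPRESSION;
BACKUP DATABASE [ReportServerTempDB] TO DISK = N'D:\Backup\ReportServerTempDB.bak' WITH INIT, COMPRESSION;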

Back up the configuration files (in the default installation location folders)

C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer

  • Rsreportserver.config
  • Rssvrpolicy.config
  • Web.config

C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer\bin

  • Reportingservicesservice.exe.config

C:\Program Files\Microsoft Power BI Report Server\PBIRS\RSHostingService

  • config.json
  • RSHostingService.exe.config
Download the latest version

Most of the time, when searching for the latest Power BI Report Server version to download, you will land on this site: https://powerbi.microsoft.com/en-us/report-server/ As I was asked several times, I find it useful to mention that you have to select “Advanced download option” to reach the download site.

Notice the Power BI version available, then click Download.

Select the install files you want. I always advise selecting both the server and the desktop version at the same time, and distributing the desktop version to your report developers, to avoid surprises later when publishing reports on the portal.

Upgrade the report server

Execute the PowerBIReportServer.exe you downloaded. Click on “Upgrade Power BI Report Server”. Accept the license terms and click Upgrade.

When the upgrade is completed, you can close the application.

Check if the version has been installed correctly either using the Power BI Report Server configuration manager…

…or directly within the web portal.

Be aware that if you do not see the new version installed, restart the upgrade process; you will probably be requested to restart your computer first.

The article Upgrade your Power BI Report Server appeared first on Blog dbi services.

Baltimore Gas & Electric and Oracle Reshape Peak Pricing Programs

Oracle Press Releases - Wed, 2019-12-11 07:00
Press Release
Baltimore Gas & Electric and Oracle Reshape Peak Pricing Programs

Redwood Shores, Calif.—Dec 11, 2019

Baltimore Gas & Electric (BGE) has launched a digital experience pilot for thousands of Baltimore residents who pay on and off peak rates for electricity. BGE is using Oracle Utilities Opower Behavioral Load Shaping Cloud Service to engage customers with a proactive, personalized experience designed to help them save on their utility bills. The new service encourages customers to shift their biggest everyday energy loads, such as running energy-intensive appliances and electric vehicle charging, to off peak times. With these tips, BGE customers can save money while helping reduce daily peak energy demand and supporting a cleaner, healthier grid.

“We know on peak and off peak rates can seem complex, and we have a responsibility to offer excellent service to customers who choose them,” commented Mark Case, VP of regulatory policy and strategy at BGE. “With this new service from Opower, we can deliver a better experience for these customers by helping them shift their energy load for improved power affordability and reliability, all while reducing emissions.”

Learn more about the new Opower Behavioral Load Shaping Service here.

Peak pricing programs have not traditionally provided the ongoing, personalized outreach customers need to help them shift their energy use and benefit from lower off-peak rates. Years of public evaluation data show programs that offered some outreach only left customers wanting more. With machine learning, user experience design, and customer engagement automation, Opower is reshaping this equation.

With Opower, BGE is providing residents new insight into how small behavior changes can create significant bill savings. Enrolled customers began receiving weekly digital communications that help them understand how their on and off peak rates work. Each customer receives continually evolving content like week-over-week spending comparisons, personalized information about their on and off peak spending, and adaptive, intelligent recommendations for shifting their largest energy loads in order to save money.

“On and off peak rates are nothing new—our industry has been implementing them for decades. Program evaluators have found again and again that customers with peak pricing are eager for better insights into their energy usage and their bills,” noted Dr. Ahmad Faruqui, principal and energy economist with The Brattle Group. “What’s new and different is the way in which enabling technologies boost customer awareness and price responsiveness. BGE and Opower are putting those learnings into practice and employing a smart experimental design that will expand our industry’s body of knowledge.”

BGE and Opower are running the program as a randomized control trial in order to yield novel, statistically significant peak pricing pilot results. Throughout the trial, BGE and Opower will be isolating and measuring the impact of the customer experience itself—discretely from the peak price signal—on bill savings, customer satisfaction, peak demand, and adoption of BGE programs and products that can help customers save even more. The trial started in Summer 2019. 

Several additional utilities in the U.S. are running the Opower Behavioral Load Shaping service this year. This is the fourth new product released by Opower recently, in addition to hundreds of new customer engagement features for utilities and their customers. Opower is the world’s most widely deployed utility customer engagement platform, providing energy data analytics on over two trillion meter reads and powering the utility customer experience for more than 60 million households.

Contact Info
Kristin Reeves
Oracle Corporation
+1 925 787 6744
kris.reeves@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1 925 787 6744

Wendy Wang

  • +1 979 216 8157

Microsoft's Visual Studio Code on Linux

Dietrich Schroff - Tue, 2019-12-10 14:04
Last weekend I was wondering what kind of IDE I could use for writing some small programs with JavaScript. My first idea was Eclipse, but a friend mentioned Microsoft's Visual Studio Code:
So I opened https://code.visualstudio.com/ and got
So I downloaded the .deb and, after a dpkg -i of that file, I was able to run Visual Studio Code on my Linux machine:
schroff@zerberus:~$ code 
The startup was amazingly fast - less than a second.

Within Visual Studio Code it is very easy to install some extensions:

Running a small JavaScript program is very easy. I just entered these lines and, without any further configuration, running the program or debugging was no problem:


Updating parameter files from REST

DBASolved - Tue, 2019-12-10 10:12

One of the most important and time-consuming things to do with Oracle GoldenGate is to build parameter files for the GoldenGate processes.  In the past, this required you to access GGSCI and run commands like: GGSCI> edit params <process group> After which, you then had to bounce the process group for the changes to […]

The post Updating parameter files from REST appeared first on DBASolved.

Categories: DBA Blogs

How to optimize a campaign to get the most out of mobile advertising

VitalSoftTech - Tue, 2019-12-10 09:54

When marketing for a campaign, we must optimize it in the best way possible to get the most out of it. Otherwise, it is just advertising revenue going to waste. The same goes for mobile advertising. We are here to discuss the best mobile ad strategies. However, before we start, here is a question for […]

The post How to optimize a campaign to get the most out of mobile advertising appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Oracle Press Releases - Tue, 2019-12-10 09:00
Press Release
Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Indian Wells, Calif.—Dec 10, 2019

The Oracle Challenger Series today announced its return to Southern California for two events in early 2020. The third stop of the 2019-2020 series takes place at the Newport Beach Tennis Club on January 27 – February 2. The Indian Wells Tennis Garden hosts the final tournament on March 2-8.

Now in its third year, the Oracle Challenger Series helps up-and-coming American players secure both ranking points and prize money in the United States. The two American men and two American women who accumulate the most points over the course of the Challenger Series receive wild cards into the singles main draws at the BNP Paribas Open in Indian Wells. As part of the Oracle Challenger Series’ mission to grow the sport and make professional tennis events more accessible, each tournament is free and open to the public.

The Newport Beach and Indian Wells events will conclude the 2019-2020 Road to Indian Wells and are instrumental in determining which American players receive wild card berths at the 2020 BNP Paribas Open. At the halfway point of the Challenger Series, Houston champion Marcos Giron holds the top spot for the men. Usue Arconada is in first place for the women following an impressive showing in New Haven with finals appearances in both singles and doubles. Trailing just behind them are Tommy Paul, the men’s champion in New Haven, and CoCo Vandeweghe, the women’s runner-up in Houston.

The Newport Beach event has propelled its champions to career-defining seasons over the previous two years. Americans Taylor Fritz and Danielle Collins began their steady climb up the world rankings by capturing the titles at the 2018 inaugural event. Bianca Andreescu’s 2019 title marked the beginning of her meteoric rise to WTA stardom. Likewise, the Indian Wells event has featured some of the Challenger Series’ strongest player fields and produced champions Martin Klizan, Sara Errani, Kyle Edmund and Viktorija Golubic.

The Newport Beach tournament will also feature the Oracle Champions Cup which takes place on Saturday, February 1. Former World No. 1 and 2003 US Open Champion Andy Roddick; 10-time ATP Tour titlist and former World No. 4 James Blake; 2004 Olympic silver medalist and 6-time ATP Tour singles winner Mardy Fish; and 2005 US Open semifinalist Robby Ginepri headline the one-night tournament. The event consists of two one-set semifinals with the winners meeting in a one-set championship match.

Tickets to the Oracle Champions Cup go on-sale to the general public on Tuesday, December 17. Special VIP packages including play with the pros, special back-stage access and an exclusive player party are also available.

For more information about the Oracle Challenger Series visit oraclechallengerseries.com, and be sure to follow @OracleChallngrs on Twitter and @OracleChallengers on Instagram. To inquire about volunteer opportunities, including becoming a ball kid, please email oraclechallengerseries@desertchampions.com.

Contact Info
Mindi Bach
Oracle
mindi.bach@oracle.com
About the Oracle Challenger Series

The Oracle Challenger Series was established to help up-and-coming American tennis players secure both ranking points and prize money. The Oracle Challenger Series is the next chapter in Oracle’s ongoing commitment to support U.S. tennis for men and women at both the collegiate and professional level. The Challenger Series features equal prize money in a groundbreaking tournament format that combines the ATP Challenger Tour and WTA 125K Series.

The Oracle Challenger Series offers an unmatched potential prize of wild cards into the main draw of the BNP Paribas Open, widely considered the top combined ATP Tour and WTA professional tennis tournament in the world, for the top two American male and female finishers.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The Global Oracle APEX Community Delivers. Again.

Joel Kallman - Mon, 2019-12-09 16:59

Oracle was recently recognized as a November 2019 Gartner Peer Insights Customers’ Choice for Enterprise Low-Code Application Platform Market for Oracle APEX.  You can read more about that here.

I personally regard this a distinction for the global Oracle APEX community.  We asked for your assistance by participating in these reviews, and you delivered.  Any time we've asked for help or feedback, the Oracle APEX community has selflessly and promptly responded.  You have always been very gracious with your time and energy.

I was telling someone recently how I feel the Oracle APEX community is unique within all of Oracle, but I also find it to be unique within the industry.  It is the proverbial two-way partnership that many talk about but rarely live through their actions.  We remain deeply committed to our customers' personal and professional success - it is a mindset which permeates our team.  We are successful only when our customers and partners are successful.

Thank you to all who participated in the Gartner Peer Insights reviews - customers, partners who nudged their customers, and enthusiasts.  You, as a community, stand out amongst all others.  We are grateful for you.

Oracle Names Vishal Sikka to the Board of Directors

Oracle Press Releases - Mon, 2019-12-09 15:15
Press Release
Oracle Names Vishal Sikka to the Board of Directors

Redwood Shores, Calif.—Dec 9, 2019

Oracle (NYSE: ORCL) today announced that Dr. Vishal Sikka, founder and CEO of the AI company Vianai Systems, has been named to Oracle’s Board of Directors.  Before starting Vianai, Vishal was a top executive at SAP and the CEO of Infosys.

“The digital transformation of an enterprise is enabled by the rapid adoption of modern cloud applications and technologies,” said Oracle CEO Safra Catz. “Vishal clearly understands how Oracle’s Gen2 Cloud Infrastructure, Autonomous Database and Applications come together in the Oracle Cloud to help our customers drive business value and adapt to change. I am very happy that he will be joining the Oracle Board.”

“For years, the Oracle Database has been the heartbeat and life-blood of every large and significant organization in the world,” said Dr. Vishal Sikka. “Today, Oracle is the only one of the big four cloud companies that offers both Enterprise Application Suites and Secure Infrastructure technologies in a single unified cloud. Oracle’s unique position in both applications and infrastructure paves the way for enormous innovation and growth in the times ahead. I am excited to have the opportunity to join the Oracle Board, and be part of this journey.”

“Vishal is one of the world’s leading experts in Artificial Intelligence and Machine Learning,” said Oracle Chairman and CTO Larry Ellison. “These AI technologies are key foundational elements of the Oracle Cloud’s Autonomous Infrastructure and Intelligent Applications. Vishal’s expertise and experience make him ideally suited to provide strategic vision and expert advice to our company and to our customers. He is a most welcome addition to the Oracle Board.”

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Statement

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 9, 2019. Oracle undertakes no duty to update any statement in light of new information or future events.

Upcoming Webinar: Is Your Sensitive Data Playing Hide and Seek with You?

Is Your Sensitive Data Playing Hide and Seek with You?

Thursday, December 12, 2019 - 2:00 pm EST

Your Oracle databases and ERP applications may contain sensitive personal data like Social Security numbers, credit card numbers, addresses, dates of birth, and salary information. Understanding in what tables and columns sensitive data resides is critical to protecting the data and ensuring compliance with regulations like GDPR, PCI, and the new California Consumer Privacy Act (CCPA). However, sensitive data is like a weed and can spread quickly if not properly managed. The challenge is how to effectively and continuously find sensitive data, especially in extremely large databases and data warehouses. This educational webinar will discuss methodologies and tools to find sensitive data, such as searching column names, crawling the database table by table, and performing data qualification to eliminate false positives. Other locations where sensitive data might reside, such as trace files, dynamic views (e.g., V$SQL_BIND_DATA), and materialized views, will also be reviewed.
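
As a taste of the column-name search approach on an Oracle database, a first pass can be as simple as the following query; the name patterns are only examples, and the hits will still need data qualification to eliminate false positives:

select owner, table_name, column_name
  from dba_tab_columns
 where upper(column_name) like '%SSN%'
    or upper(column_name) like '%SALARY%'
    or upper(column_name) like '%CARD%'
 order by owner, table_name, column_name;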

>>> Register for this webinar <<<

Oracle Database, Webinar
Categories: APPS Blogs, Security Blogs
