
Custom Message Data Encryption of Payload in SOA 11g


Introduction

This article explains how to encrypt sensitive data (such as an SSN or credit card number) in the incoming payload and decrypt it back to clear text (its original form) in the outgoing message. The purpose is to hide the sensitive data in the payload as it appears in the audit trail, console, and logs.

Main Article

Oracle provides Oracle Web Services Manager (OWSM) message protection, but it encrypts the entire payload. However, OWSM also gives us the capability to create our own custom policies and custom assertions. The framework is implemented in Java and allows us to write custom assertions that can be attached to a policy to encrypt and decrypt message data. These policies must then be attached to the SOA composites so that the policy assertions are executed.

Step by step guide:

1. Create a custom Java encryptor class

This is the Java implementation class for encrypting the data in incoming messages. It must extend oracle.wsm.policyengine.impl.AssertionExecutor and must implement the method

 public IResult execute(IContext iContext)

This method is invoked by the policy framework. It retrieves the XML nodes that require encryption from the SOAP message, encrypts their values, and sets each node value to the encrypted result.

2. Create a custom Java decryptor class

This is the Java implementation class for decrypting the data in outgoing messages. It must extend oracle.wsm.policyengine.impl.AssertionExecutor and must implement the method

 public IResult execute(IContext iContext)

This method is invoked by the policy framework. It retrieves the XML nodes that require decryption from the SOAP message, decrypts their values, and sets each node value back to the clear-text result.
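To make the executor pattern more concrete, the sketch below shows the kind of node-level encryption logic an execute method could delegate to. It is only an illustration: the class, method, and element names are hypothetical, the OWSM plumbing (obtaining the SOAP body DOM from the IContext and returning an IResult) is omitted because it depends on the OWSM SDK, and the key handling is deliberately simplified.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Hypothetical helper that a custom AssertionExecutor's execute() method could call
// after extracting the SOAP body DOM from the IContext. Names are illustrative only.
public class PayloadFieldCrypter {

    private final SecretKeySpec key;

    public PayloadFieldCrypter(String keyMaterial) {
        // Derive a 128-bit AES key from the configured key string (simplified for this sketch;
        // a real implementation would use a proper key-derivation function or a keystore)
        byte[] raw = new byte[16];
        byte[] src = keyMaterial.getBytes(StandardCharsets.UTF_8);
        System.arraycopy(src, 0, raw, 0, Math.min(src.length, raw.length));
        this.key = new SecretKeySpec(raw, "AES");
    }

    // Encrypts the text content of every element with the given local name (e.g. "ssn")
    public void encryptElements(Document soapBody, String elementLocalName) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        NodeList nodes = soapBody.getElementsByTagNameNS("*", elementLocalName);
        for (int i = 0; i < nodes.getLength(); i++) {
            Node node = nodes.item(i);
            byte[] encrypted = cipher.doFinal(node.getTextContent().getBytes(StandardCharsets.UTF_8));
            node.setTextContent(Base64.getEncoder().encodeToString(encrypted));
        }
    }

    // The decryptor mirrors the above, using Cipher.DECRYPT_MODE and Base64 decoding
    public void decryptElements(Document soapBody, String elementLocalName) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        NodeList nodes = soapBody.getElementsByTagNameNS("*", elementLocalName);
        for (int i = 0; i < nodes.getLength(); i++) {
            Node node = nodes.item(i);
            byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(node.getTextContent()));
            node.setTextContent(new String(decrypted, StandardCharsets.UTF_8));
        }
    }
}

With a helper along these lines, the execute method only needs to locate the SOAP body, call encryptElements (or decryptElements) for each sensitive element, and return a success result to the policy framework; the key material would come from the encryption_key property defined in the assertion template shown later.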

3. Compile the Java encryptor and decryptor classes and package them in a JAR file

Required libraries are:

$ORACLE_COMMON_HOME\modules\oracle.wsm.common_11.1.1\wsm-policy-core.jar

$ORACLE_COMMON_HOME\modules\oracle.wsm.agent.common_11.1.1\wsm-agent-core.jar

$ORACLE_COMMON_HOME\modules\oracle.osdt_11.1.1\osdt_wss.jar

$ORACLE_COMMON_HOME\modules\oracle.osdt_11.1.1\osdt_core.jar

4. Copy the jar file to $SOA_HOME\soa\modules\oracle.soa.ext_11.1.1

5. Run ant in $SOA_HOME\soa\modules\oracle.soa.ext_11.1.1

6. Restart SOA server

7. Create a custom encryption assertion template

This custom assertion template calls the custom Java encryptor class which encrypts the message data.

When this assertion is attached to a policy, and that policy is attached to the SOA composite web service, then whenever a request is made to that service, OWSM applies the policy enforcement and the execute method of the custom encryptor Java class is invoked.

<orawsp:AssertionTemplate xmlns:orawsp="http://schemas.oracle.com/ws/2006/01/policy"
                          orawsp:Id="soa_encryption_template"
                          orawsp:attachTo="generic" orawsp:category="security"
                          orawsp:description="Custom Encryption of payload"
                          orawsp:displayName="Custom Encryption"
                          orawsp:name="custom/soa_encryption"
                          xmlns:custom="http://schemas.oracle.com/ws/soa/custom">
  <custom:custom-executor orawsp:Enforced="true" orawsp:Silent="false"
                   orawsp:category="security/custom"
                   orawsp:name="WSSecurity_Custom_Assertion">
    <orawsp:bindings>
      <orawsp:Implementation>fully qualified Java class name that will be called by this assertion </orawsp:Implementation>
      <orawsp:Config orawsp:configType="declarative" orawsp:name="encrypt_soa">
        <orawsp:PropertySet orawsp:name="encrypt">
          <orawsp:Property orawsp:contentType="constant"
                           orawsp:name="encryption_key" orawsp:type="string">
            <orawsp:Value>MySecretKey</orawsp:Value>
          </orawsp:Property>
        </orawsp:PropertySet>
      </orawsp:Config>
    </orawsp:bindings>
  </custom:custom-executor>
</orawsp:AssertionTemplate>

8. Use Enterprise Manager (EM) to import the custom encryption assertion template into the Weblogic domain Web Services Policies

9. Create an assertion using the encryption assertion template that was imported

10. Create custom decryption assertion template

This custom assertion template calls the custom Java decryptor class which decrypts the message data.

When this assertion is attached to a policy, and that policy is attached to the SOA composite web service, then whenever a request is made to that SOA composite web service, OWSM applies the policy enforcement and the execute method of the custom outbound decryptor is invoked.

<orawsp:AssertionTemplate xmlns:orawsp="http://schemas.oracle.com/ws/2006/01/policy"
                          orawsp:Id="soa_decryption_template"
                          orawsp:attachTo="binding.client" orawsp:category="security"
                          orawsp:description="Custom Decryption of payload"
                          orawsp:displayName="Custom Decryption"
                          orawsp:name="custom/soa_decryption"
                          xmlns:custom="http://schemas.oracle.com/ws/soa/custom">
  <custom:custom-executor orawsp:Enforced="true" orawsp:Silent="false"
                   orawsp:category="security/custom"
                   orawsp:name="WSSecurity Custom Assertion">
    <orawsp:bindings>
      <orawsp:Implementation>fully qualified Java class name that will be called by this assertion</orawsp:Implementation>
      <orawsp:Config orawsp:configType="declarative" orawsp:name="encrypt_soa">
        <orawsp:PropertySet orawsp:name="decrypt">
          <orawsp:Property orawsp:contentType="constant"
                           orawsp:name="decryption_key" orawsp:type="string">
            <orawsp:Value>MySecretKey</orawsp:Value>
          </orawsp:Property>
        </orawsp:PropertySet>
      </orawsp:Config>
    </orawsp:bindings>
  </custom:custom-executor>
</orawsp:AssertionTemplate>

11. Create an assertion using the decryption assertion template that was imported

12. In Enterprise Manager (EM), export custom encryption policy to a file and save it to $JDEV_USER_DIR/system11.1.1.x.x.x.x/DefaultDomain/oracle/store/gmds/owsm/policies/oracle

13. In Enterprise Manager (EM), export custom decryption policy to a file and save it to $JDEV_USER_DIR/system11.1.1.x.x.x.x/DefaultDomain/oracle/store/gmds/owsm/policies/oracle

14. In JDeveloper, attach the custom encryption policy to the SOA composite inbound services that require message data encryption

15. In JDeveloper, attach the custom decryption policy to the SOA composite outbound services whose message data is in encrypted form and needs to be decrypted in the outbound message

16. Compile and deploy the SOA composite


Oracle SOA Suite for HealthCare – Using Remote JMS with Multiple Domains


As SOA Suite for HealthCare (HC) gains popularity among providers, I have seen the need to separate the SOA Suite (SOA) and SOA Suite for HealthCare into separate Weblogic domains. Generally this is done for performance reasons; more specifically it is done when the customer has a high transaction throughput rate coupled with relatively short and stringent service level agreements. In a multi-domain architecture such as this, you will need to use a method other than the default, in-memory binding to pass messages between your SOA Composite and HealthCare.

SOA Suite for HealthCare can access JMS queues from other domains in one of three ways: you can set up and use WebLogic's Store and Forward capability for JMS, you can create a foreign JMS server in the domain and use the local JNDI references when creating the queues for HC, or you can specify the details of the remote JNDI location in the Destination Provider attribute of an Internal Delivery Channel (IDC) and include the IDC in your endpoint.

For this article, we will be using the latter method, where the remote JNDI location is defined in the transport details of the Internal Delivery Channel. I think this is the simplest method to use, and since HC persists all of its messages in its own repository, you don't have to worry about losing them if the remote JMS provider is down. Consider the message flow in the following diagram:

Figure 1 – Message flow across separate HC and SOA Domains

Here messages flow from a single endpoint via MLLP to the HealthCare adapter in the HC domain. The HealthCare adapter processes the message and writes it remotely to the queue RMT_ADM_GENERIC_ADT in the SOA domain. The SOA composite reads the message locally from RMT_ADM_GENERIC_ADT, processes the message, and then writes it to another local queue RMT_LAB_GENERIC_ADT. SOA Suite for HealthCare then reads the message remotely from RMT_LAB_GENERIC_ADT, processes it, and sends it to the target endpoint via MLLP. Here is how to configure a remote JMS queue in SOA Suite for HealthCare.

Transport Details

Either create or open an Internal Delivery Channel (IDC) to be used to access the remote JMS queue. Add the remote JNDI details to the transport protocol parameters of the IDC. Both sending and receiving IDCs can access remote JMS queues. In the IDC, click the Transport Details button to open a pop-up window where the details are entered.


Figure 2 – Adding the Destination Name and Connection Factory to the Internal Delivery Channel

In the Basic tab, add the destination name for the remote JMS queue in the Destination Name field. Next enter the name of the remote Connection Factory used to connect to the queue. Now switch to the Advanced tab.


Figure 3 – Adding the Destination Provider Location, Username, and Password to the Internal Delivery Channel

In the Destination Provider field under the Advanced tab, add the following JNDI destination location details for the remote JMS provider. Here is a sample of what the location information will look like:

java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://soahost:8001

Provide the hostname and port number for the remote JMS provider. Add a valid username for the remote location along with the password, then save and enable the channel. After enabling the channel, you should see an additional listener on the remote queue when you look at it in the monitoring tab of the WLS console.

OSB Http Transport Client Certificate Authentication Common Pitfall


I recently worked with a customer to help them resolve some issues they were having with configuring client certificate authentication (2-way SSL) for an HTTP business service in Oracle Service Bus (OSB). This blog discusses a common issue encountered and how to fix it.

The customer's use case was to invoke a service provided by an external provider that required 2-way SSL. The provider issued the customer a client certificate, signed by the provider's own certificate authority (CA), to be used as the client credentials. The certificate was delivered in a password-protected PKCS#12 file containing both the certificate and the private key.

The customer completed the steps required for configuring the use of client certificate authentication with HTTP:

  1. Create a keystore containing the client certificates
  2. Configure a PKI Credential Mapping Provider, referencing the keystore
  3. Create a Service Key Provider referencing the correct certificate from the keystore.
  4. Configure an HTTP Transport based business service, indicating client certificate authentication

Everything seemed to be correct, but the request to the external service was denied with a 403 (Forbidden) error.  After several iterations of debugging and contacting the external service provider, the root cause was determined.  When establishing a 2-way SSL connection, as part of the SSL handshake, the server will request the client's certificate.  Along with this request message, the server sends a list of certificate authorities (CAs) from which it will accept a certificate.  The SSL library will then scan the certificates contained in its designated keystore for a certificate originating from one of the acceptable CAs.

The problem in this case was that the certificate entry in the keystore did not contain a chain of certificates back to an accepted CA, so the client never sent the certificate during the handshake.  To resolve this, the client certificate and its chain of certificates back to a CA accepted by the provider’s server had to be imported into the keystore.
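If you suspect the same problem, one quick way to verify what a keystore entry actually contains is to dump its certificate chain programmatically. The snippet below is a hedged sketch (the keystore type, file name, password, and alias are placeholders); a chain of length 1 whose issuer is not one of the provider's accepted CAs indicates the intermediate and root certificates still need to be imported.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

// Prints the certificate chain stored under a key entry so you can confirm it
// reaches back to a CA the service provider accepts. All literals are placeholders.
public class KeystoreChainChecker {

    public static void main(String[] args) throws Exception {
        KeyStore keystore = KeyStore.getInstance("JKS");
        FileInputStream in = new FileInputStream("client-identity.jks");
        try {
            keystore.load(in, "changeit".toCharArray());
        } finally {
            in.close();
        }
        Certificate[] chain = keystore.getCertificateChain("client-cert");
        if (chain == null) {
            System.out.println("Alias not found, or it is not a private key entry");
            return;
        }
        System.out.println("Chain length: " + chain.length);
        for (Certificate cert : chain) {
            X509Certificate x509 = (X509Certificate) cert;
            System.out.println("  Subject: " + x509.getSubjectDN());
            System.out.println("  Issuer : " + x509.getIssuerDN());
        }
    }
}

The same check can be run against the original PKCS#12 file by passing "PKCS12" to KeyStore.getInstance.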

OSB MQ Transport Tuning – Proxy Service threads


The MQ Transport is a polling transport.  At the user defined interval, a polling thread, fired by a timer, checks for new messages to process on a designated queue.  If messages are found, a number of worker thread requests are scheduled to execute via the WebLogic work scheduler.  Each of these worker threads will get messages from the queue and initiate the processing of a proxy service.

The number of worker threads to schedule is based on the following factors:

  • The queue depth
  • The number of managed servers
  • The number of worker threads currently executing for this proxy service
  • The max number of threads defined for the work manager associated with the proxy service

If a work manager is not assigned to the proxy service, it will use the WLS Default work manager and if a Max Threads Constraint has not been defined for the Default work manager it will use 16 as a default value.  Therefore, without any tuning, the default will result in a maximum of 16 threads concurrently processing for a given MQ Proxy service.  In order to change this, define a new work manager with the desired max thread constraint and assign it to the proxy service via its dispatch policy setting.

IDM FA Integration flows


Introduction

One of the key aspects of Fusion Applications operations is user and role management. By default, Fusion Applications uses Oracle Identity Management for its identity store and policy store. This article explains how the user and role flows work from different points of view, covering the key IDM products involved in each flow. With a clear understanding of how Fusion Applications works with Identity Management for user provisioning and role management, you can improve your FA IDM environments by integrating them with the rest of your enterprise assets and processes. For example, if you need to integrate your existing enterprise IDM with this solution, these are the flows you need to be aware of.

Main Article

FA relies on roles and privileges implemented in IDM to authenticate and authorize users and operations. FA uses jobs in the ESS system to reconcile users and roles with OIM. OIM, in turn, gets the corresponding data from the user and policy stores using LdapSynch (the provisioning and reconciliation process). This flow is described below.

Fig1: FA IDM integration flow.

Brief explanation of each topic on this main flow above:

FA OID flow: OID holds policy information from FA. Basically, duty roles and privileges are created from FA into OID (the policy, or security, store).

Fig2: FusionApps and OID.

FA OIM flow: FA/OIM provision users or roles to OIM/FA through SPML.

For example, enterprise business logic may qualify the requester and initiate a role provisioning request by invoking the Service Provisioning Markup Language (SPML) client module, as may occur during onboarding of internal users with Human Capital Management (HCM), in which case the SPML client submits an asynchronous SPML call to OIM.

Or OIM handles the role request by presenting roles for selection based on associated policies.

Or the products communicate with each other for challenge question responses, password reset procedures, and more.

Fig3: The picture above illustrates the flow described above.

OID OIM flow: OIM connects to OVD through the LDAP ITResource feature, which allows the connection and is also responsible for LDAP synch reconciliations from OID to OIM, as well as for the event handlers that OIM triggers if there is any update from there.

Fig4: Provides a visual explanation of the OID OIM flow.

FA OIM flow: Here it is the ESS jobs in FA that create users in OID or update them from OID. 4.1) “Retrieve Latest LDAP Changes” reads from OID and updates FA if anything is missing (users, role assignments, etc.); 4.2) “Send Pending LDAP Changes” sends over to OIM any requests that have not yet been processed. (If you are using FA UIs like Manage Users to create a user, it should happen almost immediately, but if you have bulk-loaded employees and assignments, you need to run Send Pending LDAP Requests to get the requests processed.)

Fig5: OAM - FA integrated.

Conclusion

Implementing an FA+IDM solution for an organization is a proposition that should be considered alongside all the other flows, such as the ‘New Hire’ and ‘Authentication and Authorization’ flows. Proper planning and an understanding of the various dimensions of this solution and its concepts allow an organization to discern why, or even whether, they need Oracle IDM and FA wired into their enterprise IDM solution. It also highlights what user details the enterprise is willing to protect, and how best to offer that protection in an integrated and effective manner.

Other useful links:

Oracle® Fusion Applications Security Guide ,11g Release 1 (11.1.1.5.0) : http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e16689/F323392AN1A795.htm

BPM 11g: XML_DOCUMENT Table Growth


Introduction

I've heard from several customers lately who have asked about unexpected growth in the XML_DOCUMENT table compared to other BPM tables. This blog looks into the reasons for this growth and some suggestions on how to mitigate it.

Test Project

In order to demonstrate XML_DOCUMENT table growth we’ll use the following process….

XD_01

… this has a simple embedded sub-process and a multi-instance embedded sub-process so we can monitor whether these have any impact. Both have timer activities inside to give us a window to run some queries against the SOAINFRA tables.

Audit Settings

For our first tests the audit level will be set to “Development”…

XD_02

Queries

We will be running the following queries against the SOAINFRA tables….

CUBE_INSTANCE

Will allow us to query the compressed and uncompressed payload size….

select * from cube_instance
order by creation_date desc;

CUBE_SCOPE

Will allow us to view whether the payload of the running instance is stored in CUBE_SCOPE….

select * from cube_scope
order by modify_date desc;

XML_DOCUMENT

Will allow us to see whether the payload of the running instance is stored in XML_DOCUMENT….

select * from xml_document
order by doc_partition_date desc;

Test 1: Audit Level Development

The process takes a payload such as the following….

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
    	<ns1:start xmlns:ns1="http://xmlns.oracle.com/bpmn/bpmnProcess/BpmTestXMLDoc" xmlns:ns2="http://www.oracle.com/repeat">
    	<ns2:list>
    		<ns2:ListRow>
    			<ns2:Field1>R1F1</ns2:Field1>
    			<ns2:Field2>R1F2</ns2:Field2>
    			<ns2:Field3>R1F3</ns2:Field3>
                </ns2:ListRow>
    		<ns2:ListRow>
    			<ns2:Field1>R2F1</ns2:Field1>
    			<ns2:Field2>R2F2</ns2:Field2>
    			<ns2:Field3>R2F3</ns2:Field3>
                </ns2:ListRow>
    		<ns2:ListRow>
    			<ns2:Field1>R3F1</ns2:Field1>
    			<ns2:Field2>R3F2</ns2:Field2>
    			<ns2:Field3>R3F3</ns2:Field3>
                </ns2:ListRow>
        </ns2:list>
        </ns1:start>
    </soap:Body>
</soap:Envelope>

…i.e. three repeating rows with three fields in each, populated with simple 4-byte strings, not a very large message at all.

Contents of Tables at First Wait

XD_03

CUBE_INSTANCE

XD_04

…even with such a small message we have an uncompressed scope size of almost 15k.

CUBE_SCOPE

XD_05

…the payload itself is stored in the “SCOPE_BIN” field as a BLOB.

XML_DOCUMENT

XD_06

…there are two rows in XML_DOCUMENT, not really significant for our purposes.

Contents of Tables at Second Wait

XD_19

CUBE_INSTANCE (iteration 1)

XD_07

…the uncompressed payload size has increased to 16.5k

CUBE_SCOPE (iteration 1)

XD_08

…the payload has been updated (modify_date changed).

XML_DOCUMENT (iteration 1)

XD_09

…we now have another two rows in XML_DOCUMENT.

CUBE_INSTANCE (iteration 2)

XD_10

…uncompressed payload size remains the same, previous iteration must have been overwritten.

CUBE_SCOPE (iteration 2)

XD_11

…the payload has been updated (modify_date changed).

 

XML_DOCUMENT (iteration 2)

XD_12

…unchanged.

CUBE_INSTANCE (iteration 3)

XD_13

…uncompressed payload size remains the same, previous iteration must have been overwritten.

CUBE_SCOPE (iteration 3)

XD_14

…the payload has been updated (modify_date changed).

XML_DOCUMENT (iteration 3)

XD_15

…unchanged.

Contents of Tables at Process End

XD_20

CUBE_INSTANCE

XD_16

…uncompressed payload size has dropped considerably, previous scopes removed.

CUBE_SCOPE

XD_17

…the payload has been updated (modify_date changed).

 

XML_DOCUMENT

XD_18

…unchanged.

Conclusion

Although the payload size (compressed and uncompressed) in CUBE_INSTANCE seems large compared to the message we passed in to the process, XML_DOCUMENT itself seems little used. It is evident that the payload itself is stored in CUBE_SCOPE and is overwritten as the instance progresses through activities to completion.

Test 2: Audit Level Production

This should be a more pertinent test, given that this is the recommended setting for production environments.

XD_21

Contents of Tables at First Wait

CUBE_INSTANCE

XD_22

…more or less the same as before.

CUBE_SCOPE

XD_23

…as before.

XML_DOCUMENT

XD_24

…only one row this time, clearly audit level of “production” reduces the number of rows in XML_DOCUMENT.

Contents of Tables at Second Wait

CUBE_INSTANCE (iteration 1)

XD_25

…as test 1.

CUBE_SCOPE (iteration 1)

XD_26

…as before.

XML_DOCUMENT (iteration 1)

XD_27

…still only the one row.

The progression continues as in Test 1, and at the process end we get the following….

XML_DOCUMENT

XD_28

…still only the one row !

Conclusion

So with standard settings we have been unable to reproduce any noticeable growth in the XML_DOCUMENT table; in fact, we reduced it by setting the audit level to “Production”… so what is causing this growth?

Large Document Threshold

This is the key to XML_DOCUMENT table growth, the parameter can be found in the BPMN properties in Enterprise Manager….

XD_29

…what exactly is this telling us ? If the payload size is larger than 100k it is stored in XML_DOCUMENT instead of CUBE_SCOPE. There is a similar property for BPEL also.

What difference does this make ? Let’s do some more tests.

Test 3: Large Document Threshold Lowered.

It will be simpler for us to reduce the threshold rather than create a larger message (and therefore payload)…. let’s go to the extreme and reduce this to 100 (from 100k)….

XD_30

Contents of Tables at First Wait

 CUBE_INSTANCE

XD_31

…more or less unchanged.

CUBE_SCOPE

XD_32

…again not much difference (although the contents of the BLOB may be).

XML_DOCUMENT

XD_33

…already 6 rows in this table. Unlike the CUBE_SCOPE table, which simply overwrote the payload BLOB as the instance progressed, we now seem to be adding a new row to XML_DOCUMENT for, at the very least, the entry and exit of each activity…. i.e. we have history in this table.

Contents of Tables at Second Wait

We’ll just concentrate on XML_DOCUMENT from now on….

XML_DOCUMENT (iteration 1)

XD_34

…another 5 rows.

XML_DOCUMENT (iteration 2)

XD_35

…another 6 rows.

XML_DOCUMENT (iteration 3)

XD_36

…another 4 rows.

Contents of Tables at Process End

XML_DOCUMENT

XD_37

…unchanged, no activities after the end of the second scope.

Conclusion

So, we have found the culprit, payloads greater than the large document threshold are stored in XML_DOCUMENT, and unlike CUBE_SCOPE the payloads are not overwritten for each activity, a new row is created.

Overall Conclusion

It seems simple then: just increase the large document threshold to a huge value and XML_DOCUMENT will not be used. Instead, the payload will always be stored in CUBE_SCOPE.

Easy !

Or not !

This highlights several things….

  • follow the BPM best practice to keep messages and payloads small
  • try to avoid iterating over large resultsets with large payloads in multi-instance sub-processes
  • have a fully tested purge strategy

But if we find ourselves in this situation, what can we do ?

As a first step I would advise doing some investigation into average compressed and uncompressed payload sizes in CUBE_INSTANCE for running instances, to give an idea of how large the payloads are.

As a second step I would use a production-like pre-production environment to load test while making incremental increases to the “large document threshold”, paying close attention to possible increased memory usage as the number of rows written to XML_DOCUMENT decreases.

There is no one-size-fits-all magic number here for “large document threshold”…. only thorough load testing can give you an appropriate value for your environment.

Summary

In this blog entry we have seen how the “large document threshold” has a critical impact on the growth of the XML_DOCUMENT table and briefly looked at ways of mitigating this.

Interoperability between Microsoft and SOA Suite 12c


Introduction

During the design of SOA applications it is inevitable that from time to time you will need to interface with Microsoft-based applications. While technologies like SOAP and REST do a great job when request-reply communication is needed, most people struggle when a messaging-based communication is required. This blog will present two approaches to get messaging working between Microsoft and SOA Suite 12c.

Which Choices Do I have?

SOA Suite 12c offers a complete set of tools to integrate with Microsoft applications using messaging. Which one to use is a simple question of asking where the messaging system resides. If the messaging system to be accessed sits on SOA Suite side (WebLogic JMS) then you should use the WebLogic JMS .NET Client. If the messaging system to be accessed sits on Microsoft side (Microsoft Message Queuing) then you should use the JCA adapter for MSMQ. Using the WebLogic JMS .NET Client allows code written in .NET to access the WebLogic JMS server using the T3 protocol, just like any other Java application. Using the JCA adapter for MSMQ allows SOA composites and OSB applications to send/receive messages to/from MSMQ queues.

Using the WebLogic JMS .NET Client

The implementation of the WebLogic JMS .NET Client is very straightforward. All you have to do is deploy your .NET application with the WebLogic.Messaging.dll assembly file. You still need to code how your application will send/receive messages to/from the WebLogic JMS destinations. You can easily find the WebLogic.Messaging.dll assembly file in the following location: $FMW_HOME/wlserver/modules/com.bea.weblogic.jms.dotnetclient_x.x.x.x.

In the same location you can find the WebLogic JMS .NET Client API documentation. For those of you who are familiar with the JMS API, it will be easy to understand since the API design is almost the same. For beginners, I have provided the following C# sample code that shows how to publish messages to a WebLogic JMS queue.

using System;
using System.Collections.Generic;
using System.Text;
using WebLogic.Messaging;

namespace com.oracle.fmw.ateam.soa
{

    public partial class SampleDotNetApplication
    {

        private void sendMessage()
        {

            IConnectionFactory wlsConnectionFactory;
            IQueue ordersQueue;

            IDictionary<string, Object> environment;
            IContext jndiContext;

            IConnection connection = null;
            ISession session = null;
            IMessageProducer messageProducer = null;
            ITextMessage message = null;

            try
            {

                environment = new Dictionary<string, Object>();
                environment[Constants.Context.PROVIDER_URL] = "t3://soa.suite.machine:8001";
                environment[Constants.Context.SECURITY_PRINCIPAL] = "weblogic";
                environment[Constants.Context.SECURITY_CREDENTIALS] = "welcome1";
                jndiContext = ContextFactory.CreateContext(environment);

                wlsConnectionFactory = jndiContext.LookupConnectionFactory("jms/wlsConnectionFactory");
                ordersQueue = (IQueue) jndiContext.LookupDestination("jms/ordersQueue");

                connection = wlsConnectionFactory.CreateConnection();
                connection.Start();

                session = connection.CreateSession(Constants.SessionMode.AUTO_ACKNOWLEDGE);
                messageProducer = session.CreateProducer(ordersQueue);
                message = session.CreateTextMessage();
                message.SetStringProperty("customProperty", "123456789");
                message.Text = "<message>Oracle SOA Suite 12c Rocks</message>";
                messageProducer.Send(message);

            }
            finally
            {

                messageProducer.Close();
                session.Close();
                connection.Stop();
                connection.Close();

            }

        }

    }

}

Note that in the sample code above, an initial context object is instantiated during every method call. The code was written this way for clarity purposes but in the real world you should avoid this practice. That could lead to potential performance issues as explained in the next section. As a best practice instantiate only one initial context object per CLR process. An elegant way of doing this is applying the Singleton pattern around the initial context object.
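For completeness, it can be handy to verify from the Java side that the messages published by the .NET client actually arrived on the queue. The standalone consumer below is a sketch and is not part of the original sample: it assumes the same JNDI names and credentials as the C# code above, and it needs a WebLogic client JAR (for example wlthint3client.jar) on the classpath.

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

// Minimal standalone consumer used only to confirm that the .NET producer's
// messages reached jms/ordersQueue. JNDI names and credentials mirror the C# sample.
public class OrdersQueueConsumer {

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://soa.suite.machine:8001");
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        InitialContext jndiContext = new InitialContext(env);

        ConnectionFactory connectionFactory =
                (ConnectionFactory) jndiContext.lookup("jms/wlsConnectionFactory");
        Queue ordersQueue = (Queue) jndiContext.lookup("jms/ordersQueue");

        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(ordersQueue);
            connection.start();

            // Wait up to ten seconds for a message sent by the .NET client
            TextMessage message = (TextMessage) consumer.receive(10000);
            if (message != null) {
                System.out.println("customProperty = " + message.getStringProperty("customProperty"));
                System.out.println("body = " + message.getText());
            } else {
                System.out.println("No message received within the timeout");
            }
        } finally {
            connection.close();
        }
    }
}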

Special considerations when using the WebLogic JMS .NET Client

Before moving applications from development/staging to production, maybe you should step back and take a look at some of the following guidelines:

* Make sure that the T3 protocol is available in the managed server accessed by the .NET application. When pointing to the admin server you don't have to worry about this, but other managed servers may need network channels configured to get this protocol enabled.

* Use the -Dweblogic.protocol.t3.login.replyWithRel10Content=true JVM property to allow WebLogic JMS .NET client applications written prior to the 12.1.3 version to interoperate with the 12.1.3 version.

* Be aware that the following features are not currently supported: queue browsers, XA transactions, SSL, HTTP tunneling, SAF clients, multicast subscribers and automatic reconnect.

* How you implement your messaging code has a significant impact on the overall performance of your application and will affect the rate of message production and/or consumption. According to the Microsoft CLR specification, each process uses a fixed thread pool with 25 threads per available processor. Each time you create an initial context that uses the T3 protocol you burn a thread from that pool and also create a socket connection to the WebLogic server. On the WebLogic server side you also create a thread to handle requests coming from that socket, using the traditional socket-reader muxer thread pool. That means if you create a large number of concurrent initial context objects, there will be a correspondingly large number of socket connections to manage in the client thread pool. You can also run out of threads in the client application if the number of initial contexts created exceeds the thread pool size. Consider using one shared initial context per process to optimize the client thread pool and minimize the performance hit on the server incurred when there are too many searches in the JNDI tree.

Using the JCA Adapter for MSMQ

This adapter leverages the jCOM technology available in WebLogic to provide connectivity to the MSMQ server. The first thing to do is to enable jCOM in all servers where the adapter will be deployed. You can easily do this using the WebLogic administration console. In the managed server settings page, go to the “Protocols” > “jCOM” tab. Select the “Enable COM” check box as shown in the screen shot below:

Enabling jCOM in WebLogic

Due to the nature of the JCA adapter, you will also need to create an outbound connection pool. In the deployments page, search for the “MSMQAdapter” and then go to the “Configuration” > “Outbound Connection Pools” tab. Create a new javax.resource.cci.ConnectionFactory and provide it with a proper JNDI name. After that click in the newly created outbound connection pool and go to the “Properties” tab. Here is the summary of the main properties:

AccessMode: Identifies if the connection factory allows for native access or not. If Native, the Oracle WebLogic server should be installed on the same host as MSMQ; use Native for better performance or when SOA is co-located with MSMQ, and use DCOM only when accessing MSMQ remotely. Possible values: Native | DCOM
Domain: Domain of the MSMQ host. Possible values: any java.lang.String value
Host: IP address or machine name of the MSMQ host. Possible values: any java.lang.String value
Password: Password for the specified user. Possible values: any java.lang.String value
TransactionMode: Indicates if the connection participates in a transaction when sending and receiving a message. Use Single if the MSMQ queues are transactional. Possible values: Single | None
User: Identifies a user. Possible values: any java.lang.String value

After setting these properties, you can optionally go to the “Connection Pool” tab and fine tune the connection pool, specifically the “Initial Capacity”, “Max Capacity” and “Capacity Increment” parameters. That’s it, this is the minimal configuration needed to start using the JCA adapter for MSMQ. The following section discusses some special considerations for this adapter.

Special considerations when using the JCA adapter for MSMQ

Using the JCA adapter for MSMQ in JDeveloper is just like using any other SOA Suite technology adapter. All you need to do is drag it from the components palette to the composite designer and inform the wizard of the JNDI name of the adapter and the details about the queue.


Before deploying the application to the SOA server, review the following recommendations to help ensure connectivity, high availability and performance:

* If you intend to access public queues and/or distribution lists, Active Directory Domain Services (AD DS) must be configured on a Windows 2008 Server system. This requirement does not apply to private queues.

* When the SOA server is not co-located with the MSMQ server, or is installed on an operating system other than Windows (e.g., Linux, Solaris), you need to use DCOM as the access mode. In that case you need to set the value of the property "AccessMode" to "DCOM". In addition, you need to install the MSMQ DCOM Proxy on the machine where the MSMQ server is running.

* When the MSMQ Adapter needs to make an outbound connection to the MSMQ server, it must sign on with valid security credentials. In accordance with the J2CA 1.5 specification, the WebLogic server supports both container-managed and application-managed sign-on for outbound connections. The MSMQ adapter can leverage either of these methods to sign on to the EIS. The credentials must include a user that has proper permissions to interact with the MSMQ server, otherwise you will get exceptions during deployment.

* The MSMQ adapter supports high availability through the active-active topology. It has a poller thread to poll the queue for the next available message. Each poller thread uses the MSMQ receive API and only removes the message from the queue after successful read. This ensures there is no message duplication when the MSMQ Adapter is deployed in an active-active topology.

* Use the adapter.msmq.dequeue.threads binding property to increase the number of poller threads during endpoint activation. The default value of this property is "1", which is good for simple tests, but with a higher value you can achieve a better degree of parallelism. You can set this property only at runtime using the Enterprise Manager FMW Control Console.


* Enabling streaming during MSMQ message consumption can significantly reduce the SOA server memory footprint when large payloads are used, especially if there is a mediator applying content-based routing rules before delivering the message for processing.

Conclusion

Enterprise application developers and architects today rarely ask the “Why SOA?” question. They are more often asking about how to implement SOA using best practices in order to build robust and scalable applications that maximize their SOA infrastructure investment. SOA Suite 12c has been designed to be the “Industrial SOA” solution that organizations need to deliver these solutions. Hopefully this blog has provided some useful information and best practices on integrating Microsoft-based applications with SOA Suite 12c using messaging.

11g Mediator – Diagnosing Resequencer Issues


In a previous blog post, we saw a few useful tips to help us quickly monitor the health of resequencer components in a SOA system at runtime. In this blog post, let us explore some tips to diagnose Mediator resequencer issues. During the diagnosis we will also learn some key points to consider for integration systems that run Mediator resequencer composites.

Please refer to the Resequencer White paper for a review of the basic concepts of resequencing and the interplay of various subsystems involved in the execution of Resequencer Mediator composites.

Context

In this blog post we will refer to the AIA Communications O2C Pre-Built Integration pack (aka O2C PIPs) as an example for understanding some issues that can arise at runtime with resequencer systems and how we can diagnose the cause of such issues. The O2C PIP uses resequencing-enabled flows. One such is the UpdateSalesOrder flow between OSM and Siebel. It is used to process the OSM status of Sales Orders in proper time sequence within the Siebel system.

Isolate the server within the SOA cluster

Many times the resequencer health check queries point us to an issue occurring only on one server within the SOA cluster. While the database queries mentioned here give us the containerId of the specific server, they do not specify the server name. This is because the Mediator uses a GUID to track a runtime server.

Trace log messages generated by the Mediator can help us correlate this GUID to an individual server running in the cluster at runtime. The oracle.soa.mediator.dispatch runtime logger can be enabled to the TRACE:32 level from the FMW EM console. The figure below shows the screenshot.

Enabling this logger for just a few minutes will suffice, and one can see messages such as the one below in the SOA servers' diagnostic logs, once every lease refresh cycle. The default refresh cycles are 60s apart.


[APP: soa-infra] [SRC_METHOD: renewContainerIdLease] Renew container id [34DB0F60899911E39F24117FE503A156] at database time :2014-01-31 06:11:18.913


This implies that the server which logged the above message is running with a containerId of 34DB0F60899911E39F24117FE503A156!

Locker Thread Analysis

When one observes excessive messages piling up with a status of GRP_STATUS=READY and MSG_STATUS=READY, it usually indicates that the locker thread is not locking the groups fast enough to keep up with the incoming messages. This could be due to the resequencer locker thread being stuck or performing poorly. For instance, the locker thread could be stuck executing updates against the MEDIATOR_GROUP_STATUS table.

It is generally useful to isolate the server which is creating the backlog using the health check queries, and then identify the server name by using the logger trace statements as described in the previous section. A few thread dumps of this server could then throw more light on the actual issue affecting the locker thread.

Usually thread dumps show a stack such as below for a resequencer Locker thread.

"Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms
" id=330 idx=0x1b0 tid=28794 prio=10 alive, sleeping, native_waiting, daemon
    at java/lang/Thread.sleep(J)V(Native Method)
    at oracle/tip/mediator/common/listener/DBLocker.enqueueLockedMessages(DBLocker.java:213)
    at oracle/tip/mediator/common/listener/DBLocker.run(DBLocker.java:84)
    at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
    at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
    at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)

In the above thread, the locker is enqueuing messages from locked groups into the in-memory queue for processing by the worker threads.

During times of any issue, the locker thread could be seen stuck doing database updates. If this is seen across thread dumps with no progress made by the thread, then it could point to a database issue which needs to be attended to.

A poor performance of the locker query on the database side will adversely impact the Resequencer performance and hence decrease the throughput of the integration flow that uses Resequencers.

Recollect that the locker thread runs an update query continuously, attempting to lock eligible groups. Shown below is a sample FIFO resequencer locker query as seen in database AWR reports.

update mediator_group_status a set a.status=7 where id in ( select id from (select distinct b.id, b.lock_time from 
mediator_group_status b, mediator_resequencer_message c where b.id=c.owner_id and b.RESEQUENCER_TYPE='FIFO' and 
b.status=0 and b.CONTAINER_ID=:1 and c.status=0 and b.component_status!=:2 ORDER BY b.lock_time) d where rownum<=:3 )

The database AWR reports can also be very useful for checking the average elapsed time and other performance indicators for the locker query.

Huge data volumes due to the lack of a proper purging strategy for the Mediator tables are a common reason for deteriorated locker query performance. Regular data purging, partitioning, statistics gathering and creation of required indexes on MEDIATOR_GROUP_STATUS will usually ensure good performance of the locker query.

Note that there is only one Resequencer Locker thread running per server at runtime. Any database issue that impacts the locker thread will impair all the Mediator Composites that use the same resequencing strategy. The mediator resequencer uses database for storage, retrieval of messages to implement the reordering and sequencing logic. Hence, the proper and timely maintenance of SOA database goes a long way in ensuring a good performance.

Worker Thread Analysis

Recollect that Worker threads are responsible for processing messages in order. There are multiple worker threads per server to parallel-process multiple groups, while ensuring that each group is exclusively processed by only one worker thread to preserve the desired sequence. Hence, the number of worker threads configured in Mediator properties (from FMW EM console) is a key parameter for optimum performance.

The sample snippets below, taken from server thread dumps, show resequencer worker threads. The first stack shows a worker thread which is waiting for messages to arrive on the internal queue. As and when the locker thread locks new eligible groups, such available worker threads will process the messages belonging to the locked groups.

Idle Worker Thread:
"Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms
" id=208 idx=0x32c tid=26068 prio=10 alive, parked, native_blocked, daemon
    at jrockit/vm/Locks.park0(J)V(Native Method)
    at jrockit/vm/Locks.park(Locks.java:2230)
    at jrockit/proxy/sun/misc/Unsafe.park(Unsafe.java:616)[inlined]
    at java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:196)[inlined]
    at java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)[optimized]
    at java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)[optimized]
    at oracle/tip/mediator/common/listener/AbstractWorker.run(AbstractWorker.java:63)
    at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
    at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
    at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
    -- end of trace

The next partial stack shows a worker thread which is processing a message from a group that has been locked by the Locker.

Busy Worker Thread:
….
    at oracle/tip/mediator/service/BaseActionHandler.requestProcess(BaseActionHandler.java:75)[inlined]
    at oracle/tip/mediator/service/OneWayActionHandler.process(OneWayActionHandler.java:47)[optimized]
    at oracle/tip/mediator/service/ActionProcessor.onMessage(ActionProcessor.java:64)[optimized]
    at oracle/tip/mediator/dispatch/MessageDispatcher.executeCase(MessageDispatcher.java:137)[optimized]
    at oracle/tip/mediator/dispatch/InitialMessageDispatcher.processCase(InitialMessageDispatcher.java:500)[optimized]
    at oracle/tip/mediator/dispatch/InitialMessageDispatcher.processCases(InitialMessageDispatcher.java:398)[optimized]
    at oracle/tip/mediator/dispatch/InitialMessageDispatcher.processNormalCases(InitialMessageDispatcher.java:279)[inlined]
    at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageDispatcher.processCases(ResequencerMessageDispatcher.java:27)[inlined]
    at oracle/tip/mediator/dispatch/InitialMessageDispatcher.dispatch(InitialMessageDispatcher.java:151)[inlined]
    at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageHandler.handleMessage(ResequencerMessageHandler.java:22)[optimized]
    at oracle/tip/mediator/resequencer/ResequencerDBWorker.handleMessage(ResequencerDBWorker.java:178)[inlined]
    at oracle/tip/mediator/resequencer/ResequencerDBWorker.process(ResequencerDBWorker.java:343)[optimized]
    at oracle/tip/mediator/common/listener/AbstractWorker.run(AbstractWorker.java:81)
    at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
    at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
    at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
    at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
    -- end of trace

It should be noted that all further processing of the message until the next transaction boundary happens in the context of this worker thread. For example, the diagram below shows the O2C UpdateSalesOrder Integration flow, from a threads perspective. Here, the BPEL ABCS processing, the calls to AIA SessionPoolManager, as well as the Synchronous invoke to the Siebel Webservice, all happen in the resequencer worker thread.

Now consider the example thread stack shown below, seen in a server thread dump. It shows a worker thread engaged in HTTP communication with an external system.

Stuck Worker Thread:
"Workmanager: , Version: 0, Scheduled=false, Started=false, Wait time: 0 ms
 " id=299 idx=0x174 tid=72518 prio=10 alive, in native, daemon
  at jrockit/net/SocketNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BIII)I(Native Method)
  at jrockit/net/SocketNativeIO.socketRead(SocketNativeIO.java:32)[inlined]
  at java/net/SocketInputStream.socketRead0(Ljava/io/FileDescriptor;[BIII)I(SocketInputStream.java)[inlined]
  at java/net/SocketInputStream.read(SocketInputStream.java:129)[optimized]
  at HTTPClient/BufferedInputStream.fillBuff(BufferedInputStream.java:206)
  at HTTPClient/BufferedInputStream.read(BufferedInputStream.java:126)[optimized]
  at HTTPClient/StreamDemultiplexor.read(StreamDemultiplexor.java:356)[optimized]
  ^-- Holding lock: HTTPClient/StreamDemultiplexor@0x1758a7ae0[recursive]
  at HTTPClient/RespInputStream.read(RespInputStream.java:151)[optimized]
….
….
  at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageDispatcher.processCases(ResequencerMessageDispatcher.java:27)
  at oracle/tip/mediator/dispatch/InitialMessageDispatcher.dispatch(InitialMessageDispatcher.java:151)[optimized]
  at oracle/tip/mediator/dispatch/resequencer/ResequencerMessageHandler.handleMessage(ResequencerMessageHandler.java:22)
  at oracle/tip/mediator/resequencer/ResequencerDBWorker.handleMessage(ResequencerDBWorker.java:178)[inlined]
  at oracle/tip/mediator/resequencer/ResequencerDBWorker.process(ResequencerDBWorker.java:343)[optimized]
  at oracle/tip/mediator/common/listener/AbstractWorker.run(AbstractWorker.java:81)
  at oracle/integration/platform/blocks/executor/WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
  at weblogic/work/j2ee/J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
  at weblogic/work/DaemonWorkThread.run(DaemonWorkThread.java:30)
  at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
  -- end of trace

If this thread remains at the same position across thread dumps spanning a few minutes, it would indicate that the worker thread is blocked on the external web service application. If such external system issues block a significant number of worker threads from the pool of available worker threads, it will impact the overall throughput of the system. There will be fewer workers available to process all the groups that are being locked by the locker thread, across all composites that use resequencers. When the rate of incoming messages during such times is high, this issue will show up as a huge backlog of messages with status GRP_STATUS=LOCKED and MSG_STATUS=READY in the resequencer health check query.

Note that JTA timeout will not abort these ‘busy’ threads. Such threads may eventually return after the JTA transaction has rolled back, or in some cases depending on how sockets are handled by the external system, may not return at all.

For such integration flows, it is advisable to configure HTTP connect and read timeouts for web service calls in the composite's reference properties. The figure below shows a screenshot of the properties. This will ensure that worker threads are not held up due to external issues, affecting the processing of other components that rely on worker threads.

Few more Loggers

The below loggers can be enabled for trace logging to gather diagnostic information on specific parts of the Mediator/resequencer.

- Logger oracle.soa.mediator.dispatch  for Initial message storage, Group Creation, Lease Renew, Node failover

- Loggers oracle.soa.mediator.resequencer  and oracle.soa.mediator.common.listener for Resequencer Locker, Resequencer Worker, Load Balancer

Conclusion

We have explored how problems at various layers can manifest at the resequencer in an integration system, and how the cause of these issues can be diagnosed.

We have seen

- Useful pointers in diagnosing resequencer issues and where to look for relevant information

- How a good SOA database maintenance strategy is important for resequencer health

- How timeout considerations play a role in resequencer performance

 

-Shreeni


Threading Best Practices for Custom OEP Adapters


Introduction

If there is one universal truth about implementing parallelism using Java threads, it is that you will need to pay special attention when writing your code. Writing custom OEP adapters that spawn multiple threads is no different. If you have a strong requirement for handling high-throughput events, you have two choices:

1. Scaling out horizontally using multiple JVMs in cluster.

2. Scaling up a single JVM that spreads the adapter logic into multiple threads.

While scaling out is a good choice, sometimes you just can’t use this approach; or maybe you can but are constrained by lack of resources. When that happens, you need to leverage the scale up approach, especially if the machine where the OEP application will run has enough CPU cores and memory.

The idea behind scaling up using a single JVM is simple: you just need to create two or more threads and equally partition the amount of work between them, so they can process work in parallel. Let’s understand how this is possible considering the built-in JMS adapter available in OEP. The OEP JMS adapter has a property called concurrent-consumers. When you set this property to any positive integer greater than one, you are instructing the adapter to create parallel consumers, each one running on its own thread, which will increase message consumption and also application throughput.

Now consider the scenario where you need your own custom adapter, and you want to implement parallelism. Implementing multi-threaded code in Java is fairly straightforward once you understand what is happening in the JVM. However, the truth is that even the most well-written code in the world has a good chance of blowing up and doing unexpected things when executing in the OEP runtime due to poor life cycle management of threads. This article will explain some of the possible issues that can occur when you implement threads on your own and, most importantly, how you can leverage the work manager support available in OEP to help ensure that you achieve the scalability and resiliency that you require.

This article will assume that you have some basic understanding of how to create a custom adapter in OEP. If you need information about how to create custom adapters before continuing with this article, I strongly recommend reviewing the product documentation section that covers this topic. Another great source of information is the book Getting Started with Oracle Event Processing 11g, written by some of the folks behind the OEP product at Oracle.

Testing a Simple OEP Application

Consider the following scenario: an event-driven application written in OEP, using a custom inbound adapter, generates a random number of events every five seconds. Those events flow through simple pass-through channels and are queried by a processor with the following CQL statement: SELECT * FROM inboundChannel [NOW]. Finally, those events are printed out to the console by a custom outbound adapter. The picture below shows the EPN of this application.

Aiming to generate events in parallel, the first version of the custom inbound adapter was written to perform its work using regular Java threads. The listing below shows the custom adapter implementation:

package com.oracle.fmw.ateam.soa;

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.bea.wlevs.ede.api.RunnableBean;
import com.bea.wlevs.ede.api.StreamSender;
import com.bea.wlevs.ede.api.StreamSource;

public class CustomInboundAdapter implements RunnableBean, StreamSource {

	private static final int NUMBER_OF_THREADS = 4;

	private StreamSender streamSender;
	private ExecutorService executorService;
	private boolean suspended;

	@Override
	public void run() {

		executorService = Executors.newFixedThreadPool(NUMBER_OF_THREADS);

		for (int i = 0; i < NUMBER_OF_THREADS; i++) {

			RegularJavaThread thread = new RegularJavaThread();
			thread.setName(RegularJavaThread.class.getSimpleName() + "-" + i);

			executorService.execute(thread);

		}

	}

	@Override
	public synchronized void suspend() throws Exception {

		executorService.shutdown();
		this.suspended = true;

	}

	@Override
	public void setEventSender(StreamSender streamSender) {
		this.streamSender = streamSender;
	}

	private class RegularJavaThread extends Thread {

		@Override
		public void run() {

			final Random random = new Random(System.currentTimeMillis());
			int count = 0;

			try {

				while (!suspended) {

					count = random.nextInt(10);

					for (int i = 0; i < count; i++) {

						streamSender.sendInsertEvent(new Tick(getName()));

					}

					Thread.sleep(5000);

				}

			} catch (Exception ex) {

				ex.printStackTrace();

			}

		}

	}

}

As you can see in the listing above, a regular Java thread is created to perform the main logic of the adapter, which is to generate a random number of events every five seconds. Note that the thread is supposed to finish its work when the OEP application changes its status to suspended. This can happen in two ways: when the application is uninstalled from the server or when the user intentionally sets its status to suspended through the OEP Visualizer.

According to the run() method of the custom inbound adapter, four instances of the thread are scheduled to work using a java.util.concurrent.ExecutorService. This code runs perfectly well and produces the desired behavior, as you can see in the console output listed below:

<Aug 6, 2014 9:10:01 PM EDT> <Notice> <Deployment> <BEA-2045000> <The application bundle "wm-driven-threads-in-oep" was deployed successfully to file:/oracle/user_projects/domains/oep-development/defaultserver/applications/wm-driven-threads-in-oep/wm-driven-threads-in-oep.jar with version 1407373801915> 
<Aug 6, 2014 9:10:02 PM EDT> <Notice> <Spring> <BEA-2047000> <The application context for "wm-driven-threads-in-oep" was started successfully> 
Tick [uuid=13074bae-554d-49d1-934f-c2cadfadcaec, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-0]
Tick [uuid=377a896e-d26b-4a8c-ad6a-585495ce78c2, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-3]
Tick [uuid=488ea3c1-0515-471c-ba46-dd07f446437f, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-0]
Tick [uuid=b75c5a24-a82d-49d2-87df-49f3900059e1, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-3]
Tick [uuid=69df06ca-a57a-494a-b20b-976047537fce, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-0]
Tick [uuid=a5855897-eeef-44d1-85d8-480b4745ac20, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-3]
Tick [uuid=f71308ef-797a-4457-a5a6-7f91fe10d773, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-3]
Tick [uuid=2164390b-8c74-4f4a-942e-7c60a9656c57, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-0]
Tick [uuid=f82a2376-ce56-41e9-90a3-119e4e827ff5, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-3]
Tick [uuid=660e074e-991f-4082-8210-8960582aa129, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-0]
Tick [uuid=32b2a37e-235d-460d-82b2-8537dddc7703, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-2]
Tick [uuid=4bea1680-0d1e-4f7a-9ae4-e78a23bb1229, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-2]
Tick [uuid=215ef396-2210-40b5-88f1-4e1c86c4b93f, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-2]
Tick [uuid=27cfebbd-649c-4f00-ac8d-33948754298b, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-2]
Tick [uuid=cffdab2e-daca-4145-8e37-b808a5f4b412, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-2]
Tick [uuid=35a8ea83-130e-48ac-bd47-20bed9c20d82, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-1]
Tick [uuid=5c61c10a-e61e-42d6-8931-fbf3ee092b6f, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-1]
Tick [uuid=e5d070e2-bdb9-4fb9-bc12-7edf0165d258, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-1]
Tick [uuid=a3488aba-00ee-4dd0-93fa-29517f69ce7b, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-1]
Tick [uuid=1669f735-35a4-4b2f-bad3-02117ef94991, dateTime=Wed Aug 06 21:10:07 EDT 2014, threadName=RegularJavaThread-1]

If the code runs perfectly well, you are probably wondering what’s wrong with the current implementation. The next section will illustrate the problem.

Problem: Server Cannot Control the Allocated Threads

With the OEP application running on the server, watch the threads running on top of the JVM. There are many ways to accomplish this, from taking thread dumps to using more sophisticated tools. I chose to use the JRockit Mission Control which provides a nice view of the active threads:

As expected, there are four threads allocated in the JVM executing the work defined in the code. Now let’s consider for a moment the idea of the developer exposing the number of threads as an instance property that could be changed through configuration. Just like the concurrent-consumers property of the JMS adapter, this is a perfectly plausible scenario. Let’s not forget that the developer could also hard-code the number of threads, making it impossible to change the value even at the configuration level.

The development practices explained above can lead to some potential problems. What if the developer defines a number of threads so high that it affects the performance of other OEP applications running in the same server? Or worse, what if the developer defines a number of threads so high that the JVM itself cannot comply due to a lack of resources?

To protect the health of the server and prevent those situations from happening, administrators can use work managers. A work manager is an OEP server feature that controls threading behavior using constraints. Once created in the server, the administrator can associate the work manager with one or multiple OEP applications, but this is something that must be done manually. When you create a domain for the first time, a default work manager called JettyWorkManager is created, and it is primarily used by the Jetty engine, a Java web server used to deploy HTTP servlets and static resources. You can create and/or change work managers in the domain configuration file found in the config folder of your server.

Back to the scenario, let’s assume that a work manager named wm-driven-threads-in-oep is created in the domain configuration file and limits the number of threads to a minimum of one and a maximum of two. The listing below shows how this work manager should be defined.

<?xml version="1.0" encoding="UTF-8"?>
<n1:config xsi:schemaLocation="http://www.bea.com/ns/wlevs/config/server wlevs_server_config.xsd"
xmlns:n1="http://www.bea.com/ns/wlevs/config/server"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

   <netio>
      <name>NetIO</name>
      <port>9002</port>
   </netio>

   <netio>
      <name>sslNetIo</name>
      <ssl-config-bean-name>sslConfig</ssl-config-bean-name>
      <port>9003</port>
   </netio>

   <work-manager>
      <name>wm-driven-threads-in-oep</name>
      <min-threads-constraint>1</min-threads-constraint>
      <max-threads-constraint>2</max-threads-constraint>
   </work-manager>

   <work-manager>
      <name>JettyWorkManager</name>
      <min-threads-constraint>5</min-threads-constraint>
      <max-threads-constraint>10</max-threads-constraint>
   </work-manager>

   <jetty>
      <name>JettyServer</name>
      <network-io-name>NetIO</network-io-name>
      <work-manager-name>JettyWorkManager</work-manager-name>
      <scratch-directory>Jetty</scratch-directory>
      <secure-network-io-name>sslNetIo</secure-network-io-name>
   </jetty>

   <!-- the rest of the file was dropped for better clarity -->

</n1:config>

Work managers can be associated with one or multiple applications, and can also be associated with one particular component of an application. Having said that, keep in mind that sharing the same work manager across multiple applications can limit throughput, since the max-threads-constraint value will be the same for all of them, forcing some applications to queue requests while they wait for available threads in the pool; in the worst case that wait never ends. As a best practice, if you run more than one application per server, consider creating one work manager for each application so that its thread boundaries can be managed individually.

Once created in the domain configuration file, the work manager can be further configured using the OEP Visualizer:

Now we are all set. We can restart the server and test the OEP application again. With a work manager in place limiting the maximum number of threads to only two, even if the code tries to allocate four threads there should be only two threads running. That’s the theory, anyway. Unfortunately, the reality is slightly different. Back in JRockit Mission Control we still see four threads allocated in the JVM:

So, what is really happening here? Since those threads were created using standard JDK techniques, the server has no control over them. That is why, even after creating a work manager and associating it with the OEP application, the constraints were ignored. Also, the main adapter code runs on its own separate thread managed by the server, as a result of implementing the com.bea.wlevs.ede.api.RunnableBean interface. That means that the spawned threads have no relationship with the adapter thread, forcing them to behave like daemon threads: instead of having their life cycle associated with the application, they have their life cycle associated with the JVM. The problem becomes even worse if the spawned threads hold references to objects belonging to the adapter thread. If for some reason the adapter thread dies, its garbage is retained in memory because the spawned threads still hold references to its objects, causing memory leaks that could lead to out-of-memory errors.

With this problem in mind, there is clearly a need for a technique that allows threads created from an adapter implementation to clean up their garbage independently of the run() method logic, to respect the constraints imposed by an associated work manager, and to be flexibly marked as daemon or non-daemon depending on the need. The solution to this problem is explored in the next section.

Solution: Implementing the WorkManagerAware Interface

When designing custom adapters, if you need to spawn threads to perform some work and need server control over those threads, you can use the com.bea.wlevs.ede.spi.WorkManagerAware interface. It is available through the OEP API, and all you need to do is make sure that your adapter class implements this interface. This allows your adapter class to receive a reference to the work manager associated with the application. From this work manager reference, you can request work scheduling for threads in a safe manner. The listing below shows the updated custom adapter implementation.

package com.oracle.fmw.ateam.soa;

import java.util.Random;

import com.bea.wlevs.ede.api.RunnableBean;
import com.bea.wlevs.ede.api.StreamSender;
import com.bea.wlevs.ede.api.StreamSource;
import com.bea.wlevs.ede.spi.WorkManagerAware;
import commonj.work.Work;
import commonj.work.WorkManager;

public class CustomInboundAdapter implements WorkManagerAware, RunnableBean,
		StreamSource {

	private static int NUMBER_OF_THREADS = 4;

	private StreamSender streamSender;
	private WorkManager workManager;
	private boolean suspended;

	@Override
	public void run() {

		String threadName = null;

		try {

			for (int i = 0; i < NUMBER_OF_THREADS; i++) {

				threadName = WorkManagerBasedThread.class.getSimpleName() + "-" + i;

				workManager.schedule(new WorkManagerBasedThread(threadName));

			}

		} catch (Exception ex) {

			ex.printStackTrace();

		}

	}

	@Override
	public synchronized void suspend() throws Exception {
		this.suspended = true;
	}

	@Override
	public void setEventSender(StreamSender streamSender) {
		this.streamSender = streamSender;
	}

	@Override
	public void setWorkManager(WorkManager workManager) {
		this.workManager = workManager;
	}

	private class WorkManagerBasedThread implements Work {

		private String threadName;

		public WorkManagerBasedThread(String threadName) {
			this.threadName = threadName;
		}

		@Override
		public void run() {

			final Random random = new Random(System.currentTimeMillis());
			int count = 0;

			try {

				while (!suspended) {

					count = random.nextInt(10);

					for (int i = 0; i < count; i++) {

						streamSender.sendInsertEvent(new Tick(threadName));

					}

					Thread.sleep(5000);

				}

			} catch (Exception ex) {

				ex.printStackTrace();

			}

		}

		@Override
		public boolean isDaemon() {
			
			// This is how you inform the OEP runtime
			// whether this thread should behave as a
			// daemon, without losing the association
			// with the work manager...
			
			return false;
			
		}

		@Override
		public void release() {
			
			// Here you can put the logic to clean up
			// any resources allocated by the thread,
			// in a safe and guaranteed manner...
			
		}

	}

}

As you can see in the listing above, no changes were made to the thread logic. The main difference is that the threads are scheduled through the work manager reference. Also, the threads now have the isDaemon() and release() callback methods, which can be used to change how the thread behaves regarding its life cycle and how its resources are cleaned up. After installing the new version of the OEP application, it is possible to see that the constraints imposed by the work manager are now being respected:

The console output listed below also shows that there are only two threads now performing the work:

<Aug 7, 2014 1:47:20 PM EDT> <Notice> <Deployment> <BEA-2045000> <The application bundle "wm-driven-threads-in-oep" was deployed successfully to file:/oracle/user_projects/domains/oep-development/defaultserver/applications/wm-driven-threads-in-oep/wm-driven-threads-in-oep.jar with version 1407433640951> 
<Aug 7, 2014 1:47:23 PM EDT> <Notice> <Spring> <BEA-2047000> <The application context for "wm-driven-threads-in-oep" was started successfully> 
Tick [uuid=a33b7f84-2489-4737-9e33-18cabd6cca9f, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-0]
Tick [uuid=c84e2f0c-37fa-4dca-a897-4f75fa55a7b9, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-0]
Tick [uuid=74023209-4964-4181-8f18-40eef8ef2476, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-0]
Tick [uuid=f9ccce52-fee6-49ad-91cc-6d9023d2c967, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-0]
Tick [uuid=a065ebf9-1320-4cc0-a9c3-b2e004398931, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-0]
Tick [uuid=43e4db5b-72a1-4760-8205-b92a8a784cbb, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-1]
Tick [uuid=3bf7cbc3-09b9-4f75-99e4-d223b1a400c3, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-1]
Tick [uuid=48e2ab1f-4806-4d3b-8474-10fdb51d346b, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-1]
Tick [uuid=413803e2-c5c2-4f49-879e-06c356bca0c0, dateTime=Thu Aug 07 13:47:23 EDT 2014, threadName=WorkManagerBasedThread-1]

You can download the final implementation of the project used in this article here.

One-Way Authentication Policies in OSB


Introduction

Sometimes, using OSB, it may be necessary to attach credentials, such as username and password, on an outbound SOAP request to a remote server for authentication. While the OWSM policy store available with WebLogic Server provides policies that can inject username and password (e.g. “oracle/wss_username_token_over_ssl_client_policy”), OSB causes OWSM policies to be enforced on the outbound request message as well as the inbound response. This article will describe the steps to create the necessary WLS 9 policy and attach it to an OSB proxy service so that it is only enforced on the outbound request.

Consider the following business requirements:

  1. Use OSB to communicate with an external Web Service over https.
  2. The external Web Service requires authentication using a username and a time-sensitive, digested password in the SOAP header.
  3. The external Web Service also requires a timestamp in the SOAP header.

Technology versions:

  • Oracle SOA Suite 11.1.1.7
  • Oracle Service Bus 11.1.1.7
  • WebLogic Server 10.3.6
  • Oracle Enterprise Pack for Eclipse 11gR1 (11.1.1.8)
  • WLS 9

Main Article

Summary of the steps to accomplish this task

Step 1. Enable PasswordDigest and https certificate exchange in WLS
Step 2. Create Custom Policy and ServiceAccount
Step 3. Attach the Policy and Service Account to the OSB Service

Step 1 Enable PasswordDigest and https certificate exchange in WLS

1. In the WLS Admin Console, navigate to Security Realms > myrealm > Providers > DefaultIdentityAsserter > Common
2. Shuttle “wsse:PasswordDigest” to “Chosen” in order to enable processing of Passwords using digest when a WLS 9 policy configured with PasswordDigest is used on an OSB service.
3. Shuttle “X.509” to “Chosen” in order to enable the server to authenticate other servers that identify themselves with certificates.

1_EnablePasswordDigestsInAssert

Note: The certificates will have to be imported into the server’s keystore if they are self-signed, or the CA is not recognized by the keystore (trust). For example:

From the keystore directory (e.g. c:\Oracle\Middleware\wlserver_10.3\server\lib), assuming the certificate is here as well:
keytool -import -trustcacerts -file myprivatecertname.pem -alias myprivatecertname -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

4. Still in the WLS Admin Console, navigate to Security Realms > myrealm > Providers > DefaultAuthenticator, check the “Enable Password Digests” checkbox, and click Save:

2_EnablePasswordDigestsInAuth

5. Restart the AdminServer as necessary.

 

Step 2 Create Custom Policy and ServiceAccount

Note: These instructions are for Eclipse, but the same process can be followed in the OSB Console.

1. Create a folder named ‘policy’ (this name is arbitrary) in the OSB Project to store the policy and service account.
2. Create a policy file by selecting the folder location and use the context menu > New > WS-Policy File:

3_CreatingWsPolicy

3. Name the file: “wls9_username_token_digested_password_timestamp_client_policy”.
Paste the following into the contents of the file and save:

<wsp:Policy
	xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
	xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
	xmlns:wssp="http://www.bea.com/wls90/security/policy"
	wsu:Id="wls9_username_token_digested_password_timestamp_client_policy">
	<wsp:ExactlyOne>
		<wsp:All>
			<!--  Identity Assertion -->
			<wssp:Identity>
				<wssp:SupportedTokens>
					<!-- Use UsernameToken for authentication -->
					<wssp:SecurityToken IncludeInMessage="true" TokenType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#UsernameToken">
						<!-- Use a digested password -->
						<wssp:UsePassword Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest"/>
					</wssp:SecurityToken>
				</wssp:SupportedTokens>
			</wssp:Identity>
			<!-- Timestamp -->
			<wssp:MessageAge Age="300"/>
		</wsp:All>
	</wsp:ExactlyOne>
</wsp:Policy>

4. Create a Service Account (these are the credentials the policy will inject into the SOAP header when calling the remote web service) by right-clicking the destination folder (it can be the same as the location of the policy file) > New > Service Account
5. Name the file, “ProtectedService-Credentials”, choose “Static”, and set the User Name and Password, and Save.

4_ServiceAccountConfig

6. Storing the password in a plain-text file is not ideal, so an alternative to Step 5 is to create this resource directly in the OSB Console after the OSB project has been deployed, so that it is protected by authenticated access to the console.

 

Step 3 Attach the Policy and Service Account to the OSB Service
1. Go to the Policy configuration tab of the Business Service
2. Select “From Pre-defined Policy or WS-Policy Resource” (we are using a WS-Policy Resource).
3. Select the “Request Policies” node under the appropriate operation, “process” in the case of this example (there may be more than one in your implementation, in which case, the following steps should be repeated for each operation).
4. Click “Add”. We are attaching it ONLY to the request and NOT to the response. This is the very reason we are using a WLS 9 policy attachment instead of using an OWSM policy attachment. OWSM currently only allows policies to be applied at the service level, meaning both the request and the response require the same policy enforcement.
5. Click the “Browse” button, select the policy created earlier, and click “OK”:

5_AttachingWsPolicy

6. This is how it appears when you’ve successfully attached the policy:

6_AfterPolicyIsAdded

7. Now that the policy is attached, we need to specify the username and password credentials to use when enforcing this policy. Click the “Security” tab, which will only be present after the Policy has been added.
8. Browse to the service account created earlier, “ProtectedService-Credentials”, and click “OK”:

7_ChoosingServiceAccount

Expected Result
Once you have deployed the OSB Proxy, you can test the service using the OSB Console or a SOAP client. If testing using the OSB Console, test directly against the business service so you can see the request before and after the policy is applied (see the screenshot below). If testing using another SOAP client, be sure to enable tracing on the business service (where the policy is attached) and call the proxy end of the service. In the server log, you should now see the addition of a wsse:Security node in the soap:Header, and it should contain UsernameToken and Timestamp child elements. The UsernameToken should contain three child elements: Password (which should not be the plain-text password), Nonce, and Created.

8_ExpectedResult

References:
Securing Oracle Service Bus with Oracle Web Services Manager
Using WS-Policy in Oracle Service Bus Proxy and Business Services

Oracle BPM 12c just got Groovy – A Webcenter Content Transformation Example


Introduction

On the 27th June 2014 we released Oracle BPM 12c which included some exciting new features.
One of the less talked about new features is the support for BPM Scripting, which incorporates the Groovy 2.1 compiler and runtime.

So what is Groovy anyway?

Wikipedia describes Groovy as an object-oriented programming language for the Java platform and you can read the definition here.

In short, though, it is a Java-like scripting language that is simple to use. If you can code a bit of Java then you can write a bit of Groovy, and most of the time only a bit is required.

If you can’t code in Groovy yet, don’t worry: you can just code in Java and that works most of the time too.

With great power comes great responsibility?

The benefits and possibilities of being able to execute snippets of Groovy code during a BPM process execution are almost limitless. Therefore we must be responsible in its use, decide whether it makes sense from a BPM perspective in each case, and always implement best practices that leverage the strengths of the BPM execution engine infrastructure.

If you can easily code, then it is easy to write code to do everything. But this goes against what BPM is all about. We must always first look to leverage the powerful middleware infrastructure that the Oracle BPM execution engine sits on, before we look to solve our implementation challenges with low level code.

One benefit of modelled BPM over scripting is Visibility. We know that ideally BPM processes should be modelled by the Business Analysts and Implemented by the IT department.

Business process logic should therefore be modelled into the business process directly and not implemented as low-level code that the business will not understand nor be aware of at runtime. In this manner the logic always stays easily visible and understood by the business. Overuse of logic in scripting will quickly turn the solution into one that is hard to debug or understand in problem resolution scenarios.

If one argues that the business logic of a business process cannot be modelled directly in the BPM process, then one should revisit the business process analysis and review whether the design really makes sense and can be improved.

 

What could be a valid use case for Groovy in BPM?

One valid use case for Groovy scripting is complex and dynamic data transformation. In Oracle BPM 12c we have the option to use the following mechanisms for transformations:

Data Association

Good for:

  • Top level transformations of the same or similar types
  • Simple transformations of a few elements
  • Lists and arrays
  • Performance

XSL transformation

Good for:

  • Large XML schema elements
  • Assignment of optional XML schema elements and attributes
  • Lists and arrays
  • Reuse

Groovy Scripting

Good for:

  • Generic XML schema types like xsd:any
  • Dynamic data structures
  • Complex logic
  • Error handling
  • Reuse

Java callouts using a mediator or Spring component

Good for:

  • Pure Java implementation requirements
  • Large batch processing

Each method has its own benefits and downsides, but in combination you can transform any payload. What to use is largely a question of:

  • Best practice within your organization
  • Best practice for BPM
  • The level of organized structure of your schemas

In practice, an efficiently implemented BPM process will use a combination of associations, XSLT and BPM scripts.

 

Tip: Always try to solve transformation tasks using a data association first before turning to XSLT or Groovy. Use the right tool in your toolkit for the right job.

 

 Upgrading from BPM 10g

The inclusion of BPM scripting will also aid in the upgrade from BPM 10g processes. This should be seen as an opportunity to review and improve the implementation as opposed to blindly copying the existing functionality. This is a process that is beyond the scope of this post.

 

A Complex and Dynamic Webcenter Content SOAP Example

Invoking very generic SOAP services is one instance where Groovy can save the day. When a SOAP service is well defined, it is very easy to create a mapping using the XSL or data association mappers. But what if the element definition is wide open, through the use of schema elements like xsd:any, xsd:anyType or xsd:anyAttribute?

Solving this transformation in XSLT could be complex, with lots of hand-written, harder-to-read code.

The GenericRequest of the Webcenter Content SOAP service is an example of such a generic SOAP service. The flexibility of its use means that the payload required is very dynamic.

The actual schema element looks like this.

 

content.xsd

 

Now consider the situation where this payload for the GenericRequest needs to look like this and could potentially have lots of required logic.

 

soapui

This might be accomplished using a complex, hand-coded XSLT transformation.

Alternatively, if you don’t have any XSLT world champions on the team, anyone on your development team who can code Java can do this easily with Groovy scripting.

Building the Transformation Demo

To demonstrate the transformation capabilities of Groovy scripting we are going to create a simple synchronous BPM process based on the above use case.

We send an Incident as a request and as a response will receive the transformed GenericRequest. In this manner it will be easy for us to see the whole transformed payload that we would normally send to Webcenter Content.

The finished process looks like this.

 

FinishedProcess

Create a new BPM Application and define Data Objects and Business Objects

We will create a new BPM application and define the:

  • Input arguments as an Incident
  • Output argument as a Webcenter GenericRequest

 

1) Download the schema zipfile called docs and extract to a local location. Then open Studio (JDeveloper) and from the top menu choose Application->New->BPM Application

NewApplication

2) Click OK, use the application name GroovyDemoApp and click Next

AppName

3) Use the Project Name GroovyDemo, then click Next

ProjectName

4) Now choose the Synchronous Service, name the process GroovyDemoProcess and click Next

SyncProcess

Now we need to define and add the input and output arguments. Here we use some predefined schema elements in schema files that I provide. Firstly we define these as Business Objects, then we use these Business Objects as a definition for our arguments and Data Objects in the process itself.

 

5) Click on the green add icon to add a new argument and name the argument incidentARG

incidentARG

6) Choose Browse under Type and then click the Create Business Object icon

CreateBO

7) Use the name IncidentBO and click the magnify icon to choose a Destination Module

DestModule2

8) Click the Create Module icon and use the name Domain

Domain

9) Click OK twice to return to the Create Business Object window

10) Select the checkbox Based on External Schema and the magnifying glass icon to choose a Type

TypeChooser

11) Click the Import Schema File icon, select the incidents.xsd schema file and click OK

12) Click OK to localize the schema files to your composite project

localize

13) Select the Incident element from the Type Explorer and click OK twice to return to Browse Types

type_explorer

14) Select the IncidentBO type and click OK

IncidentBOSelect

15) To complete the In argument creation click OK

InArgumentFinal

16) Now click the Output tab to define the GenericRequest type as an Output

InArgComplete3

17) Using the same procedure as before, create an output argument using the following values:

Output Argument Name: GenericRequestARG
Type: GenericRequestBO
Schema Filename: content.xsd
Module: Domain
Element: GenericRequest

OutArg5

18) Click Finish to complete the initial definition of the GroovyDemoProcess BPM process.

DefinitionProcess

We have created a GroovyDemoProcess synchronous BPM process that has an Incident as a request and a GenericRequest as a response.

Next we need to define process variables based on the business objects that we have already created. These will be used to store the payload data in the BPM process.

19) Ensure the GroovyDemoProcess is selected in the Application Navigator, then in the Structure window right-click the Process Data Objects icon. Use the name incidentDO and select IncidentBO as the Type.

20) Similarly, create another process data object called genericRequestDO of Type GenericRequestBO

GenericRequestDO

Performing Data Associations of the Data Objects

Now we have to assign the payload of the incidentARG argument to the data object we have just created. We do this in the Catch activity.

 

21) Right-click the Start catch activity and select Properties. Select the Implementation tab and click the Data Associations link.

 

DataAssociations

 

Now we need to assign the incidentARG argument to the incidentDO data object.

Since we have defined these to be the same type it is easy. All we need to do is a top level assignment and not even worry about optional sub-elements.

21) Drag from the incidentARG to the incidentDO nodes and click OK twice to complete and close the Start node property definition.

Now we need to associate the genericRequestDO data object with the response. This is done in the Properties of the Throw End node.

22) Create a Copy association from the genericRequestDO to the GenericRequestARG nodes.

Defining the Groovy Expression in the BPM Script

Now at last we are ready to start defining the Groovy code that will be responsible for the transformation.

Drag a Script Activity and place it just after the Start node. Rename it to Transform Request.

transform

Transform2

23) Right-click the Transform Request Script Activity and select Go To Script.

GoToScript

Tip: The Script Activity must not have any implementation defined when it is being used for Groovy scripting. It functions as a container for the Groovy script.

Before we can start scripting we have to define the imports for the script, similar to what we would do in Java. First let's take a look at the Scripting Catalog to see what is already there. This will help us understand what we need to import.

24) In the Scripting Catalog expand the oracle->scripting nodes to see what is already available to us.

Here we can see the Business Objects we have already created and all the elements that are included in the schema files that we imported.

ScriptingCatalog

Now we need to recall the format of the GenericRequest that is the target data structure of our transformation. We need to know this so we can choose the correct imports for our Groovy script.

soapui

Above we can see that a GenericRequest contains the following elements (a rough sketch of the resulting payload follows the list):

 

  • Service–>Document–>Field
  • Service–>Document–>File–>Contents
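Since the screenshot is not reproduced here, a rough sketch of the target payload is shown below. It is reconstructed from the element names above and the values set later in the script; the namespace and exact attribute names are assumptions and should be verified against content.xsd.

<!-- Sketch only: approximate shape of the GenericRequest we are building -->
<GenericRequest webKey="cs" xmlns="http://www.oracle.com/UCM">
   <Service IdcService="CHECKIN_UNIVERSAL">
      <Document>
         <!-- One Field element per content server metadata field -->
         <Field name="dDocName">unique-content-id</Field>
         <Field name="dDocTitle">There is a cow in the road</Field>
         <!-- ...remaining Field elements... -->
         <File name="primaryFile" href="hello.txt">
            <Contents>aGVsbG8KCg==</Contents>
         </File>
      </Document>
   </Service>
</GenericRequest>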

 

25) Now return to the Scripting tab and enter the following in the Script Editor window. As you can see, these lines are just comments and console output; the output will appear directly in the WebLogic Server diagnostic log.

 

//You can add comments like this
//You can print to the console during your development/testing procedures
println("Starting transformation of Incident to Generic Request")

 

Tip: Printing to the console log like this should only be used in development scenarios and should be removed for production. Alternatively, we could add some logic to log messages conditionally, for example based on a payload value or a composite MBean setting; a hypothetical sketch follows.
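A minimal sketch of such conditional logging, assuming (purely for illustration) that a debug marker is carried in the incident subject:

//Hypothetical sketch: only log when a debug marker is present in the payload
if (this.incidentDO.subject != null && this.incidentDO.subject.contains("[DEBUG]")) {
    println("Starting transformation of Incident to Generic Request")
}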

 

Selecting the Scripting Imports

Now we need to add in the imports for the elements that we will be using.

26) Click the Select Imports button on the top right of the editor to open the Select Imports window

SelectImports

27) Click the green Add icon and click with the mouse cursor in the new import row that appears.

SelectImports2

28) Type oracle. (oracle and a dot).

OracleDot

The context menu will now open up to help you find the correct package path.

ConextMenu

Tip: Do not use the cursor keys until you have clicked inside the context menu with your mouse, since doing so will cause the context menu to disappear.

29) Now use the cursor keys to choose oracle.scripting.xml.com.oracle.ucm.type.Service, or type it in directly, and click the Add icon to add another import.

Imports

30) Add the following imports and click OK

 

oracle.scripting.xml.com.oracle.ucm.type.Service
oracle.scripting.xml.com.oracle.ucm.type.File
oracle.scripting.xml.com.oracle.ucm.elem.Field
oracle.scripting.xml.com.oracle.ucm.type.Service.Document

 

Writing the Groovy Expression

31) Return back to the Groovy Script editor window.

 

Now we need to define the classes we need to use to build our GenericRequest. We define a Service, Document, Field, File and two arrays for the lists of files & fields.

 

Tip: In essence, here we are just instantiating POGOs (plain old Groovy objects) that are a Groovy representation of our GenericRequest element.

 

32) Now enter the following code after the debug code you entered earlier:

 

//Define the message element types for data population

//The Service element
Service service = new Service()
//The Document element
Document document = new Document()
//The File element (base64 message embedded attachment)
File file = new File()
//The Field element
Field field = new Field()
//An array of type Field
List<Object> fields = new ArrayList()
//An array of type File
List<Object> files = new ArrayList()

 

We have now created our POGO objects, so the next step is to populate them with real data. Since we are transforming from an Incident to a GenericRequest, most of our data comes from the data object incidentDO, which we populated from the argument.

We will start by creating each of the individual Field elements and adding them to the array, since these constitute the bulk of our message.

Our first field looks like this.

 

FirstField

 

It contains an XML Schema attribute called name and a value, which is the internal BPM process ID of the in-flight process.

Type field.set (field dot set) in the expression editor to show the context list of the available methods for the field object. We can see that the methods to set and get data from the field POGO already exist.

 

FieldDot

32) Type in the following expression to populate the first Field element and add it to the array at position 0 (appears first in the message)

 

//sDocName element containing BPM process instance ID
field.setName("dDocName")
field.setInner_content(predef.instanceId)
fields.add(field)

 

Tip: We could get the BPM process instance ID by executing an XPath expression in a data association. However, BPM 12c conveniently provides several predefined variables available from predef, some of which can also be updated in a Groovy expression. See the full list here.

 

The next field that we need to populate in the GenericRequest is the dDocTitle, which comes from the incident subject.

The transformed element looks like this.

 

SecondField

 

This time we get the value from the process data object incidentDO by directly calling the get method.

 

33) Add the following expression to the end of the script.

 

//dDocTitle from the incident subject
field = new Field()
field.setName("dDocTitle")
field.setInner_content(this.incidentDO.subject)
fields.add(field)

 

Now this is really straightforward, right? Actually, with the power of Groovy expressions it really is.

Now imagine that you wanted to implement some complicated if/then logic to populate some elements only conditionally. All you need to do is write some simple logic into the script (see the sketch below). Perhaps you need to format some dates, concatenate some string values or convert some data types; again, easy as pie.
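For illustration, a minimal sketch of such conditional logic is shown below; the rule about xComments is purely hypothetical and is not part of the final script built later in this article.

//Hypothetical sketch: populate xComments only when a description was supplied
field = new Field()
field.setName("xComments")
if (this.incidentDO.description != null && !this.incidentDO.description.trim().isEmpty()) {
    field.setInner_content("Filed automatically from: " + this.incidentDO.description)
}
fields.add(field)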

Consider the xIncidentDate field below. Here we get a date and convert it into the format Webcenter Content requires, in just a few lines.

 

ConvertDate

34) Now add the remaining field definitions to the expression.

 

field = new Field()
field.setName("dDocAuthor")
field.setInner_content(this.incidentDO.reporter)
fields.add(field)
   
field = new Field()
field.setName("dDocAccount")
field.setInner_content("incident");
fields.add(field)
  
field = new Field()
field.setName("dSecurityGroup")
field.setInner_content("webcenter")
fields.add(field)
  
field = new Field()
field.setName("dDocType")
field.setInner_content("Incident")
fields.add(field)
  
field = new Field()
field.setName("xClbraRoleList");
field.setInner_content(":CaseMgr(RW),:CaseWorker(RW),:ActionOfficer(RW)");
fields.add(field)
  
field = new Field()
field.setName("xClbraUserList");
field.setInner_content("&${this.incidentDO.getReporter()}(RW)");
fields.add(field)
  
field = new Field()
field.setName("xIdcProfile")
field.setInner_content("IncidentRecord")
fields.add(field)
  
field = new Field()
field.setName("xComments")
fields.add(field)
  
field = new Field()
field.setName("xCitizenName")
field.setInner_content(this.incidentDO.name);
fields.add(field)
  
field = new Field()
field.setName("xEMail")
field.setInner_content(this.incidentDO.email);
fields.add(field)
  
field = new Field()
field.setName("xCity")
field.setInner_content(this.incidentDO.city)
fields.add(field)
  
field = new Field()
field.setName("xGeoLatitude")
field.setInner_content(this.incidentDO.geoLatitude)
fields.add(field)
  
field = new Field();
field.setName("xGeoLongitude");
field.setInner_content(this.incidentDO.geoLongitude);
fields.add(field);

field = new Field()
field.setName("xIncidentDate")
Calendar nowCal = this.incidentDO.getDate().toGregorianCalendar()
Date now = nowCal.time
String nowDate = now.format('M/d/yy HH:mm aa')
field.setInner_content(nowDate)
fields.add(field)
  
field = new Field()
field.setName("xIncidentDescription")
field.setInner_content(this.incidentDO.description)
fields.add(field)
  
field = new Field()
field.setName("xIncidentStatus")
field.setInner_content(this.incidentDO.incidentStatus)
fields.add(field);
  
field = new Field()
field.setName("xIncidentType")
field.setInner_content(this.incidentDO.incidentType)
fields.add(field)
  
field = new Field();
field.setName("xLocationDetails")
field.setInner_content(this.incidentDO.locationDetails)
fields.add(field)
  
field = new Field()
field.setName("xPhoneNumber")
field.setInner_content(this.incidentDO.phoneNumber.toString())
fields.add(field)
  
field = new Field()
field.setName("xStreet")
field.setInner_content(this.incidentDO.street)
fields.add(field)
  
field = new Field();
field.setName("xStreetNumber");
field.setInner_content(this.incidentDO.streetNumber);
fields.add(field);
  
field = new Field()
field.setName("xPostalCode")
field.setInner_content(this.incidentDO.getPostalCode());
fields.add(field)
  
field = new Field()
field.setName("xTaskNumber")
field.setInner_content(this.incidentDO.taskNumber)
fields.add(field)

 

The next element to add is the embedded base64 attachment. We add this in a similar fashion.

 

34) Add the following expression.

 

file.setContents(this.incidentDO.attachment.file)
file.setName("primaryFile")
file.setHref(this.incidentDO.attachment.name)
files.add(file)

 

Now we have nearly finished our Groovy script. All we need to do is:

 

  • Add the arrays to the Document element
  • Add the Document element to the Service element
  • Add the Service to the process data object genericRequestDO

 

35) Add the following expression for the Document, Service and genericRequestDO

//Add Field and Files
document.setField(fields)
document.setFile(files)

//Add Document to Service
service.setDocument(document)
service.setIdcService("CHECKIN_UNIVERSAL")

//Add the Service element to data object genericRequestDO
genericRequestDO.setWebKey("cs")
genericRequestDO.setService(service)

 

The BPM script is now complete and your Studio Application should look similar to this.

Deploying the Process

Now we need to deploy the BPM process to our BPM server so we can test it. We are going to deploy to the new BPM 12c Integrated WebLogic Server that comes with Studio, but another server can be used if preferred.

Tip: If this is the first deployment to the Integrated WebLogic Server, Studio will first ask for domain parameters and create the domain before deploying.

 

36) In the Application Explorer Right-click the GroovyDemo project and select deploy–>GroovyDemo–>Deploy to Application Server–>Next–>Next–>IntegratedWeblogicServer–>Next–>Next–>Finish

 

deploy1

deploy2

The deployment log should complete successfully.

Testing the Deployed Process

Now it is time to test the process. We will invoke our BPM process through the web service test page.

37) Open a browser window, go to the Web Services Test Client page at http://localhost:7101/soa-infra/ and log in with the weblogic user. Click on the Test GroovyDemoProcess.service link.

38) Click on the start operation.

teststartopp

39) Click on the Raw Message button to enter a raw XML SOAP payload.

raw

In the text box paste the following sample Incident payload.

 

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:gro="http://xmlns.oracle.com/bpmn/bpmnProcess/GroovyDemoProcess" xmlns:v1="http://opengov.com/311/citizen/v1">
   <soapenv:Header/>
   <soapenv:Body>
      <gro:start>
         <v1:Incident>
            <v1:Name>Joe Bloggs</v1:Name>
            <v1:Email>joe.blogs@mail.net</v1:Email>
            <v1:PhoneNumber>12345</v1:PhoneNumber>
            <v1:Reporter>03a7ee8a-ae3f-428b-a525-7b50ac411234</v1:Reporter>
            <v1:IncidentType>Animal</v1:IncidentType>
            <v1:IncidentStatus>OPEN</v1:IncidentStatus>
            <v1:Date>2014-09-17T18:49:45</v1:Date>
            <v1:Subject>There is a cow in the road</v1:Subject>
            <v1:Description>I have seen a big cow in the road. What should I do?</v1:Description>
            <v1:GeoLatitude>37.53</v1:GeoLatitude>
            <v1:GeoLongitude>-122.25</v1:GeoLongitude>
            <v1:Street>500 Oracle parkway</v1:Street>
            <v1:StreetNumber>500</v1:StreetNumber>
            <v1:PostalCode>94065</v1:PostalCode>
            <v1:City>Redwood City</v1:City>
            <v1:LocationDetails>Right in the middle of the road</v1:LocationDetails>
            <v1:Attachment>
               <v1:File>aGVsbG8KCg==</v1:File>
               <v1:Name>hello.txt</v1:Name>
               <v1:Href/>
            </v1:Attachment>
         </v1:Incident>
      </gro:start>
   </soapenv:Body>
</soapenv:Envelope>

 

40) Click the Invoke button in the bottom right-hand corner.

invoke

41) Scroll down to the bottom to see the Test Results.

Congratulations! We can see that the Incident request we sent to Oracle BPM 12c has been transformed to a Webcenter Content GenericRequest using Groovy Scripting.

 

Tip: The Web Services Test Client is a lightweight method for testing deployed web services without using Enterprise Manager. For full instance debugging and instance details use Enterprise Manager or the Business Process Workspace.

 

If we track this instance in Enterprise Manager we can see what happened at runtime in graphical form.

 

graph

We can also look at the log from the Integrated WebLogic Server in Studio, which shows the debug message we included.

Tip: This process could easily be remodelled to be asynchronous or reusable, and the transformed GenericRequest could be used in the input association of a Service Activity to actually invoke the Webcenter Content SOAP service.

The actual implemented process that this example comes from, in the B2C scenario, looks like this. It is a reusable process that waits for the upload to Webcenter Content to complete before querying the final document details and returning to the main BPM process.

CreateContent

Summary

In this blog we introduced Groovy BPM Scripting in BPM 12c. Firstly we learned how to model a synchronous BPM process based on predefined XML schema types.

We learned how to do the following using BPM Scripting:

  • Where and how we should use BPM scripting in a BPM process.
  • How to import classes
  • Instantiate and declare groovy objects
  • Print debug messages to the weblogic log file
  • Use process data objects
  • Use predefined variables
  • Format data
  • Dynamically build data object data structures
  • Programmatically transform data between different XML schemas types
  • Deploy and test using the Web Services Test Client tool

 

In the next blog in this series I will demonstrate how to define and use BPM scripting in Business Objects and Exception handling in BPM scripting.

 

Tip: For more information on BPM Scripting (e.g. the list of predefined variables) see the section Writing BPM Scripts in the official BPM documentation.

BPM 10g-12c Migration: Handling Excel Files as Input


Introduction

With the introduction of BPM 12c comes the long-awaited migration tool to migrate BPM 10g projects to BPM 12c.

The A-Team have been heavily involved with the effort to create collateral around this tool – patterns, approaches, samples, tutorials, labs etc.

One of the common patterns in BPM 10g is using an Excel spreadsheet as input to a process which led me to investigate how this could be replicated in 12c. What follows is a step-by-step guide to achieving an example of this. Note that this blog will not deliver an enterprise production solution but will at least provide a working example which can be built upon as required.

Approach

Handling files in SOA Suite 11g & 12c is standard functionality with the file and ftp adapters… so we’ll use the file adapter for this example.

Handling CSV files is also straightforward, they can be specified as input in the file adapter wizard… so we can use a CSV file as input to the process.

Apache POI is a standard open-source approach to converting Excel to another file format…. so we can use this to convert the Excel file to CSV.

The file adapter and FTP adapter in 12c (and 11g) provide a feature known as “pipelines and valves” for pre-processing (and post-processing) of files prior to delivery to the composite…. so we can use this as the point of conversion for our file.

Given we now know the approach we can begin to build the example….

The Example Project

Examine the Input Spreadsheet / CSV File

We’ll be using a simple excel spreadsheet of orders….

E2C26

…which we’ll convert into CSV in a “valve”….

E2C27

…and then feed into the process via the file adapter.

Create the Application/Project

We’ll create a simple application/project as a basis for testing our “Excel to CSV” valve….

Create a new “BPM Application”….

E2C01

E2C02

E2C03

E2C04

….leave it as “asynchronous” for now and click “Finish”….

E2C05

Create the BPM Process

Add a human activity and change the “End” to “none”….

E2C06

Add a File Adapter to the Composite

In the composite view add a “File Adapter” (if you don’t see the “Components” window click on the “Window” tab and select it)….

E2C07

E2C08

E2C09

E2C10

E2C11

E2C12

…or whatever location is appropriate for you….

E2C13

E2C14

E2C15

E2C16

Here we specify the CSV file as it will be after conversion from Excel….

E2C17

E2C18

E2C19

E2C20

E2C21

E2C22

E2C23

Now we return to the file adapter wizard….

E2C24

…and we’re done….

E2C25

 Complete the Process

In the properties of the “Start” node choose “Use Interface”….

E2C28

….click on the magnifying glass for the “Reference” and choose the file adapter we’ve just created….

E2C29

…remove the unwanted WSDL….

E2C30

…notice the composite reflects the change….

E2C31

Create a data object to hold the orders….

E2C32

…click on “browse” in the dropdown and notice in the subsequent dropdown that the “Orders” type created as part of the file adapter is available….

E2C33

…choose it….

E2C34

Do the data association for the “Start” activity….

E2C35

Create the human task for the user activity….

E2C36

Open the human task and auto-generate the task form….

E2C38

E2C39

…accept defaults for creating the table in the form.

Deploy the UI….

E2C40

 Create the “Excel to CSV” Valve

For this we’ll need a Java project… I’ll simply add this to my existing application however it can be wherever makes sense….

E2C41

E2C42E2C43

Create a new Java class….

E2C44

E2C45

Add Required JARs

For this class we will need three JARs…

BPM-Infra.jar – which has the classes necessary to implement the valve – this can be found in <middleware home>/soa/soa/modules/oracle.soa.fabric_11.1.1

poi-ooxml-3.10.1-20140818.jar – the Apache POI JAR for the OOXML (.xlsx) format used by Excel 2007 and later – here you will need to download the JAR relevant to the kind of spreadsheet you are using.

poi-3.10.1-20140818.jar – the generic Apache POI JAR

…let’s add these JARs to the project….

E2C46

Complete the Java Class

We’ll need the following code to implement the Valve….

package excelvalves;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import oracle.tip.pc.services.pipeline.AbstractValve;
import oracle.tip.pc.services.pipeline.InputStreamContext;
import oracle.tip.pc.services.pipeline.PipelineException;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.ss.usermodel.*;

import java.util.Iterator;

public class ExcelToCsv extends AbstractValve {
        
        public InputStreamContext execute(InputStreamContext inputStreamContext) throws IOException,
                                                                                        PipelineException {
            System.out.println("The valve will begin executing the inputstream");
            // Get the input stream that is passed to the Valve
            InputStream originalInputStream = inputStreamContext.getInputStream();
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
                        
            // Read the workbook into an XSSFWorkbook (the OOXML / .xlsx format)
            XSSFWorkbook my_xls_workbook = new XSSFWorkbook(originalInputStream); 
            
            // Read the first worksheet into an XSSFSheet
            XSSFSheet my_worksheet = my_xls_workbook.getSheetAt(0);
            
            // To iterate over the rows
            Iterator<Row> rowIterator = my_worksheet.iterator();
            
            //store the csv string
            StringBuffer data = new StringBuffer();
            
            //Loop through rows.
            while(rowIterator.hasNext()) {
                Row row = rowIterator.next(); 
                Cell cell;
                // For each row, iterate through each columns
                Iterator<Cell> cellIterator = row.cellIterator();
                while (cellIterator.hasNext()) {
                    cell = cellIterator.next();
                    switch (cell.getCellType()) {
                        //not a complete list
                        case Cell.CELL_TYPE_BOOLEAN:
                            data.append(cell.getBooleanCellValue() + ",");
                            break;
                        case Cell.CELL_TYPE_NUMERIC:
                            data.append(cell.getNumericCellValue() + ",");
                            break;
                        case Cell.CELL_TYPE_STRING:
                            data.append(cell.getStringCellValue() + ",");
                            break;
                        case Cell.CELL_TYPE_BLANK:
                            data.append("" + ",");
                            break;
                        default:
                            data.append(cell + ",");
                    }
                }
                //end of line, remove trailing ","
                data.deleteCharAt(data.length()-1);
                data.append("\r\n");
            }
            System.out.println("snippet from stream: " + data.toString());
            ByteArrayInputStream bin = new ByteArrayInputStream(data.toString().getBytes());
            inputStreamContext.setInputStream(bin);
            System.out.println("done processing the stream in the valve");
            return inputStreamContext;
        }

    @Override
    public void finalize(InputStreamContext inputStreamContext) {
        // TODO Implement this method
    }

    @Override
    public void cleanup() throws PipelineException, IOException {
        // TODO Implement this method

    }
}

…we can see by looking at the code that we simply extend the AbstractValve class, step through the Excel spreadsheet using the POI classes, building up the StringBuffer as we go, and finally return the CSV content to the file adapter.

 Package the Valve

We can now package the Valve as a JAR file for use by the BPM project….

E2C47

E2C48

E2C49

…now build it….

E2C50

Use the ExcelToCSV Valve in the BPM Project

Make the JAR Accessible

The first thing we need to do is make the Valve JAR we have just built available to the BPM project… there are a number of ways to do this, some more appropriate than others in a production system, but for the sake of this blog we’ll just add it to the SCA-INF/lib directory of the BPM project….

E2C51

Create a Pipeline in the BPM Project

We will need an xml file to define the pipeline/valve in the BPM project at the level of the “SOA” directory…

E2C52

E2C53

…in it we just need to reference the fully qualified class we created for the valve earlier….

E2C54

E2C55
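For reference, a minimal pipeline definition might look like the sketch below; the file name and the valve’s package are assumptions here (use whatever you chose when building the JAR), and the element names should be checked against the File Adapter pipeline documentation.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a pipeline definition referencing the ExcelToCSV valve;
     the package name com.example.valves is an assumption. -->
<pipeline xmlns="http://www.oracle.com/adapterFramework/Pipeline">
  <valves>
    <valve>com.example.valves.ExcelToCSVValve</valve>
  </valves>
</pipeline>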

Modify the File Adapter to Use the Pipeline/Valve

Now we can change the properties of the file adapter to use the pipeline/valve….

Open the “.jca” file associated with the file adapter….

E2C56

…and add a property referencing the XML file we’ve just created….

E2C57
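As a sketch, the added property sits inside the adapter’s activation spec and points at the pipeline definition file created above (the file name ExcelToCSVPipeline.xml is an assumption):

<activation-spec ...>
  ...
  <!-- Reference the pipeline definition file created in the previous step;
       the file name is an assumption and should match your project. -->
  <property name="PipelineFile" value="ExcelToCSVPipeline.xml"/>
</activation-spec>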

Deploy the BPM Project

E2C58

Test the Deployed Project

First let us allocate a user to the process owner role so that the human task can be assigned….

E2C59

…now we can copy the excel file to the input directory….

E2C60

…and let’s see what has happened by first looking at the server log (we had some println in our Java code)….

E2C61

…looking good, what about the human task in the workspace….

E2C62

…excellent !

Summary

We have seen in this project how to reproduce a common BPM 10g pattern in BPM 12c…. using an Excel file as input to a process.

Now, this may not be production-ready: we’d probably need to determine the best way of packaging and referencing the JAR files, we’d probably need to complete the “valve” code to handle more complex cell types in Excel and to handle any potential errors, and we may need to handle the Excel rows one at a time, i.e. each row creates one process instance…. etc. What we do have here is a simple example of how to handle Excel files which we can build upon.

The packaged project can be found here….AppExcelInput

The Excel and CSV files can be found here….ExcelAndCSV

BPM 11g: Instance Patching Revisited: Inability to Create New Instances


Introduction

Back in 2012 after the release of BPM 11g PS4FP I wrote a blog entry on Instance Patching, what it was and how it worked.

Remember, instance patching is redeployment of a composite on the same Revision ID with the “keep running instances” option, as opposed to instance migration, which follows deployment of a new composite with a new Revision ID, with selected instances migrated from the old to the new revision.

I’ve decided to revisit the subject in a little more detail on the back of an issue a customer had with instance patching…. they’d redeployed a composite after making a very small “compatible” change to the BPM process and subsequently found that they could not instantiate new instances. This blog will detail why this situation happened and how to recover from it.

Walk-through of the Issue

Revisit the Process

From the previous blog entry we had a very simple process with three human activities and file write….

IP14_01

….we instantiated several instances and progressed them to various different human activities….

IP14_02

 

Make an Incompatible Change to the Process

As in the prior blog entry, if we make an incompatible change to the process model then instances should not be able to be automatically patched.

Add an embedded subprocess with a new human activity (exactly as we did in the prior blog)….

IP14_03

…deploy this with the same “Revision ID” and the “Keep running instances” option….

IP14_04

 

Now if we go into the BPM Workspace we should see the BPM component suspended (as in the prior blog)….

IP14_05

…strange, no pending components….

IP14_06

…and all instances successfully patched !

This is an improvement since the PS4FP release of the product: the engine can now handle more complicated changes to the process when instance patching.

Let’s try removing the embedded subprocess and redeploy….

IP14_07

IP14_08

…it failed to deploy that time. We’ll need to “force” it as per the prior blog entry….

IP14_09

…and now it deploys….

IP14_10

 Investigate the Suspended Instances

Looking in the BPM Workspace under the “Process Tracking” tab we can see the process is “Pending”….

IP14_11

… and the instances are all suspended….

IP14_12

Try Creating a New Process Instance

Now, it may be that investigation needs to be done on these existing instances before it can be decided what to do with them.

In the meantime, let us create a new instance (from the EM “test” or wherever) and look at the flow trace….

IP14_13

… this is strange ! The Service (WS endpoint) has been invoked but the process itself has not… what is happening here ?

Let us look at the state of the composite and the process in EM….

IP14_14

… the process is clearly “Up”.

IP14_15

…and so is the composite.

Very strange !

Examine the Composite

Maybe there is a problem with the composite itself. The best way to check this is to export the deployed composite from EM….

IP14_16

IP14_17

IP14_18

…save the exported composite and open it up….

IP14_19

…and let us look at the source, in particular the BPM component….

IP14_20

That looks like the culprit…. the BPM component has been suspended within the composite.

Why is this ? This is standard behavior: if all instances have not been patched then new instances cannot be created.
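To illustrate, the relevant fragment of the exported composite.xml looked roughly like the following (a sketch only; the exact element layout and attributes will vary with your project):

<!-- Illustrative sketch: the engine has flagged the component as suspended,
     which is why new instances cannot be created for it. -->
<component name="Process">
  ...
  <property name="bpel.config.suspend">true</property>
</component>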

Resume the Suspended Instances

Let us resume (in bulk) all the suspended instances from the BPM Workspace….

IP14_21

IP14_22

That has worked….

IP14_23

no “Pending Components”….

IP14_24

…and no suspended process instances.

Let us try creating a new instance (from EM for example)…

IP14_25

…it works !

(Wrong) Conclusion !

That’s all we need to do then, patch all the existing instances and we can create new instances.

Well…. not really, what if for business reasons we can’t patch all the suspended instances until we’ve investigated them individually but we still need to create new instances ?

Look Deeper

Let us create some more suspended instances by deploying with another embedded subprocess (same Revision ID and “keep instances running”)….

IP14_26

…which is compatible.

And deploying again (same Revision ID and “keep instances running”) after removing the embedded subprocess….

IP14_27

…which is an incompatible change….

IP14_28

Now, we can’t simply patch all the suspended instances this time because we need to investigate them individually first…. what can we do ?

Find the MBean

The property we found earlier in the extracted composite….

<bpel.config.suspend>

…is also available in an MBean….

Within the Enterprise Manager navigate to the “System MBean Browser”….

IP14_29

…and within this, navigate to….

<Application Defined MBeans><oracle.soa.config><Server: server-name><SCAComposite><composite-name><SCAComposite.SCAComponent><component-name>

…where server-name, composite-name and component-name are the values relevant to your server/composite/component.

IP14_33

…Expand attribute 14…

IP14_30

IP14_31

…there it is !

We can set it to “false” here and “apply” the change.

Creating a new process instance (in EM)…

IP14_32

…works !

NB: This attribute change will not survive a server restart !

 

(Another Wrong) Conclusion !

So that’s it… all we need to do is to change the attribute to “false” in the MBean and we’re good to go.

In fact what we have learned is that, if the change is compatible then all instances are automatically patched and we have nothing to worry about.

Not exactly.

Automatic Patching Limit

I know of several customers who have encountered this.

Essentially, only 100 instances are patched automatically for a compatible change.

You may think your change is compatible and you may expect all the instances to be patched, but you may be wrong.

You should always check, in the BPM Workspace or via the API, for the success or otherwise of the deployment and instance patching.

What possible justification can there be for this limit of 100 ?

On redeployment the BPM engine will attempt to patch all instances in one transaction. In the past, before the limit was implemented, some customers ran into problems with OOMs, rollbacks, etc. As a result, a (seemingly arbitrary) limit of 100 was implemented to avoid these issues.

You must be aware that if you have more than 100 running instances of the composite then not all of the instances will be automatically patched. You will need to patch the remaining instances either individually or in bulk from either the BPM Workspace or via the API.

Summary

Instance patching is a very powerful feature of BPM 11g (and 12c) but customers need to be aware of how it works and what limitations there are to it. In this blog post we have delved deeper into the workings of instance patching and provided mechanisms for manipulating it as a customer’s business requires.

The Parking Lot Pattern


The parking lot pattern is a strategy in Oracle SOA Suite to stage data in an intermediary store prior to complete processing by SOA Suite itself.  This pattern was spearheaded years ago by Deepak Arora and Christian Weeks of the Oracle SOA A-Team.  It has been implemented in various verticals to address processing challenges which include batch, complex message correlation/flows, throttling, etc.  To detail the pattern, this write-up discusses the components of a batch-related implementation.

The Parking Lot

The implementation of the “parking lot” can be done using various storage technologies like JMS, database, or Coherence (just to mention a few).  However, Oracle strongly recommends that a database table be used for simplicity.  The table structure typically contains state tracking and metadata relating to the payload that will be processed.  In our batch-processing example the table would contain: a row identifier column, a batch identifier column, a state column, maybe a type identifier column, maybe a priority indicator column, and finally the data/payload column.

 

Each column of the table is described below (name, content, and any special properties):

  • LOT_ID: The identifier for the parking lot row, usually populated from a sequence. Primary key for the table.
  • BATCH_ID: An identifier for the batch; it is shared across all rows within the batch.
  • STATE: The state of the row, commonly a single character representing the states the row transitions through. This field is usually used by the database adapter’s polling functionality as a “logical delete” indication. Example values: N (new), R (reserved), P (processing), C (complete).
  • SUBTYPE (optional): An optional subtype indicator, some sort of meta property about the input row. Note: don’t overload this to process both new orders and bulk inventory updates; there should be separate parking lots for truly separate types.
  • PRIORITY (optional): An optional priority indication to allow the database adapter to pull these rows first for processing.
  • DATA (alternative 1): A CLOB containing a string of the data in XML form. See discussion.
  • DATA (alternative 2): A reference to data populated elsewhere in the system; for example, the order could be stored in a separate “pending orders” table and this column would hold the identifier for that other row. See discussion.
  • DATA (alternative 3): Inline the data as columns directly within the parking lot table (effectively combining the table from alternative 2 with the parking lot table). See discussion.
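As a concrete but purely illustrative sketch of alternative 1 (payload stored as an XML CLOB), the table could be created along these lines; data types and sizes are assumptions to be adapted to your environment:

-- Illustrative DDL only; sizes and types are assumptions, not a prescribed schema.
CREATE TABLE PARKING_LOT (
  LOT_ID    NUMBER        PRIMARY KEY,             -- row identifier, typically sequence-populated
  BATCH_ID  VARCHAR2(64)  NOT NULL,                -- shared by all rows in the batch
  STATE     VARCHAR2(1)   DEFAULT 'N' NOT NULL,    -- N / R / P / C
  SUBTYPE   VARCHAR2(32),                          -- optional meta property
  PRIORITY  NUMBER(2),                             -- optional priority indicator
  DATA      CLOB                                   -- alternative 1: the payload as an XML string
);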

 

Some things to note:

  • There should be one parking lot per general type, do not overload a single parking lot with multiple types (for example orders and inventory updates).
  • The parking lot table is anticipated to be busy. Ensure you clean up stale data through regular purging.

Data Representation Within the Parking Lot

There are at least three possible alternatives for storing the actual data within the parking lot.  Each option has different properties that need to be considered:

1. Store the data as a CLOB in XML form. This is the simplest approach, especially for complex data types. It adds some additional overhead writing and reading the CLOB as well as transforming between the XML and the CLOB. Note that these costs would be associated with XMLTYPE as well, and since there is no need for visibility into this data while it is in the database, XMLTYPE doesn’t provide any benefit.
2. Store the data separately in other tables with fully realized columns. This solution is most appropriate if the application is already doing it. That is, if the de-batching process is already copying the input payload to a tabular format in the database table, then this data format could be leveraged for the parking lot.
3. Combine the table that might otherwise exist in #2 with the parking lot itself. While this solution might prove to be the most performant, it can only work for simple data structures in the parking lot.

Database Adapter Usage

The parking lot process would be implemented as a SOA composite with a database adapter and a BPEL process.  The database adapter would read and dispatch individual rows to the BPEL process, creating an instance per order.

The database adapter supports various polling strategies.  Oracle recommends using the “logical delete” strategy, whereby a particular value of the STATE column would be asserted as part of the polling operation: SELECT <column list> FROM PARKING_LOT WHERE STATE='N'. The query is additionally enhanced with a pessimistic locking function that allows for parallel execution from many separate nodes simultaneously, allowing this to work seamlessly in a cluster. Finally, a “reserved value” should be specified for full distributed polling support (the reserved value is updated during the poll so that the row is no longer a candidate on other nodes, until the transaction can complete).
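Conceptually, the logical delete strategy boils down to statements like the following; this is only a sketch, since the database adapter generates and manages the actual SQL, including the locking and reserved-value handling:

-- Poll for new rows (the adapter layers locking and reserved-value handling on top of this)
SELECT LOT_ID, BATCH_ID, STATE, DATA
  FROM PARKING_LOT
 WHERE STATE = 'N';

-- Logical delete: mark the row once it has been handed to the BPEL process
UPDATE PARKING_LOT SET STATE = 'P' WHERE LOT_ID = ?;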

There is an alternative database polling approach known as “SKIP LOCKING” (see http://docs.oracle.com/cd/E21764_01/integration.1111/e10231/adptr_db.htm#BGBIJHAC and DB Adapter – Distributed Polling (SKIP LOCKED) Demystified ).  While the skip locking approach has several advantages, it does not allow the intermediate states to be committed to the database.  The result is that it does not give the same stateful visibility to other processes that may be interested in the current state within the parking lot; for example, an OSB status monitoring service that provides the user with a means to check the status of the batch they submitted.

The database adapter supports various tuning properties that give very fine-grain control over its behavior, such as the number of poller threads, the number of rows to read per cycle, the number of rows to pass to the target BPEL process, and so on.  For more information about the database adapter, please refer to http://docs.oracle.com/cd/E21764_01/integration.1111/e10231/adptr_db.htm.  The Oracle Fusion Middleware Performance and Tuning Guide also covers database adapter tuning at http://docs.oracle.com/cd/E21764_01/core.1111/e10108/adapters.htm#BABDJIGB.

B2B Event Queue Management for Emergency


Executive Overview

Many customers face a crisis in a production system when, for some reason, they end up with several B2B messages stacked up in the system that may not be of a high priority to be processed at that point in time. In other words, it would greatly help many customers if, in such critical situations, they had an option to flush the backed-up messages from the system for later resolution and simply continue with processing of the current messages.
A-Team has been involved with different customers worldwide helping them implement such a solution for emergency use. Without getting into too many technical details, a high-level approach for such a solution is discussed here. The methodology accomplishes two key tasks that are of primary importance during an emergency within a B2B production gateway:

  • Allows the event queue to be flushed while the gateway is down, so that the gateway can be brought up quickly
  • Allows the messages extracted from the event queue to be introspected for resubmission or rejection

The primary objective of this approach is to allow the B2B engine to come back up quickly after flushing the messages from the event queue. The recovery or resubmission of messages is usually reviewed manually by the operations and business teams off-line and takes a longer cycle to complete. But this should not affect the down-time of the production system after the fast removal of the messages from the event queue. The downtime, thus encountered, is only driven by the first task, as listed above.

Solution Approach

Overview

The solution consists of immediate cleanup of messages from the system. The entries will be stored in files. After the files are created, the gateway will be ready for normal processing without any impact of messages that were previously present in the system.
After the gateway is opened for normal business, the analysis of the file contents can be carried out, in parallel, to decide which messages will be resubmitted or discarded. This analysis can be done via scripts to extract relevant pieces of business data for the messages removed. The scripts are decoupled for various types of transient message data and built on basic query utilities. The basic building blocks for data introspection are typically custom scripts that are created based on specific business needs for analysis.
The analysis will create 2 lists of message IDs – one for resubmission and the other for rejection. Existing command-line utilities can be invoked to resubmit the messages in a scripted loop with configurable delays in between the resubmissions. For rejection, there is typically no processing required. However, the list of IDs will be used to update the database to reflect a final state for the appropriate messages.

Tasks and Activities

I. Preparation of Environment

If the gateway is down, it is important to bring it up in a maintenance mode, so that the cleanup of transient messages in the system can be completed. Otherwise, if the gateway is running, it has to be restarted to enable maintenance mode. This can be achieved with the following sequence:

  • If the SOA/B2B environment is not up and running, start the Admin Server. Otherwise, this step can be skipped.
  • Pause the consumption of messages coming in to the B2B engine via external and internal listening channels.
  • Change the startup mode of SOA managed server to ADMIN mode.
  • Change the startup mode of SOAJMSServer to pause at server startup.
  • For a running environment, stop SOA managed servers and restart Admin Server. Otherwise, this step can be skipped.
  • Start SOA Managed Servers.

II. Cleanup of Transient Messages

There are four areas that require attention when there is a gateway outage and the whole B2B cluster is down. The four areas are:

  • B2B Event Queue
  • SOA Quartz Table
  • B2B Sequencing Manager Table
  • B2B Pending Batch Table

These four areas require attention since they contain information about in-flight messages that have not been processed to their final states. Depending on the specific environment, the cleanup is at most a four-step process, where only the first step is mandatory.

  • The B2B Event queue contents will be exported to a file for later analysis and the queue contents will be purged thereafter.
  • The SOA Quartz tables key contents will be exported to a file for later analysis and purged (optional – only applicable to message retries).
  • The B2B Sequence Manager table key contents will be exported to a file for later analysis and purged (optional – only applicable to scheduled TP downtime).
  • The B2B Pending Batch table key contents can be exported to a file for later analysis and purged (optional – only applicable to batching use cases)

After the above-mentioned 4 steps are completed, the B2B gateway can be started in normal processing mode. One of the key metrics for the solution is how quickly these 4 steps can be completed, so that the gateway can be brought up for ongoing business. Only step 1 above requires the preparation described in Section I (Preparation of Environment).
Steps 2, 3, and 4 can be performed with only the database up (i.e. with the Admin and Managed Servers both down).

III. Message Data Analysis

After the gateway is up and running, the analysis of all the entries backed up can be carried out for further resubmission or rejection. The main objective of the analysis phase is to gather sufficient business data for each message ID to help operational analysis. The analysis for the backed up messages will be addressed based on the source.

A. B2B Event Queue – Mandatory
  • Shell script based utilities can be used to read message IDs from the JMS export file
  • Entries existing in b2b_instancemessage view: Message IDs can be joined with the view to get desired information about messages for business analysis (for the most part, new incoming or outgoing messages referenced by the B2B Event Queue would not be available in the b2b_instancemessage view)
  • Entries not existing in the b2b_instancemessage view: All such message IDs can be scanned to save the payload into a file, that can be processed by a customized shell script to extract any field for further analysis.
  • Other system level entries (optional): Can be put back in the event queue via JMS Import utility in Weblogic console.
B. SOA Quartz Table – Optional
  • Message IDs from Quartz table can be joined with b2b_instancemessage view for data analysis via custom script utilities.
C. B2B Sequence Manager – Optional
  • Message IDs from the Sequence Manager table can be joined with the b2b_instancemessage view, as shown in Section B above.
D. B2B Pending Batch messages – Optional
  • Message IDs from the b2b_pending_message table can be joined with the b2b_instancemessage view, as shown in Section B above.

IV. Message Resubmission/Rejection

At the end of the analysis phase, the list of Message IDs for resubmission and rejection will be available. The resubmission list can then be read by custom shell scripts to process individual messages via existing command-line utility, driven by parameters to control pause interval and looping criterion.
In general, no further action should be required for rejected messages. In certain exceptional situations, a database script can be run to change the state of such messages to a final state.

Summary

The above approach has been successfully implemented and used in production systems by customers for many years and is a well-proven technique. The entire package has been delivered as a consulting solution and the customer is responsible for all the scripts and artifacts developed. However, as newer versions of B2B are released, there could be other alternate options available as well. For further details, please contact the B2B Product Management team or SOA/B2B group within A-Team.

Acknowledgements

B2B Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contribution.


Using OSB 12.1.3 Resequencer


The resequencer feature has been added to Oracle Service Bus 12c (12.1.3); it utilises the same resequencing engine as Oracle Mediator.  The objective of this feature is to provide you with the ability to resequence incoming messages that arrive in random order and send them to the target services in an orderly manner.  In this blog, I will give you a bit more information about this new feature in OSB and how to debug it if you encounter an issue.

As mentioned in the official documentation, the resequencer does not support the Any XML and Any SOAP service types; you need to define a WSDL in order to use the resequencer feature in OSB, and this WSDL must be one-way only and must not contain any response elements.

The OSB resequencer strategies work in the same manner as in Oracle Mediator; Standard, FIFO and Best Effort are supported.  The difference between the resequencer implementations in Oracle Mediator and OSB is the way in which each dispatches the message.  In OSB, the pipeline acts as the resequencer component; the resequencer cannot be configured on any other OSB component.  After resequencing, the ordered messages will be processed further in the pipeline.  As soon as a message is pushed to the resequencer, the caller will get a successful response. Though the resequencer is part of the pipeline configuration, it will be invoked just before the pipeline is invoked.

Just like the Oracle Mediator,  OSB Resequencer also relies on the database for processing messages.  The database tables are automatically created when you run the repository creation utility (RCU) while creating the OSB domain.  The JNDI name used by the OSB resequencer is jdbc/SOADataSource.  The tables used by the resequencer are shown below:

OSB Resequencer tables

You can use the Enterprise Manager to configure the throughput for resequenced messages.  Following are the properties specific to OSB resequencer:

  • Resequencer Maximum Groups Locked : Maximum number of groups locked by Resequencer in each attempt it makes to obtain locks on the groups. Locks are obtained on the groups so that only one managed server node processes the group at a time.
  • Resequencer Locker Thread Sleep : The number of seconds the Resequencer would pause between each iteration to obtain locks on the groups.
  • Purge Completed Messages : Delete message after successful execution. The default value will be set as true.

OSB Resequencer-Global Setting

Through the EM console, you will be able to see all the successful (if instance tracking is on) and the faulted instances.  You can also recover the faulted groups, either by resubmitting the failed message or by skipping to the next message by aborting the faulted message. In addition, you are also able to recover the timed-out groups by skipping to the next available message; however, if the missing message comes in after the timeout then it will not be executed by the resequencer, and in this case you need to manually execute this message through EM.   However, due to bug 19826309 in the current release (12.1.3), you will not be able to recover or abort resequencer messages in the Enterprise Manager -> SOA -> Service-Bus -> Resequence Messages tab; you will notice the payload is missing when you recover/abort a resequence message (see screenshot below).  This issue only occurs if you are using the production distribution to set up your environment; you will not encounter this issue in the integrated server in JDEV or a compact domain using the quick start installation.

OSB Resequencer-messages tab

To reprocess/recover a faulted resequence message, you need to update the STATUS column to “0” in both the OSB_GROUP_STATUS and OSB_RESEQUENCER_MESSAGE tables (an illustrative update is sketched after the table). Below are some of the valid combinations of values for the group and message STATUS columns:

  • Group 0, Message 0: Ready – the messages which are ready for processing and eligible to be locked and processed by the resequencer.
  • Group 1, Message 0: Locked – the messages within groups which are currently being processed.
  • Group 0, Message 2: Completed – message processed successfully.
  • Group 3, Message 3: Error – faulted message and faulted group.
  • Group 4, Message 4: Timeout – timeout for message and group.
  • Group 0, Message 5: Aborted – messages manually aborted by the administrator.
  • Group 6 (no message status): Group Error – faulted group.
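As a purely illustrative sketch of such a manual recovery (the group key column name is an assumption; verify the actual column names in your SOA schema before running anything):

-- Sketch only: reset a faulted group and its faulted message back to 0 (Ready) so the
-- resequencer locker picks them up again. GROUP_ID is an assumed column name; check the
-- OSB_GROUP_STATUS and OSB_RESEQUENCER_MESSAGE table definitions first.
UPDATE OSB_GROUP_STATUS        SET STATUS = 0 WHERE GROUP_ID = '<your-group-id>';
UPDATE OSB_RESEQUENCER_MESSAGE SET STATUS = 0 WHERE GROUP_ID = '<your-group-id>' AND STATUS = 3;
COMMIT;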

 

Some design considerations:

  • OSB Resequencing is supported for one way (xml type) messages only.
  • Supported Service types:
    • WSDL Web Service – OSB Resequencing will only be allowed for operations with only request type.
    • Message Type – Request message type to be xml and Response message type as None
  • OSB Resequencing for large message is not supported as the payload needs to be persisted in the DB before it is sequenced.

Debugging

In order to isolate a problem, you need to enable tracing for the pipeline by changing the OSB ODL configuration: oracle.osb.debug.pipeline to “TRACE: 32”, and use the Log action with debug severity to see the resequenced messages.  But you have to disable the “Purge Completed Message” configuration first.  This will enable you to retain the completed messages in the OSB_RESEQUENCER_MESSAGE table, thus making it easier for you to investigate the error.

OSB Resequencer-pipeline debugOSB Resequencer-purge completed message

Submitting an ESS Job Request from BPEL in SOA 12c


Introduction

SOA Suite 12c added a new component: Oracle Enterprise Scheduler Service (ESS). ESS provides the ability to run different job types distributed across the nodes in an Oracle WebLogic Server cluster. Oracle Enterprise Scheduler runs these jobs securely, with high availability and scalability, with load balancing and provides monitoring and management through Fusion Middleware Control. ESS was available as part of the Fusion Applications product offering. Now it is available in SOA Suite 12c. In this blog, I will demonstrate how to use a new Oracle extension, “Schedule Job”, in JDeveloper 12c to submit an ESS job request from a BPEL process.

 

Set up a scheduled job in Enterprise Scheduler Service

1. Create a SOA composite with a simple synchronous BPEL process, HelloWorld.
2. Deploy HelloWorld to Weblogic.
3. Logon to Fusion Middleware Enterprise Manager.
4. Go to Scheduling Services -> ESSAPP -> Job Metadata -> Job Definitions. This takes you to the Job Definitions page.

2

 

5. Click the “Create” button, this takes you to Create Job Definition page. Enter:

Name: HelloWorldJob

Display Name: Hello World Job

Description: Hello World Job

Job Type: SyncWebserviceJobType

Then click “Select Web Service…”. It pops up a window for the web service.

39

6. On the “Select Web Service” page, select Web Service Type, Port Type, Operation, and Payload. Click “Ok” to finish creating job definition.

8

Secure the Oracle Enterprise Scheduler Web Service

The ESS job cannot be run as an anonymous user, so you need to attach a WSM security policy to the ESS Web Service:

1. In Fusion Middleware Enterprise Manager, go to Scheduling Services -> ESSAPP, right click, select “Web Services”.

3

2. In Web Service Details, click on the link “ScheduleServiceImplPort”.

4

3. Open tab “WSM Policies” and click on “Attach/Detach”.

5

4. In “Available Policies”, select “oracle/wss_username_token_service_policy”, click “Attach” button to attach the policy and then click on “Ok” to finish the policy attachment.

6

5. You should see the policy attached and enabled.

7

Create a SOA Composite to Submit a HelloWorldJob

1. Create a new SOA Application/Project with an asynchronous BPEL (2.0) process, InvokeEssJobDemo, in JDeveloper 12c.

2. Create a SOA_MDS connection.

14

3. Enter SOA MDS database connection and test connection successfully.

15

4. Add a Schedule Job from Oracle Extensions to InvokeEssJobDemo BPEL process.

16

5. Double click the newly added Schedule Job activity. This brings up the Edit Schedule Job window.

6. Enter Name “ScheduleJobHelloWorld”, then click “Select Job” button.

17

7. This brings up the Enterprise Scheduler Browser. Select the MDS Connection and navigate down the ESS Metadata to find and select “HelloWorldJob”.

18

8. To keep it simple, we did not create a job schedule. So there is no job schedule to choose. If you have job schedules defined and would like to use them, you can choose a Schedule from the MDS connections.

9. Set Start Time as current date time, and click OK.

19

10. You may see this pop up message.

20

11. Click “Yes” to continue. In the next several steps we will fix this by replacing the WSDL URL with a concrete binding on the reference binding.

12. In EM, go to Scheduling Services -> Web Services.

21

13. Click on link “SchedulerServiceImplPort”

22

14. Click on link “WSDL Document SchedulerServiceImplPort”.

23

15. It launches a new browser window displaying the ESSWebService WSDL. The WSDL URL is shown in the browser address bar.

24

16. Update EssService WSDL URL.

25

17. You need to attach WSM security policy to EssService request.

26

18. Add Security Policy: oracle/wss_username_token_client_policy.

27

19. Setting up the credential store for policy framework is beyond the scope of this blog. We will use a short cut, the default weblogic user and password, as Binding Properties on the EssService reference binding to which the security policy is attached.

40
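For reference, the reference binding in composite.xml ends up looking roughly like the sketch below; the port and location attributes are omitted, and the property names shown are the standard SOA username/password override properties (the values are the defaults used throughout this blog).

<reference name="EssService" ...>
  <binding.ws port="..." location="...">
    <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                         orawsp:category="security" orawsp:status="enabled"/>
    <!-- Shortcut for this blog only: default WebLogic credentials as binding properties -->
    <property name="oracle.webservices.auth.username" type="xs:string"
              many="false" override="may">weblogic</property>
    <property name="oracle.webservices.auth.password" type="xs:string"
              many="false" override="may">welcome1</property>
  </binding.ws>
</reference>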

 

20. Build and deploy InvokeEssJobDemo.

21. Test InvokeEssJobDemo web service.

29

22. It should show that the web service invocation was successful.

34

23. Launch flow trace. We can see that Job 601 was successfully submitted.

32

24. Go ESSAPP -> Job Requests -> Search Job Requests. Find Job 601. Job was executed successfully.

35

 

Summary

In this blog, we demonstrated how to set up a SOA web service ESS job and how to invoke the ESS web service to submit a job request from a BPEL process in SOA Suite 12c.

 

MFT – Setting up SFTP Transfers using Key-based Authentication


Executive Overview

MFT supports file transfers via SFTP. Often MFT customers receive a public key from their partners and want to use it to receive files via SFTP. This blog describes the setup required to enable such an MFT flow that receives files from partners using key-based authentication.

MFT includes an embedded SFTP server. We will configure it with the supplied public key to receive files from remote partners. Upon receipt of a file, a simple MFT transfer will initiate and place the file in a pre-defined directory within the local filesystem.

Solution Approach

Overview

The overall solution consists of the following steps:

  • Generate public-private key pair on the remote machine and copy the public key to MFT server
  • Generate public-private key pair on the machine running MFT server
  • Import the private key from MFT machine in MFT keystore
  • Import the public key from partner machine in MFT keystore
  • Configure SFTP server with private key alias
  • Configure MFT users and corresponding SFTP directories to be used by remote partners
  • Enter SSH Keystore password
  • Restart MFT Server
  • Create Embedded SFTP Source
  • Create File Target
  • Create an MFT transfer using the above source and target
  • Deploy and Test

Task and Activity Details

The following sections will walk through the details of individual steps. The environment consists of the following machines:

  • VirtualBox image running MFT 12c on OEL6 (oel6vb)
  • Remote Linux machine used for initiating the transfer via SFTP client (slc08vby)

SFTPIn

I. Generate public-private key pair on the remote machine and copy the public key to MFT server

To generate a private-public key pair, we use the command-line tool ssh-keygen. The tool creates 2 files, one for the private key and one for the public key. For our purposes in this exercise, we will only be using the public key, by copying it to the MFT machine from here. As a best practice, all the key files are saved in the $HOME/.ssh/authorized_keys directory. A transcript of a typical session is shown below.

[slahiri@slc08vby authorized_keys]$ pwd
/home/slahiri/.ssh/authorized_keys
[slahiri@slc08vby authorized_keys]$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/slahiri/.ssh/id_rsa): sftpslc
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in sftpslc.
Your public key has been saved in sftpslc.pub.
The key fingerprint is:
56:db:55:48:4c:db:c4:e1:8b:70:40:a8:bf:12:07:94 slahiri@slc08vby
The key’s randomart image is:
+--[ RSA 2048]----+
|        . oo +o++|
|       E .  . +=.|
|      . . .. .o..|
|       o . oo.. .|
|        S . .. . |
|       o o       |
|        o .      |
|       . .       |
|        .        |
+-----------------+
[slahiri@slc08vby authorized_keys] ls
sftpslc  sftpslc.pub
[slahiri@slc08vby authorized_keys] scp sftpslc.pub oracle@10.159.179.84:/home/oracle/.ssh/authorized_keys
oracle@10.159.179.84's password:
sftpslc.pub                                   100%  398     0.4KB/s   00:00
[slahiri@slc08vby authorized_keys]

II. Generate public-private key pair on the machine running MFT server

As shown in the previous step, ssh-keygen is used on the MFT machine to generate a key pair. From the pair generated here, we will only be using the private key for our exercise. The session transcript is shown below.

[oracle@oel6vb authorized_keys]$ pwd
/home/oracle/.ssh/authorized_keys
[oracle@oel6vb authorized_keys]$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): sftpmft
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in sftpmft.
Your public key has been saved in sftpmft.pub.
The key fingerprint is:
36:a8:ac:a7:0c:bd:34:c9:bd:cd:1b:fe:05:a8:1d:47 oracle@oel6vb
The key’s randomart image is:
+--[ RSA 2048]----+
| |
| |
| E |
| + |
| + S |
| o + + + o |
|. * = o . |
| + +.= . . |
| =o. =o. |
+-----------------+
[oracle@oel6vb authorized_keys]$ ls
sftpmft sftpmft.pub
[oracle@oel6vb authorized_keys]$

III. Import the private key from MFT machine in MFT keystore

The private key from Step II is imported into the MFT keystore using the WLST utility. It must be noted that a different version of WLST is shipped and installed with MFT; it is found in the mft/common/bin directory under the product install, and that is the version that must be used. The WLST session should be connected to the MFT server port using an administrative credential. A typical session transcript is shown below.

[oracle@oel6vb authorized_keys]$ cd /u01/oracle/SOAInstall/mft/common/bin
[oracle@oel6vb bin]$ ./wlst.sh
CLASSPATH=:/u01/oracle/SOAInstall/mft/modules/oracle.mft_12.1.3.0/core-12.1.1.0.jar

Initializing WebLogic Scripting Tool (WLST) …

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> connect("weblogic","welcome1","t3://localhost:7003")
Connecting to t3://localhost:7003 with userid weblogic ...
Successfully connected to managed Server "mft_server1" that belongs to domain "base_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

wls:/base_domain/serverConfig> importCSFKey('SSH', 'PRIVATE', 'MFTAlias', '/home/oracle/.ssh/authorized_keys/sftpmft')
CSF key imported successfully.
wls:/base_domain/serverConfig> listCSFKeyAliases('SSH', 'PRIVATE')
Key Details
--------------------------------------------------------------------------
'MFTAlias', Format PKCS#8, RSA

IV. Import the public key from partner machine in MFT keystore

The same WLST session can be used to import the public key copied over from the remote machine in Step I. It must be noted that the public key alias used here should be the same as the userID that is to be used by the remote SFTP client to connect to the embedded SFTP server. Transcript of a sample session is shown below.

wls:/base_domain/serverConfig> importCSFKey('SSH', 'PUBLIC', 'MFT_AD', '/home/oracle/.ssh/authorized_keys/sftpslc.pub')
CSF key imported successfully.
wls:/base_domain/serverConfig> listCSFKeyAliases('SSH', 'PUBLIC')
Key Details
--------------------------------------------------------------------------
'MFT_AD', Format X.509, RSA

wls:/base_domain/serverConfig> exit()

Exiting WebLogic Scripting Tool.

[oracle@oel6vb bin]$

V. Configure SFTP server with private key alias

After logging in to MFT UI, go to Administration Tab. Under Embedded Servers, go to sFTP tab and complete the following:

  1. enable SFTP
  2. set Public Key as authenticationType
  3. set KeyAlias to the private key alias set during import in Step III.
  4. save settings

Example screenshot is shown below.

BSrvr

VI. Configure MFT users and corresponding SFTP directories to be used by remote partners

From the MFT UI, under the Administration tab, configure the user and the SFTP root directory that will be used by the remote SFTP client session. Note that the userID will be the same as the public key alias used while importing the public key in Step IV.

Sample screenshots for user and directory are shown below.

BUser

VII. Enter SSH-Keystore Password

From the MFT UI, go to Administration tab and select KeyStore node in the left navigator tree.

Enter the password for SSH-Keystore as the same passphrase used during key pair generation on local machine in Step II.

Example screenshot is given below.

BKstr

VIII. Restart MFT Server

MFT Server should be restarted for most of the changes made in the earlier steps to take effect. This wraps up the administrative setup necessary for the exercise. The following sections are part of a simple MFT design process to create a source, target and transfer.

IX. Create Embedded SFTP Source

From MFT UI, go to the Designer tab. Create a SFTP Source pointing to the directory created in Step VI. Sample screenshot is shown below.

BSrc

X. Create File Target

For the sake of simplicity, a local file directory is chosen as the target. From the MFT UI, navigate to the Designer tab and create a target as shown below.

BTrgt

XI. Create a transfer using the above source and target

From the Designer tab within MFT UI, create a transfer using the source and target created in Steps IX and X. Sample screenshot is shown below.

BTrfr

XII. Deploy and Test

After deploying the transfer, we are ready to test the entire flow.

We initiate the test by starting a simple, command-line SFTP client on the remote machine (slc08vby) and connecting to the embedded SFTP server running within MFT. The userID is the one specified in Steps IV and VI (MFT_AD). The passphrase is the same as that used when generating the key pair on the remote machine during Step I.

After the sftp session is established, we put a file into the SFTP root directory of the user on MFT server machine, as specified in Step VI. The transcript from a sample SFTP client session is shown below.

[slahiri@slc08vby ~]$ cat ~/.ssh/config.sftp
Host 10.159.179.84
Port 7522
PasswordAuthentication no
User MFT_AD
IdentityFile /home/slahiri/sftpslc
[slahiri@slc08vby ~]$

[slahiri@slc08vby ~]$ sftp -F ~/.ssh/config.sftp 10.159.179.84
Connecting to 10.159.179.84…
Enter passphrase for key '/home/slahiri/sftpslc':
sftp> pwd
Remote working directory: /MFT_AD
sftp> put sftptest.txt
Uploading sftptest.txt to /MFT_AD/sftptest.txt
sftptest.txt                                  100%   24     0.0KB/s   00:00
sftp> quit
[slahiri@slc08vby ~]$

After the SFTP operation is completed, the MFT transfer takes over. MFT picks up the file from the embedded SFTP source and places it in the directory within the local file system, defined as target. Example screenshot from Monitoring Tab of MFT UI is shown below.

BFlow

Finally, we verify that our test file is saved in the local directory specified as the target in Step X.

[oracle@oel6vb in]$ pwd
/home/oracle/in
[oracle@oel6vb in]$ ls
sftptest.txt
[oracle@oel6vb in]$

Summary

The test case described here is one way to establish secure transfers with MFT. There are other use cases as well and will be discussed in other articles of this blog series on MFT. For further details, please contact the MFT Product Management team or SOA/MFT group within A-Team.

Acknowledgements

MFT Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contribution.

Throttling in SOA Suite via Parking Lot Pattern


The Parking Lot Pattern has been leveraged in many Oracle SOA Suite deployments to handle complex batching, message correlation, and complex processing flows. One scenario that is a frequent topic of discussion is throttling SOA Suite so as not to overwhelm slower downstream systems. Most often this is accomplished via the tuning knobs within SOA Suite and WebLogic Server. However, there are times when the built-in tuning cannot be tweaked enough to stop flooding slower systems. SOA design patterns can be leveraged when product features do not address these edge use cases. This blog will focus on using The Parking Lot Pattern as one implementation for throttling. Also note a working example is provided.

Throttling Parking Lot

The key piece of this pattern is the database table that will be used for the parking lot. The table is very simple and consists of 3 columns:

  • ID (NUMBER): The unique ID/key for the row in the table.
  • STATE (VARCHAR): Used for state management and as the logical delete column for the database adapter. It holds one of three values: N (new, not processed), P (processing, in-flight interaction with the slower system), or C (complete, the slower system responded to the interaction). The database adapter polls for ‘N’ew rows and marks a row as ‘P’rocessing when it hands it over to a BPEL process.
  • PAYLOAD (CLOB): The message that would normally be associated with a component, stored here as an XML clob.
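A minimal sketch of the corresponding DDL follows; the example projects ship their own SQL scripts, so treat the types and sizes here as assumptions:

-- Illustrative only; use the SQL scripts bundled with the example projects for the real setup.
CREATE TABLE THROTTLE_PARKINGLOT (
  ID      NUMBER       PRIMARY KEY,
  STATE   VARCHAR2(1)  DEFAULT 'N' NOT NULL,
  PAYLOAD CLOB
);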

The Use Case Flow

Without the parking lot, the normal flow for this use case would be:

1. Some client applications call SOA Suite via Web Service, JMS, etc.
2. An asynchronous BPEL instance is created and invokes the slower system for every client request within the tuning parameters of the SOA engine
3. The slower system cannot handle the volume and gets flooded

How the flow is changed with the parking lot:

1. Some client applications call SOA Suite via Web Service, JMS, etc.
2. Each client request is inserted into the parking lot table as an XML clob with STATE = ‘N’.
3. A composite containing a polling database adapter will select 1 row with STATE = ‘N’, provided the count of rows with STATE = ‘P’ is less than a throttle value (e.g., 5).
4. If the in-flight interactions with the slower system are less than the throttle value, the database adapter gets the next available row and marks it as being processed (STATE = ‘P’).
5. This row is handed off to an asynchronous BPEL process that will invoke a different BPEL process responsible for interacting with the slower system.
6. When the slower system responds and this response propagates back to the initiating BPEL process, the row is marked as complete (STATE = ‘C’).
7. Go to step 3 until all records have been processed.

The throttle control value represents the maximum number of in-flight BPEL processes that are interacting with the slower system. We will see later how this value can be changed at runtime through the SOA Suite Enterprise Manager console.

Configuring the Polling Database Adapter

The database adapter is the gate for flow control via the polling SQL statement. An “expert polling” configuration is required in order to set up the appropriate SQL statement. This configuration is a combination of getting artifacts created in JDeveloper using the DBAdapter Configuration Wizard and then manually tweaking the generated artifacts. The important steps in the wizard consist of:

1. Operation Type: Poll for New or Changed Records in a Table
2. After Read: Delete the Row(s) that were Read
3. Make sure Distributed Polling is checked
4. Add Parameter: MaxRowsProcessing

When the wizard finishes and the artifacts are created, there will be a file with the following naming convention: [Service Name]-or-mappings.xml (please note that you may have to edit this file outside of JDeveloper with 12c). It is in this file we will make changes that are considered “expert polling” configuration steps. The steps are not complicated:

1. Locate the <query …> element. If there are any child <criteria …></criteria> elements, remove them and all their children elements.
2. Between the <query …> element and <arguments> element, add <call xsi:type=”sql-call”></call>
3. Within the <call …> element add a <sql></sql>
4. Within the <sql> element add the polling query. The blog example looks like:
SELECT
    ID,
    "STATE",
    PAYLOAD
FROM
    THROTTLE_PARKINGLOT 
WHERE
    (((SELECT COUNT("STATE") FROM THROTTLE_PARKINGLOT WHERE "STATE" = 'P') &lt; #MaxRowsProcessing) AND
    ("STATE" = 'N'))
ORDER BY ID ASC FOR UPDATE SKIP LOCKED
5. Locate the closing queries element (</queries>)
6. Between the </queries> element and </querying> element insert <delete-query></delete-query>
7. Within the <delete-query> element, add a <call xsi:type=”sql-call”></call>
8. Within the <call …> element add a <sql></sql>
9. Within the <sql> element add the logical delete query. The blog example looks like:
<delete-query>
    <call xsi:type="sql-call">
      <sql>
      UPDATE THROTTLE_PARKINGLOT SET "STATE" = 'P' WHERE (ID = #ID)
      </sql>
    </call>
</delete-query>

Other Components

Now that the polling adapter is configured, we need an asynchronous BPEL process to handle the state management of the message. In the blog example, it is a very straightforward process:

1. Convert CLOB into payload for the slow system
2. Invoke the slow system
3. Receive the response from the slow system
4. Update row in the database with a complete state

ThrottleParkingLot12c_001

The state update is done through another DBAdapter configuration where the Operation Type is Update Only and the column is the STATE column. The state management BPEL process simply updates the STATE to ‘C’ using the row ID it already has as the key.
The blog example has one more BPEL process called SlowSystemSimulatorBPELProcess. This is an asynchronous BPEL process that will randomly generate a wait time in seconds between 20 and 240. It then uses a Wait activity to simulate a very slow and sporadic downstream system.

The Example

I have provided two SOA Suite 12c projects for the example:

1. ThrottleParkingLotTableLoader (sca_ThrottleParkingLotTableLoader_rev1.0.jar)
2. ThrottleParkingLotBlogExample (sca_ThrottleParkingLotBlogExample_rev1.0.jar)

Each project contains the necessary SQL scripts to get things setup in the database. Once the user and the table are set up, you will have to configure your database adapter for accessing the THROTTLE_PARKINGLOT table via the ATeam_Example user. To make it easier on you, use eis/DB/ATeamExample as the JNDI Name for the DBAdapter. Otherwise this will need to be changed in the .jca files before deploying the projects to your SOA server.

Once the projects are deployed, you can run a stress test on the ThrottleParkingLotTableLoader / AddPayloadToParkingLotMediator_ep to fill the parking lot with records. Once the parking lot has records, they should start being processed by the ThrottleParkingLotBlogExample composite. The initial setting for the MaxRowsProcessing property is 5, so the number of in-flight instances will be limited to 5:

ThrottleParkingLot12c_002

Within the SOA Suite Enterprise Manager, we can change the value of MaxRowsProcessing:

ThrottleParkingLot12c_003

Now we see that the number of in-flight instances has changed:

ThrottleParkingLot12c_004

This will allow runtime tweaking of the load on the downstream system. The value for MaxRowsProcessing can also be set to 0 (zero) to stop messages flowing to the downstream system. If you noticed, the polling query also leverages SKIP LOCKED, which should allow this to work in a clustered environment. However, I have not tested this, so feel free to try it out and provide feedback on your findings.

I do hope you find this a valuable option for finer grained throttling within SOA Suite.

Passing User Context When Invoking ADF BC SOAP Web Services


Introduction

ADF web applications often use session-scoped user context information that is also used in the ADF Business Components (ADF BC) layer. For example, the user language might be used to query language-specific data from the database. This article explains how this user context can be set up when accessing the ADF BC layer through its SOAP-based web service layer rather than through a web application.

Main Article

To access session information in the ADF BC layer, ADF developers typically use the ADFContext object. For example, when the preferred user language is stored on the HTTP Session under the key “userLanguage”, this information can be accessed in the ADF BC layer using the following statement:

String language = (String)ADFContext.getCurrent().getSessionScope().get("userLanguage");

This value can then be used in the prepareSession method of an application module method to set the user language in a database context package. This saves us the tedious work of passing in the user language as a bind variable to every database query.

When accessing an application module as a SOAP web service using the SDO service interface, we need to set up the same data in the ADFContext to ensure the database queries can be executed correctly. This can be done by creating a SOAP Message Handler. A SOAP message handler provides a mechanism for intercepting the SOAP message and header in both the request and response of the Web service. To add a SOAP message handler to our ADF BC service interface, we add the HandlerChain annotation at the top of our application module SDO service class:

@Interceptors(
  { ServiceContextInterceptor.class})
@Stateless(name="oracle.ateam.hr.soapdemo.model.common.HRServiceServiceBean"
           , mappedName="HRServiceServiceBean")
@Remote(HRServiceService.class)
@PortableWebService(targetNamespace="/oracle/ateam/hr/soapdemo/model/common/"
                    , serviceName="HRServiceService",
  portName="HRServiceServiceSoapHttpPort"
, endpointInterface="oracle.ateam.hr.soapdemo.model.common.serviceinterface.HRServiceService")
@HandlerChain(file = "handlers.xml")
public class HRServiceServiceImpl extends ServiceImpl implements HRServiceService
{

The HandlerChain annotation specifies an XML configuration file that you can use to specify one or more handlers for your web services. The value of the file attribute is a URL, which may be relative or absolute. Relative URLs are relative to the location of the Java Web Service class that contains the annotation. We used a relative path, so we need to store the handlers.xml file in the same location as the Java class. Here is the content of the handlers.xml file:

<?xml version="1.0" encoding="windows-1252" ?>
<handler-chains xmlns="http://java.sun.com/xml/ns/javaee">
  <handler-chain>
    <handler>
      <handler-name>UserContextHandler</handler-name>
      <handler-class>oracle.ateam.hr.soapdemo.model.UserContextHandler</handler-class>
    </handler>
  </handler-chain>
</handler-chains>

The UserContextHandler class needs to implement the SOAPHandler<SOAPMessageContext> interface and looks like this:

package oracle.ateam.hr.soapdemo.model;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.xml.namespace.QName;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

import oracle.adf.share.ADFContext;

public class UserContextHandler implements SOAPHandler<SOAPMessageContext> {

    public boolean handleMessage(SOAPMessageContext mc) {
        Map headers = (Map) mc.get(mc.HTTP_REQUEST_HEADERS);
        Object lang = headers.get("Language");
        if (lang != null && lang instanceof List ) {
            String language = (String) ((List) lang).get(0);
            ADFContext.getCurrent().getSessionScope().put("userLanguage", language);
        }
        return true;
    }

    public Set<QName> getHeaders() {
        return Collections.emptySet();
    }

    public void close(MessageContext mc) {
    }

    public boolean handleFault(SOAPMessageContext mc) {
        return true;
    }
}

You might be a bit puzzled by the cast to java.util.List; this is needed because the value of each header parameter is contained in a List object. As you can see, we can directly access and use the ADFContext object here, which makes it all very easy. To test whether everything works, we first create a prepareSession method in the application module class like this:

@Override
public void prepareSession(SessionData sessionData) {
    String language = (String) ADFContext.getCurrent().getSessionScope().get("userLanguage");
    System.err.println("USER LANGUAGE: "+language);
    // TODO add code to set language as DB context
    super.prepareSession(sessionData);        
}
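To complete the TODO, the language can be pushed into a database context from here. Below is a sketch only, assuming a hypothetical PL/SQL package ctx_pkg with a set_language procedure (it is not part of the sample project); substitute whatever context API your schema provides.

// Sketch only: ctx_pkg.set_language is a hypothetical database context procedure.
if (language != null) {
    java.sql.PreparedStatement stmt = null;
    try {
        stmt = getDBTransaction().createPreparedStatement(
                   "BEGIN ctx_pkg.set_language(:1); END;", 0);
        stmt.setString(1, language);
        stmt.execute();
    } catch (java.sql.SQLException e) {
        throw new oracle.jbo.JboException(e);
    } finally {
        if (stmt != null) {
            try { stmt.close(); } catch (java.sql.SQLException ignore) { /* ignore */ }
        }
    }
}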

Now we can use a tool like SOAP UI to call the web service:

SoapUi

Note the header parameter Language that we configured. After we invoked the web service using SOAP UI, the JDeveloper log proves that the language value is available in the prepareSession method:

jdevlog

That’s all, a simple yet powerful technique to reuse existing view objects that rely on ADFContext session information with SOAP-based web services.

You can download the sample project here.

 

 

 

 

 
