Tuesday, December 1, 2009

Processing huge payloads with Fusion Middleware

One of the main design requirements is to baseline the expected volume of data flowing through the middleware layer. Recently a client asked me to review the design of some interfaces that were failing load testing and could not pass peak-volume testing. They were seeing server crashes and JVM out-of-memory errors. On analysis, the peak volumes were too large for these interfaces to handle: the payload file size touched a peak of 25 MB. As designed, the process received the XML payload as a string; converting it to a variable and transforming it clogged the heap, eventually exhausting the JVM heap space.

Multiple options came to the table. I am listing a few of them:

1. Split the payload into smaller chunks before feeding it to the BPEL process, and process the chunks one by one.

+ve --> The BPEL process is left untouched; the flow stays the same because the payload size becomes manageable.

-ve --> Additional components are needed to perform the splitting of the payload into smaller chunks, and every extra component adds another point of exception handling.

-ve --> The overhead of maintaining the atomicity of the transaction is too high and would complicate the entire system.

-ve --> If the process uses control tables, it involves multiple calls to the control tables that track the end-to-end processing of the document.

2. Go for a pure Java alternative that parses the XML file, converts it into Java POJOs, and directly inserts into the database or writes to a file.

+ve --> Stable, since XML parsers and plain Java are a tried and tested option.

-ve --> Moves away from the common architecture of middleware components (BPEL, ESB) to a more conservative approach.

3. Parse the XML document in smaller chunks in the XSL transformation, directly addressing the clogging of heap space caused by transforming the whole document at once. The option was to process smaller chunks of the document within a while loop in the BPEL process.

+ve --> The process has better end-to-end control, since the while loop is internal and can use all of BPEL's transaction-management capabilities to maintain the atomicity of the transaction.

-ve --> If the process uses control tables, it still involves multiple calls to the control tables that track the end-to-end processing of the document.

The third option is the best workaround for the huge-payload transformation and design issues. The huge payload was processed successfully, and it passed the negative and load testing with flying colours.
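The chunk-at-a-time idea behind option 3 can also be illustrated outside BPEL. Here is a minimal Java sketch (assuming a hypothetical payload of repeated <order> elements) that streams a large XML document with StAX and handles one record per iteration, so the full document is never materialised in memory:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.Reader;
import java.io.StringReader;

public class ChunkedXmlProcessor {

    // Streams the document and processes one <order> element at a time,
    // instead of loading the whole payload as a string/DOM variable.
    public static int process(Reader payload) throws XMLStreamException {
        XMLStreamReader reader =
                XMLInputFactory.newInstance().createXMLStreamReader(payload);
        int processed = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "order".equals(reader.getLocalName())) {
                String id = reader.getAttributeValue(null, "id");
                // ... transform / insert this single chunk here ...
                processed++;
            }
        }
        reader.close();
        return processed;
    }

    public static void main(String[] args) throws XMLStreamException {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/><order id=\"3\"/></orders>";
        System.out.println("Chunks processed: " + process(new StringReader(xml)));
    }
}
```

Whether the loop lives in a BPEL while activity or in plain Java, the point is the same: heap usage stays proportional to one chunk, not to the 25 MB payload.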

Saturday, November 7, 2009

SOA 11g installation

The installation is pretty straightforward. At first look SOA Suite 11g is heavy, but solid; it is also resource-hungry and requires more than 3 GB of memory.

The installation requires the following 11g software:

1. RCU(Repository Creation Utility)

2. WebLogic Server: wls1031_win32.exe

3. SOA Suite: ofm_soa_generic_11.1.1.1.0_disk1_1of1.zip

4. JDeveloper Studio (base install) and the SOA Extension for JDeveloper

From a database perspective I used the XE database and it works fine: Oracle XE Universal database, version 10.2.0.1.

If you are using XE, you must update the database parameters if you have never done this for your installation. You only have to do this once after installing. Make sure the processes parameter is >= 200.
sqlplus sys/oracle@XE as sysdba
SQL> show parameter session
SQL> show parameter processes
SQL> alter system reset sessions scope=spfile sid='*';
SQL> alter system set processes=200 scope=spfile;
SQL> shutdown immediate
SQL> startup

For my first installation I didn't use RCU, so I had to create the schema objects manually. The scripts can be found at $ORACLE_HOME\rcu\integration\soainfra\sql.

Another important step is setting the JVM memory arguments.

This value depends on your machine's resources and may need to be adjusted. Allocating more memory for startup helps resolve out-of-memory errors while starting the server.

set DEFAULT_MEM_ARGS=-Xms1024m -Xmx1024m
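To confirm the memory arguments actually took effect, a quick sanity check from any JVM (this is just an illustrative snippet, not part of the SOA Suite scripts) can print the configured maximum heap:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use (roughly the -Xmx value).
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap (MB): " + maxMb);
    }
}
```

If the printed value does not reflect your -Xmx setting, the startup script is not picking up DEFAULT_MEM_ARGS.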

Friday, August 14, 2009

Out of memory error in BPEL/ESB

An out-of-memory error is thrown from the java.util.zip.ZipFile.open method.

Stacktrace

oracle.tip.esb.server.common.exceptions.BusinessEventRetriableException: An unhandled exception has been thrown in the ESB system. The exception reported is: "java.lang.OutOfMemoryError

at java.util.zip.ZipFile.open(Native Method)

at java.util.zip.ZipFile.<init>(ZipFile.java:203)

at java.util.jar.JarFile.<init>(JarFile.java:132)

at java.util.jar.JarFile.<init>(JarFile.java:97)

at oracle.classloader.SharedJar.doOpen(SharedJar.java:208)

at oracle.classloader.SharedCodeSource.open(SharedCodeSource.java:1136)

at oracle.classloader.SharedCodeSource.ensureOpen(SharedCodeSource.java:948)

at oracle.classloader.SharedCodeSource.getResourceBytes(SharedCodeSource.java:967)

This is an inherent bug in JDK 1.4; it is still open in JDK 1.5 and is fixed only in JDK 1.6. So the only option is to get rid of the old JDK and replace it with JDK 1.6. Even though 10.1.3.3 is not certified with Java 1.6, it is still the better bet, since this bug is fixed only there.

Friday, July 31, 2009

New features in 10.1.3.4 adapters

Recently I got a query on whether an adapter can be used to copy or rename files. The adapters in 10.1.3.3 don't support these features, whereas 10.1.3.4 does.

In the adapter JAR there is a new class, oracle.tip.adapter.file.outbound.FileIoInteractionSpec, which has a TYPE parameter that can be set to MOVE, COPY, or DELETE. Based on this parameter the adapter performs the respective action.

Refer to http://download.oracle.com/docs/cd/E12524_01/relnotes.1013/e12523/adapters.htm#BCFJAGIF for additional features in 10.1.3.4.
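For reference, what the adapter's MOVE and COPY interactions accomplish can be sketched in plain Java NIO. This is only an illustration of the file operations involved, not the adapter's internal code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FileOps {

    // Equivalent of a COPY interaction: duplicate the source file.
    public static void copy(Path source, Path target) throws IOException {
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    // Equivalent of a MOVE interaction; a rename is a MOVE to a new
    // name in the same directory.
    public static void move(Path source, Path target) throws IOException {
        Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("payload", ".xml");
        Files.write(src, "<doc/>".getBytes());
        Path dst = src.resolveSibling("renamed.xml");
        move(src, dst); // after this, only the renamed file exists
        System.out.println(Files.exists(dst) && !Files.exists(src));
        Files.deleteIfExists(dst); // clean up (DELETE interaction)
    }
}
```

With 10.1.3.4 the adapter does this declaratively via the TYPE parameter, so no custom Java is needed in the composite.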

Tuesday, July 14, 2009

Too many open files error while running a process

In one of my last projects we used to get an error saying “Too many open files”.

Stack trace

SEVERE: Could not check for resources in: /OracleAS_1/lib/dsv2.jar (from in /OracleAS_1/j2ee/OC4J_SOA/config/server.xml). Caught java.util.zip.ZipException: Too many open files
SEVERE: Could not check for resources in: /OracleAS_1/j2ee/OC4J_SOA/connectors/MQSeriesAdapter/MQSeriesAdapter/MQSeriesAdapter.jar (from in /OracleAS_1/j2ee/OC4J_SOA/connectors/MQSeriesAdapter/MQSeriesAdapter). Caught java.util.zip.ZipException: Too many open files

Reason

The number of open files configured at the UNIX level is low compared to the number of files SOA Suite needs to run.

Type ulimit -a on the server to see the open-files limit.

By default it is 1024:

open files (-n) 1024

Solution

Set the nofiles (descriptors) limit to unlimited (or at least 65536):

$ ulimit -n unlimited

or

$ ulimit -n 65536

Hope this helps. Refer to the SOA Suite installation prerequisites guide, where these parameters are specified.