Overview of Concurrent Processing in E-Business Suite R12
What is PCP?
PCP (Parallel Concurrent Processing) is the method by which the Concurrent Manager is configured in a multi-tier environment with two or more concurrent processing nodes. It distributes the concurrent processing load across the nodes and provides high availability in case of node failure: managers migrate to a surviving node (failover) when one of the concurrent nodes goes down, and migrate back (failback) when the failed node comes back.
Each node running concurrent managers may or may not also be running an Oracle database instance. The concurrent manager(s) connect to the database via SQL*Net using the TNS alias specified by TWO_TASK in adcmctl.sh and gsmstart.sh on each concurrent node.
When a primary node fails, the ICM restarts its managers on their secondary nodes. If the ICM's own node fails, an Internal Monitor on a surviving node can spawn a new ICM on that node.
Services/Managers move back to their primary nodes when those nodes come back up.
Role of ICM in PCP:
The Internal Concurrent Manager (ICM) monitors, activates, and deactivates all managers. The ICM migrates managers during node and/or instance failures and must be active for failover/failback to work. The ICM uses the Service Manager (FNDSM) to spawn and terminate all concurrent manager processes, and to manage GSM services such as the Workflow Mailer, Output Post Processor, etc. The ICM contacts the APPS TNS Listener on each local and remote concurrent processing node to start the Service Manager on that node; it will not attempt to start a Service Manager if it is unable to TNS ping that node's APPS TNS Listener. One Service Manager is defined for each application node registered in FND_NODES. Each service/manager may have a primary and a secondary node. Initially, a concurrent manager is started on its primary node. In case of node failure, all concurrent managers on that node migrate to their respective secondary nodes.
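The listener check the ICM performs can be approximated from the shell. The sketch below is illustrative only: the alias naming convention (FNDSM_&lt;host&gt;_&lt;SID&gt;, as generated by AutoConfig) is assumed, and the node names and SID are placeholders for your environment.

```shell
#!/bin/sh
# Approximate the ICM's pre-start check: the ICM only asks a node's APPS TNS
# listener to spawn FNDSM if the listener answers a TNS ping.
# SID and node names below are placeholders for your environment.
SID=PROD
for node in cmnode1 cmnode2; do
    alias="FNDSM_${node}_${SID}"        # AutoConfig-style alias (assumed)
    echo "Checking $alias"
    if ! tnsping "$alias" >/dev/null 2>&1; then
        echo "WARNING: $alias not reachable; ICM will not start FNDSM on $node"
    fi
done
```

If a node shows the warning here, the ICM will show the same symptom: managers defined on that node stay down until its APPS TNS Listener answers.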
Service Manager in PCP:
The Service Manager (FNDSM process) manages services/managers on each concurrent node. It is required in all concurrent processing environments and is therefore an integral part of PCP; PCP cannot be implemented without the Service Manager.
The Service Manager is spawned from the APPS TNS Listener.
The APPS TNS Listener must be started on every application node in the system, by the same user that starts the ICM (e.g. applmgr).
The TNS Listener spawns the Service Manager to run as an agent of the ICM for the local node.
The Service Manager is started by the ICM on demand. If no management actions are needed on a node, the Service Manager is not started there until necessary. When the ICM exits, its Service Managers exit as well. The Service Manager environment is set by gsmstart.sh and APPSORA.env, as defined in listener.ora.
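For reference, the Service Manager is defined to the APPS TNS Listener through a SID_DESC entry in listener.ora, generated by AutoConfig. The fragment below is a hedged sketch: all paths, the hostname, and the SID are placeholders, and the exact entry in your environment will differ.

```
(SID_DESC =
  (SID_NAME = FNDSM_<host>_<SID>)
  (ORACLE_HOME = /u01/oracle/PROD/apps/tech_st/10.1.2)
  (PROGRAM = /u01/oracle/PROD/apps/apps_st/appl/fnd/12.0.0/bin/FNDSM)
  (envs = 'MYAPPSORA=/u01/oracle/PROD/apps/apps_st/appl/APPSORA.env,FNDSM_SCRIPT=/u01/oracle/PROD/inst/apps/<context>/admin/scripts/gsmstart.sh')
)
```

The envs clause is what ties the listener-spawned process back to gsmstart.sh and APPSORA.env mentioned above.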
Internal Monitor in PCP:
The Internal Monitor (FNDIMON process) checks whether the ICM is running and restarts a failed ICM on its local node.
Internal Monitors are seeded on every registered node by AutoConfig, but are deactivated by default. Activate the Internal Monitor on each concurrent node where the ICM should be able to start in case of a failure. If the ICM goes down, the Internal Monitor attempts to start a new ICM on the local node.
If multiple ICMs are started, only the first will stay active. The others will gracefully exit.
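Where the ICM is currently running can be checked from the database. A hedged sketch (FNDICM is the seeded queue name for the Internal Concurrent Manager; NODE_NAME/NODE_NAME2 hold the primary/secondary assignments):

```sql
-- Show the ICM's defined primary/secondary nodes and its current target node
SELECT concurrent_queue_name, node_name, node_name2, target_node
  FROM fnd_concurrent_queues
 WHERE concurrent_queue_name = 'FNDICM';
```

After a failover, TARGET_NODE should show the surviving node rather than the ICM's primary node.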
Configuring Parallel Concurrent Processing (PCP) in R12
Step 1 : Backup all the .ora files.
Backup the .ora files present in the 10.1.2 and 10.1.3 ORACLE_HOMEs.
Step 2 : Edit the Application Context file.
Shut down the application tier services. Edit the context file using a text editor or Oracle Applications Manager and make the following changes:
==> Set APPLDCP (s_appldcp) to ON
==> Set s_applcsf to a common mount point shared across the application nodes
==> Set s_appltmp to a common mount point shared across the application nodes
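A quick way to confirm the edits is to grep the context file for the three variables. The sketch below writes a tiny sample fragment so it is self-contained; in a real system, $CONTEXT_FILE points at the applications context XML and the cat step is unnecessary (the values shown are placeholders).

```shell
#!/bin/sh
# Self-contained illustration: create a minimal sample context fragment,
# then verify the three PCP-related variables are set as expected.
CONTEXT_FILE=/tmp/sample_context.xml
cat > "$CONTEXT_FILE" <<'EOF'
<APPLDCP oa_var="s_appldcp">ON</APPLDCP>
<APPLCSF oa_var="s_applcsf">/shared/applcsf</APPLCSF>
<APPLTMP oa_var="s_appltmp">/shared/appltmp</APPLTMP>
EOF
# All three variables should be found, with APPLDCP = ON
grep -E 'oa_var="s_(appldcp|applcsf|appltmp)"' "$CONTEXT_FILE"
```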
Step 3: Add/ Modify the entries in spfile.
Login as sqlplus "/ as sysdba" and run:
alter system set "_lm_global_posts"=TRUE scope=spfile;
alter system set "_immediate_commit_propagation"=TRUE scope=spfile;
_lm_global_posts, when set to TRUE, delivers global posts to remote nodes.
_immediate_commit_propagation, when set to TRUE, propagates the commit SCN to the other instances immediately.
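The changes can be verified immediately, since v$spparameter shows the contents of the spfile rather than the running instance; a hedged sketch:

```sql
-- Confirm the hidden parameters were recorded in the spfile
SELECT name, value
  FROM v$spparameter
 WHERE name IN ('_lm_global_posts', '_immediate_commit_propagation');
```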
Step 4: Set the UTL_FILE_DIR parameter.
The directory list is environment-specific; it must include directories shared across the nodes. For example:
alter system set UTL_FILE_DIR='<shared directory list>' scope=spfile;
Step 5: Bounce the Database instances.
srvctl stop database -d <db_name>
srvctl start database -d <db_name>
Step 6: Execute AutoConfig on all application tier nodes.
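After AutoConfig completes, every concurrent processing node should be registered in FND_NODES (the table the ICM reads, as noted earlier). A quick hedged check:

```sql
-- All concurrent processing nodes should appear with SUPPORT_CP = 'Y'
SELECT node_name, support_cp, status
  FROM fnd_nodes;
```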
Step 7: Validate tnsnames.ora, listener.ora, sqlnet.ora files.
Validate the .ora files in the 10.1.2 ORACLE_HOME and check for the existence of FNDFS and FNDSM entries for all the concurrent manager nodes.
Step 8: Perform the following steps using AD Administration (adadmin).
Relink AD executables (use AD Relink for this step)
Relink Applications programs
Generate message files
Generate form files
Generate report files
Generate product JAR files
Step 9: Start the Application Tier services.
Log in to Oracle Applications as SYSADMIN and choose the System Administrator responsibility.
Navigate to Concurrent > Manager > Define, and set up the primary and secondary node names for all the concurrent managers according to the desired configuration for each node workload.
Verify that the Internal Monitor and Service Manager for each node are defined properly, with the correct primary node specification and work shift details.
For example, the manager "Internal Monitor: Host1" must have host1 as its primary node.
Assign a standard work shift with one process to both managers.
Also ensure that the Internal Monitor manager is activated:
this can be done from Concurrent > Manager > Administrator.
Set Concurrent: TM Transport Type profile
Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction managers work across the Oracle RAC instances.
If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administrator
Managers with no primary node assignment will be assigned a default target node. In general this will be the node where the ICM is currently running.
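The resulting node assignments can be reviewed in one pass. A hedged sketch against FND_CONCURRENT_QUEUES (NODE_NAME/NODE_NAME2 hold the primary/secondary assignments, TARGET_NODE the node each manager currently targets):

```sql
-- Review primary, secondary, and current target node for enabled managers
SELECT concurrent_queue_name,
       node_name  AS primary_node,
       node_name2 AS secondary_node,
       target_node
  FROM fnd_concurrent_queues
 WHERE enabled_flag = 'Y';
```

Rows with a NULL primary_node are the managers that will follow the ICM's node by default.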
Test Case #1
1. Bring down the Apps Listener on CM node 2 using kill -9.
2. Kill all FNDLIBR processes on CM node 2 using kill -9.
3. Start the Apps listener on CM node 2 after 5 minutes.
4. Monitor CM2; all the managers on CM2 should come up automatically.
Test Case #2
1. Bring down the apps listener on CM node 2 using kill -9.
2. Start the managers on CM1; you should see the managers of CM2 start on CM node 1.
3. Bring up the Apps listener on CM2.
4. The managers of CM2 should fail back to CM2 from CM1.
Test Case #3
1. Bring down the host of CM node 2 (shut it down completely).
2. All the managers of CM2 should fail over to CM1.
3. Bring up the host of CM node 2.
4. Bring up the apps listener on CM node 2.
5. All the managers of CM2 should fail back to CM2 from CM1.
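During each test case, failover and failback can be observed from the database by watching TARGET_NODE and the running process counts; a hedged sketch:

```sql
-- Re-run during the tests: TARGET_NODE moves to the surviving node on
-- failover and returns to the primary node on failback
SELECT concurrent_queue_name, target_node,
       max_processes, running_processes
  FROM fnd_concurrent_queues
 WHERE enabled_flag = 'Y'
 ORDER BY target_node, concurrent_queue_name;
```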
Note 241370.1 - Concurrent Manager Setup and Configuration Requirements in an 11i RAC Environment
Note 388495.1 - How to Set Up Parallel Concurrent Processing (PCP) in Apps 11i?
Note 602899.1 - Some More Facts On How to Activate Parallel Concurrent Processing
Note 271090.1 - Parallel Concurrent Processing Failover/Failback Expectations
Note 752604.1 - Failover Does Not Occur To The Secondary Node While The Primary Node Is Up
Note 729883.1 - How to Create a Second OPP Concurrent Manager in a Node Different Than The Primary Node