November 29, 2018 / Shivananda Rao P

Configuring TDE in 12.1 RAC database with dataguard enabled

This article demonstrates implementing TDE in a RAC environment with a standby database configured. The method is the same as the one used in my previous post, but it uses the 12c “administer key management” command.

 

Environment:
2 node primary RAC: 12cnode1, 12cnode2
2 node standby RAC: 1212drnode1, 1212drnode2
Primary database name: srprim
Standby database name: srpstb

 

Before implementing TDE, let's ensure that the standby database is in sync with the primary database.

 

DGMGRL> show configuration

Configuration - dgconfig

  Protection Mode: MaxPerformance
  Members:
  srprim - Primary database
    srpstb - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 6 seconds ago)

 

Create the directory structure where the wallet files are to be placed and add the following entry to the SQLNET.ORA file on all the nodes of the primary database. This entry determines the wallet location.
In my case, I have decided to place the wallet files under “/u01/app/oracle/product/12.1.0.2/db_1/network/admin/$ORACLE_SID/wallet”, so I have created that directory on each node.

 

[oracle@12cnode1 ~]$ mkdir -p /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet
[oracle@12cnode2 ~]$ mkdir -p /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet

 

ENCRYPTION_WALLET_LOCATION = 
 (SOURCE = 
   (METHOD = FILE)
   (METHOD_DATA =
     (DIRECTORY = /u01/app/oracle/product/12.1.0.2/db_1/network/admin/$ORACLE_SID/wallet)
    )
  )

 

When the GV$ENCRYPTION_WALLET view is queried, the instances still show the default wallet location. So, in order to have the instances reflect the wallet location set in the SQLNET.ORA file, we need to bounce them.

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                STATUS          WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ------------------------------------------------------------ --------------- -------------------- --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/admin/srprim/wallet    NOT_AVAILABLE   UNKNOWN              SINGLE    UNDEFINED          0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/admin/srprim/wallet    NOT_AVAILABLE   UNKNOWN              SINGLE    UNDEFINED          0
		 

 

[oracle@12cnode1 ~]$ srvctl stop instance -instance srprim1 -database srprim
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$ srvctl start instance -instance srprim1 -database srprim

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                        STATUS          WALLET_TYP WALLET_OR FULLY_BAC     CON_ID
---------- ---------- -------------------------------------------------------------------- --------------- ---------- --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/  NOT_AVAILABLE   UNKNOWN    SINGLE    UNDEFINED          0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/admin/srprim/wallet            NOT_AVAILABLE   UNKNOWN    SINGLE    UNDEFINED          0
		 

 

Similarly, restart the second instance as well.

 

[oracle@12cnode1 ~]$ srvctl stop instance -instance srprim2 -database srprim
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$ srvctl start instance -instance srprim2 -database srprim

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS          WALLET_TYPE  WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- --------------- ------------ --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    NOT_AVAILABLE   UNKNOWN      SINGLE    UNDEFINED          0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    NOT_AVAILABLE   UNKNOWN      SINGLE    UNDEFINED          0
		 

 

Now that WRL_PARAMETER reflects the right directory where the wallet is to be stored, let's move on to creating the keystore. Use the “administer key management” command to create the keystore in the wallet directory location. This needs to be done on only one instance (in my case, I'm running it on the first instance).

 

SYS@srprim1>administer key management create keystore '/u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/' identified by "oracle123";

keystore altered.

 

Now when you query the gv$encryption_wallet view, you should see the status as CLOSED for the first instance, whereas the second instance still reflects it as “NOT_AVAILABLE”.

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS          WALLET_TYPE  WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- --------------- ------------ --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    CLOSED          UNKNOWN      SINGLE    UNDEFINED          0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    NOT_AVAILABLE   UNKNOWN      SINGLE    UNDEFINED          0

 

Now, open the keystore on the instance where it was created.

 

SYS@srprim1>administer key management set keystore open identified by "oracle123"; 

keystore altered.

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS          WALLET_TYPE  WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- --------------- ------------ --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    OPEN_NO_MASTER_ PASSWORD     SINGLE    UNDEFINED          0
                                                                                             KEY

         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    NOT_AVAILABLE   UNKNOWN      SINGLE    UNDEFINED          0

 

As we have opened the keystore, the status should reflect OPEN, but in 12c you also need a MASTER KEY, which is why the STATUS column shows “OPEN_NO_MASTER_KEY” when the gv$encryption_wallet view is queried.

 

So, create a MASTER KEY using the “administer key management” command as shown below. The MASTER KEY is stored in the same wallet location that was specified in the sqlnet.ora file.

 

		 
SYS@srprim1>administer key management set key identified by "oracle123" with backup;

keystore altered.

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS          WALLET_TYPE  WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- --------------- ------------ --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    OPEN            PASSWORD     SINGLE    NO                 0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    NOT_AVAILABLE   UNKNOWN      SINGLE    UNDEFINED          0

 

Once the MASTER KEY is created, the status for the corresponding instance shows OPEN when “gv$encryption_wallet” is queried.
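
If you want to double-check that a master key is now present in the keystore, it can also be listed from the dictionary. A quick sketch, assuming the v$encryption_keys view that 12c provides:

SYS@srprim1>select key_id, creation_time, keystore_type from v$encryption_keys;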

 

As seen below, the wallet file “ewallet.p12” now also contains the master key. It needs to be copied to the remaining nodes of the cluster.

 

		 
[oracle@12cnode1 ~]$ ls -lrt /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/
total 8
-rw-r--r--. 1 oracle oinstall 2400 Aug 29 11:46 ewallet_2018082906160156.p12
-rw-r--r--. 1 oracle oinstall 3848 Aug 29 11:46 ewallet.p12
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$ 
[oracle@12cnode1 ~]$

 

Copying the wallet file to node 12cnode2.

 

 
[oracle@12cnode1 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/
[oracle@12cnode1 wallet]$ scp ewallet.p12 oracle@12cnode2:/u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/
ewallet.p12                                                                                                           100% 3848     3.8KB/s   00:00    
[oracle@12cnode1 wallet]$ 

 

Now query gv$encryption_wallet to check the status of each instance. We can see that the status now shows “OPEN” for the second instance on 12cnode2 as well.

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS     WALLET_TYPE     WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- ---------- --------------- --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    OPEN       PASSWORD        SINGLE    NO                 0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    OPEN       PASSWORD        SINGLE    NO                 0

 

Please note that at this point, we have set up only manual wallet management, which means that whenever the database/instance is started, we need to open the wallet manually.

 

		 
[oracle@12cnode1 wallet]$ srvctl stop database -database srprim
[oracle@12cnode1 wallet]$ 
[oracle@12cnode1 wallet]$ 
[oracle@12cnode1 wallet]$ srvctl start database -database srprim

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE        WRL_PARAMETER                                                          STATUS     WALLET_TYP WALLET_OR FULLY_BAC     CON_ID
---------- --------------- ---------------------------------------------------------------------- ---------- ---------- --------- --------- ----------
         1 FILE            /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    CLOSED     UNKNOWN    SINGLE    UNDEFINED          0
         2 FILE            /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    CLOSED     UNKNOWN    SINGLE    UNDEFINED          0

SYS@srprim1>
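
To work with encrypted data again after such a restart, the keystore would have to be opened manually. A minimal sketch, reusing the keystore password and assuming the statement is run on each instance separately (each node holds its own copy of the wallet):

SYS@srprim1>administer key management set keystore open identified by "oracle123";

SYS@srprim2>administer key management set keystore open identified by "oracle123";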

 

Now, let me set up auto login, which automatically opens the wallet upon instance/database restart. This creates a file named “cwallet.sso” under the same location specified in the sqlnet.ora file.

 

SYS@srprim1>administer key management create auto_login keystore from keystore '/u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/' identified by "oracle123";

keystore altered.

 

SYS@srprim1>host
[oracle@12cnode1 wallet]$ ls -lrt /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/
total 12
-rw-r--r--. 1 oracle oinstall 2400 Aug 29 11:46 ewallet_2018082906160156.p12
-rw-r--r--. 1 oracle oinstall 3848 Aug 29 11:46 ewallet.p12
-rw-r--r--. 1 oracle oinstall 3893 Aug 29 12:29 cwallet.sso

 

[oracle@12cnode1 wallet]$ srvctl stop database -d srprim

 

Copy this auto login wallet file (cwallet.sso) to the second node 12cnode2 as well and then start the database. With this in place, the wallets should open automatically for both instances.

 

[oracle@12cnode1 wallet]$ scp cwallet.sso oracle@12cnode2:/u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/
cwallet.sso                                                                                                           100% 3893     3.8KB/s   00:00    
[oracle@12cnode1 wallet]$ srvctl start database -d srprim

 

SYS@srprim1>select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS     WALLET_TYPE     WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- ---------- --------------- --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/    OPEN       AUTOLOGIN       SINGLE    NO                 0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim2/wallet/    OPEN       AUTOLOGIN       SINGLE    NO                 0

 

With the primary set up, the standby nodes should not be a problem. Perform the same initial steps as on the primary nodes: add the below entry (specifying the wallet location) to the sqlnet.ora file on all the nodes of the standby cluster.
Please note that if the directory structure is different, adjust the entry to point to the appropriate location.

 

ENCRYPTION_WALLET_LOCATION = 
 (SOURCE = 
   (METHOD = FILE)
   (METHOD_DATA =
     (DIRECTORY = /u01/app/oracle/product/12.1.0.2/db_1/network/admin/$ORACLE_SID/wallet)
    )
  )

 

  
[oracle@1212drnode1 admin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/network/admin
[oracle@1212drnode1 admin]$ mkdir -p srpstb1/wallet

 

[oracle@1212drnode2 admin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/network/admin
[oracle@1212drnode2 admin]$ mkdir -p srpstb2/wallet

 

Since we haven't bounced the standby instances after adding the above entry to the sqlnet.ora file, WRL_PARAMETER still points to the initial location. Let's copy the wallet files from the primary node and then bounce the standby instances in one go.

 

SQL> select * from gv$encryption_wallet order by 1;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS          WALLET_TYP WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- --------------- ---------- --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/admin/srpstb/wallet              NOT_AVAILABLE   UNKNOWN    SINGLE    UNDEFINED          0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/admin/srpstb/wallet              NOT_AVAILABLE   UNKNOWN    SINGLE    UNDEFINED          0

 

SCP all the wallet files (the master key wallet and the auto login wallet) from any one of the primary nodes to the first standby node.

 

[oracle@12cnode1 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srprim1/wallet/
[oracle@12cnode1 wallet]$ ls -lrt
total 12
-rw-r--r--. 1 oracle oinstall 2400 Aug 29 11:46 ewallet_2018082906160156.p12
-rw-r--r--. 1 oracle oinstall 3848 Aug 29 11:46 ewallet.p12
-rw-r--r--. 1 oracle oinstall 3893 Aug 29 12:29 cwallet.sso

 

[oracle@12cnode1 wallet]$ scp *wallet.* oracle@1212drnode1:/u01/app/oracle/product/12.1.0.2/db_1/network/admin/srpstb1/wallet/
Warning: the RSA host key for '1212drnode1' differs from the key for the IP address '192.168.0.120'
Offending key for IP in /home/oracle/.ssh/known_hosts:14
Matching host key in /home/oracle/.ssh/known_hosts:16
Are you sure you want to continue connecting (yes/no)? yes
oracle@1212drnode1's password: 
cwallet.sso                                                                                                          100% 3893     3.8KB/s   00:00    
ewallet.p12                                                                                                          100% 3848     3.8KB/s   00:00    
[oracle@12cnode1 wallet]$ 

 

Similarly, SCP the same wallet files from the primary node to the second node of the standby cluster as well.

 

[oracle@12cnode1 wallet]$ scp *wallet.* oracle@1212drnode2:/u01/app/oracle/product/12.1.0.2/db_1/network/admin/srpstb2/wallet/
The authenticity of host '1212drnode2 (192.168.0.127)' can't be established.
RSA key fingerprint is 0a:f2:b7:22:3b:2b:fa:32:ea:dc:17:c1:22:7c:4e:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '1212drnode2,192.168.0.127' (RSA) to the list of known hosts.
oracle@1212drnode2's password: 
cwallet.sso                                                                                                          100% 3893     3.8KB/s   00:00    
ewallet.p12                                                                                                          100% 3848     3.8KB/s   00:00    
[oracle@12cnode1 wallet]$ 

 

[oracle@1212drnode1 ~]$ srvctl status database -d srpstb -v -f
Instance srpstb1 is running on node 1212drnode1. Instance status: Mounted (Closed).
Instance srpstb2 is running on node 1212drnode2. Instance status: Mounted (Closed).
[oracle@1212drnode1 ~]$ 
[oracle@1212drnode1 ~]$ 

 

Now it's time to bounce the standby database and check the wallet status.

 

[oracle@1212drnode1 ~]$ srvctl stop database -d srpstb
[oracle@1212drnode1 ~]$ 
[oracle@1212drnode1 ~]$ 
[oracle@1212drnode1 ~]$ srvctl start database -d srpstb
[oracle@1212drnode1 ~]$ srvctl status database -d srpstb -v -f
Instance srpstb1 is running on node 1212drnode1. Instance status: Mounted (Closed).
Instance srpstb2 is running on node 1212drnode2. Instance status: Mounted (Closed).
[oracle@1212drnode1 ~]$ 

 

Glad to see that the wallets are open on both the standby nodes and that the wallet type shows as “AUTOLOGIN”.

 

[oracle@1212drnode1 ~]$ sql

SQL*Plus: Release 12.1.0.2.0 Production on Sat Sep 8 12:14:18 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> set linesize 300
SQL> col wrl_type for a10
SQL> col wallet_type for a10
SQL> col status for a15
SQL> col wrl_parameter for a70
SQL> select * from gv$encryption_wallet order by 1;

   INST_ID WRL_TYPE   WRL_PARAMETER                                                          STATUS          WALLET_TYP WALLET_OR FULLY_BAC     CON_ID
---------- ---------- ---------------------------------------------------------------------- --------------- ---------- --------- --------- ----------
         1 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srpstb1/wallet/    OPEN            AUTOLOGIN  SINGLE    NO                 0
         2 FILE       /u01/app/oracle/product/12.1.0.2/db_1/network/admin/srpstb2/wallet/    OPEN            AUTOLOGIN  SINGLE    NO                 0

SQL> 
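
With the wallets open on both sites, TDE can optionally be verified end to end by creating a small encrypted tablespace on the primary; the encrypted redo should then apply cleanly on the standby. A minimal sketch (TDE_TEST is a hypothetical name, and db_create_file_dest is assumed to be set so that OMF names the datafile); the tablespace can be dropped afterwards with DROP TABLESPACE ... INCLUDING CONTENTS AND DATAFILES.

SYS@srprim1>create tablespace tde_test datafile size 50m encryption using 'AES256' default storage (encrypt);

SYS@srprim1>select tablespace_name, encrypted from dba_tablespaces where tablespace_name = 'TDE_TEST';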

 

Finally, ensure that the standby database is in sync with the primary database.

 

[oracle@12cnode1 u02]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - dgconfig

  Protection Mode: MaxPerformance
  Members:
  srprim - Primary database
    srpstb - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
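
As an additional check, the lag can also be confirmed from the standby side itself; a quick sketch querying v$dataguard_stats on any standby instance:

SQL> select name, value from v$dataguard_stats where name in ('transport lag','apply lag');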

 

 

 


 

 

September 19, 2018 / Shivananda Rao P

Configuring TDE in 11.2 RAC database with dataguard enabled

In this post, I describe how to implement TDE in an 11g RAC environment with Data Guard.

 

Environment:
Oracle RDBMS version: 11.2.0.3
Primary nodes: node1, node2
Standby nodes: drnode1, drnode2
Primary Database: crmsdb
Standby Database: crmdb

 

Firstly, let me check if the standby database is in sync with the primary database.

 

[oracle@node1]$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration - crmdb_dg

  Protection Mode: MaxPerformance
  Databases:
    crmsdb - Primary database
    crmdb  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

 

Now, add the below entry to the SQLNET.ORA file on all the nodes of the primary database. The SQLNET.ORA file needs to be present in the location that TNS_ADMIN points to.

 

The value of the DIRECTORY parameter in the below entry specifies the location where the wallet is to be stored. In my case, I'm placing it at “$ORACLE_HOME/network/admin/$ORACLE_SID/wallet”. If the specified directory structure does not exist, you need to create it.

 

ENCRYPTION_WALLET_LOCATION=
(SOURCE=
  (METHOD=FILE)
  (METHOD_DATA=(DIRECTORY=/u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet))
)

 

Please note that in a RAC environment, the wallet can be placed in a shared location that all the nodes of the cluster database can access (ideally on ASM storage, which I shall explain in my coming posts). In this environment, I'm using a separate wallet on each node.

 

Once you add the entry to the SQLNET.ORA file, you need to bounce the instances so that the wallet location is updated correctly when you query the gv$encryption_wallet view.

 

As of now, I have not bounced the 2nd instance, because of which its wallet location points to the wrong place when I query the gv$encryption_wallet view.
Hence, a restart of the 2nd instance is needed.

 

SQL> select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE        WRL_PARAMETER                                                          STATUS
---------- --------------- ---------------------------------------------------------------------- ------------------
         1 file            /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet CLOSED
         2 file            /u01/app/oracle/product/11.2.0.3/db_1/admin/crmsdb/wallet              CLOSED
		 

 

[oracle@node1 admin]$ srvctl stop instance -i crmsdb2 -d crmsdb
[oracle@node1 admin]$ 
[oracle@node1 admin]$ srvctl start instance -i crmsdb2 -d crmsdb
[oracle@node1 admin]$ 

 

Now, querying the gv$encryption_wallet provides the expected result.

 

SQL> select * from gv$encryption_wallet order by inst_id;

   INST_ID WRL_TYPE        WRL_PARAMETER                                                          STATUS
---------- --------------- ---------------------------------------------------------------------- ------------------
         1 file            /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet CLOSED
         2 file            /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet CLOSED
		 

 

On any one instance of the primary database, create the wallet. Here, I’m running it on the first node of the primary database.

 

SQL> alter system set encryption key identified by "oracle123";

System altered.

 

Just cross-check that the wallet file is present under the wallet location.

 

[oracle@node1 ~]$ cd /u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmsdb1/wallet
[oracle@node1 wallet]$ ls -lrt
total 4
-rw-r--r--. 1 oracle oinstall 1573 Aug 25 13:41 ewallet.p12

 

Now, SCP the wallet file to the second node.

 

[oracle@node1 wallet]$ scp ewallet.p12 oracle@node2:/u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmsdb2/wallet/
ewallet.p12                                                                                                           100% 1573     1.5KB/s   00:00    
[oracle@node1 wallet]$ 

 

You now need to open the wallet manually.

 

SQL> alter system set encryption wallet open identified by "oracle123";

System altered.

 


SQL> select * from gv$encryption_wallet order by 1;

   INST_ID WRL_TYPE             WRL_PARAMETER                                                          STATUS
---------- -------------------- ---------------------------------------------------------------------- ------------------
         1 file                 /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet OPEN
         2 file                 /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet OPEN

 

In order to close the wallet manually, you can use the below command.

 

SQL> alter system set encryption wallet close identified by "oracle123";

System altered.

 

It is recommended to enable auto login for the wallet. This ensures that there is no need to open or close the wallet manually: Oracle automatically opens the wallet when an encrypted object is accessed or created. This can be done using the “orapki” utility.

 

[oracle@node1 wallet]$ orapki wallet create -wallet /u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmsdb1/wallet -auto_login
Oracle PKI Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:            
   
[oracle@node1 wallet]$

 

When prompted for the wallet password, provide the password that was used previously to create the key.

 

[oracle@node1 ~]$ cd /u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmsdb1/wallet/
[oracle@node1 wallet]$ ls -lrt
total 8
-rw-r--r--. 1 oracle oinstall 1573 Aug 25 13:41 ewallet.p12
-rw-------. 1 oracle oinstall 1651 Aug 25 13:47 cwallet.sso

 

[oracle@node1 wallet]$ scp cwallet.sso oracle@node2:/u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmsdb2/wallet/
cwallet.sso                                                                                                           100% 1651     1.6KB/s   00:00   

 

Now, bouncing the database will not require you to open the wallet manually, as the auto login option above opens it automatically.

 

[oracle@node1 ~]$ srvctl stop database -d crmsdb
[oracle@node1 ~]$ 
[oracle@node1 ~]$ 
[oracle@node1 ~]$ srvctl start database -d crmsdb

 

[oracle@node1 ~]$ sql

SQL*Plus: Release 11.2.0.3.0 Production on Sat Sep 8 10:55:31 2018

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> set linesize 300
SQL> col wrl_parameter for a70
SQL> select * from gv$encryption_wallet order by 1;

   INST_ID WRL_TYPE             WRL_PARAMETER                                                          STATUS
---------- -------------------- ---------------------------------------------------------------------- ------------------
         1 file                 /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet OPEN
         2 file                 /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet OPEN

 

Now, on the DR nodes, follow the same steps as on the primary nodes.
Add the same entry as above to the sqlnet.ora file on all nodes of the standby and then restart the instances to get the wallet location updated.

 

[oracle@drnode1 admin]$ srvctl stop instance -i crmdb1 -d crmdb
[oracle@drnode1 admin]$ srvctl start instance -i crmdb1 -d crmdb

 

[oracle@drnode1 admin]$ srvctl stop instance -i crmdb2 -d crmdb
[oracle@drnode1 admin]$ srvctl start instance -i crmdb2 -d crmdb
[oracle@drnode1 admin]$ 

 

Once restarted, query the gv$encryption_wallet view to verify that the WRL_PARAMETER value corresponds to the ENCRYPTION_WALLET_LOCATION entry in the SQLNET.ORA file.

 

SQL> select * from gv$encryption_wallet;

   INST_ID WRL_TYPE        WRL_PARAMETER                                                          STATUS
---------- --------------- ---------------------------------------------------------------------- ------------------
         1 file            /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet CLOSED
         2 file            /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet CLOSED

 

On the standby, MRP may terminate with ORA-28365 because the wallet is not yet available there, as seen in the excerpt below.

 

Errors with log +FRA/crmdb/archivelog/2018_08_25/thread_1_seq_64.376.985096395
MRP0: Background Media Recovery terminated with error 28365
Errors in file /u01/app/oracle/diag/rdbms/crmdb/crmdb1/trace/crmdb1_mrp0_12241.trc:
ORA-28365: wallet is not open
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 1361738
MRP0: Background Media Recovery process shutdown (crmdb1)

 

Do not create any key on the standby database. Instead, copy the wallet file (ewallet.p12) from the primary to the wallet location on both standby nodes.

 

		 
[oracle@node1 wallet]$ scp ewallet.p12 oracle@drnode1:/u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmdb1/wallet/
oracle@drnode1's password: 
ewallet.p12                                                                                                           100% 1573     1.5KB/s   00:00    
[oracle@node1 wallet]$ 
[oracle@node1 wallet]$ 
[oracle@node1 wallet]$ scp ewallet.p12 oracle@drnode2:/u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmdb2/wallet/
The authenticity of host 'drnode2 (192.168.0.123)' can't be established.
RSA key fingerprint is 48:83:c6:32:28:0d:25:a8:5a:25:03:c7:21:96:6e:64.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'drnode2,192.168.0.123' (RSA) to the list of known hosts.
oracle@drnode2's password: 
ewallet.p12                                                                                                           100% 1573     1.5KB/s   00:00    
[oracle@node1 wallet]$ 

 

You can now open the wallet manually on the standby database using the below command:

 

SQL> alter system set encryption wallet open identified by "oracle123";

System altered.
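
If MRP had already terminated with ORA-28365 as shown earlier, managed recovery can be restarted now that the wallet is open. A minimal sketch on the standby (with the broker in place, setting the database state back to APPLY-ON achieves the same):

SQL> alter database recover managed standby database using current logfile disconnect from session;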

 

But since I have enabled auto login on the primary database, I'd like the same to be enabled on the standby as well. So I shall copy the cwallet.sso file from the primary to both nodes of the standby.

 

[oracle@node1 ~]$ cd /u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmsdb1/wallet/
[oracle@node1 wallet]$ ls -lrt
total 8
-rw-r--r--. 1 oracle oinstall 1573 Aug 25 13:41 ewallet.p12
-rw-------. 1 oracle oinstall 1651 Aug 25 13:47 cwallet.sso
[oracle@node1 wallet]$ 
[oracle@node1 wallet]$ scp cwallet.sso oracle@drnode1:/u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmdb1/wallet/
oracle@drnode1's password: 
cwallet.sso                                                                                                           100% 1651     1.6KB/s   00:00    
[oracle@node1 wallet]$ 
[oracle@node1 wallet]$ scp cwallet.sso oracle@drnode2:/u01/app/oracle/product/11.2.0.3/db_1/network/admin/crmdb2/wallet/
oracle@drnode2's password: 
cwallet.sso                                                                                                           100% 1651     1.6KB/s   00:00    
[oracle@node1 wallet]$ 

 

[oracle@drnode1 ~]$ srvctl stop database -d crmdb
[oracle@drnode1 ~]$ 
[oracle@drnode1 ~]$ srvctl start database -d crmdb

 

[oracle@drnode1 ~]$ sql

SQL*Plus: Release 11.2.0.3.0 Production on Sat Sep 8 11:17:50 2018

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> col wrl_parameter for a70
SQL> set linesize 300
SQL> select * from gv$encryption_wallet order by 1;

   INST_ID WRL_TYPE             WRL_PARAMETER                                                          STATUS
---------- -------------------- ---------------------------------------------------------------------- ------------------
         1 file                 /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet OPEN
         2 file                 /u01/app/oracle/product/11.2.0.3/db_1/network/admin/$ORACLE_SID/wallet OPEN

 

	 
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      crmdb1           PHYSICAL STANDBY

 

SQL> select process,status,sequence#,inst_id,thread# from gV$managed_standby;

PROCESS   STATUS        SEQUENCE#    INST_ID    THREAD#
--------- ------------ ---------- ---------- ----------
ARCH      CONNECTED             0          1          0
ARCH      CONNECTED             0          1          0
ARCH      CONNECTED             0          1          0
ARCH      CONNECTED             0          1          0
MRP0      APPLYING_LOG         71          1          2
ARCH      CLOSING              87          2          1
ARCH      CLOSING              70          2          2
ARCH      CONNECTED             0          2          0
ARCH      CONNECTED             0          2          0
RFS       IDLE                  0          2          0
RFS       IDLE                  0          2          0

PROCESS   STATUS        SEQUENCE#    INST_ID    THREAD#
--------- ------------ ---------- ---------- ----------
RFS       IDLE                 88          2          1
RFS       IDLE                  0          2          0
RFS       IDLE                 71          2          2
RFS       IDLE                  0          2          0

15 rows selected.

 

[oracle@drnode1 ~]$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration - crmdb_dg

  Protection Mode: MaxPerformance
  Databases:
    crmsdb - Primary database
    crmdb  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

 

 

 


 

 

April 28, 2018 / Shivananda Rao P

PDB Cloning in a Dataguard Environment

In this article, I demonstrate the impact seen on the physical standby database when you clone a pluggable database on the primary.

 

Here is the environment detail.

 

Primary database name: targetdb
Standby database name: tarstdb
PDB name already plugged into CDB: targetpdb
PDB name to be created: targetpdb1

 

Please note that both TARGETPDB and TARGETPDB1 would be under the same container.

 

Before I start, here is a brief overview of the setup.
I have the DG broker configured, and as seen below, the standby database is in sync with the primary database.

 

[oracle@ora1-1 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - dgtest

  Protection Mode: MaxPerformance
  Members:
  targetdb - Primary database
    tarstdb  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 49 seconds ago)

 

TARGETPDB1 will be created from TARGETPDB and plugged into the same root container as TARGETPDB.
Below is the list of datafiles that TARGETPDB has.

 

SYS@targetdb> select status,instance_Name,database_role from v$database,v$Instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         targetdb         PRIMARY

 

SYS@targetdb> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TARGETPDB                      READ WRITE NO

		 
SYS@targetdb> alter session set container=targetpdb;

Session altered.

 

SYS@targetdb> col file_name for a99
SYS@targetdb> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/TARGETDB/2DBEA8AC28E81AACE0536538A8C0F5B7/DATAFILE/system.272.906204413
+DATA/TARGETDB/2DBEA8AC28E81AACE0536538A8C0F5B7/DATAFILE/sysaux.290.906204413
+DATA/TARGETDB/2DBEA8AC28E81AACE0536538A8C0F5B7/DATAFILE/test.271.907334287
+DATA/TARGETDB/2DBEA8AC28E81AACE0536538A8C0F5B7/DATAFILE/myobjects.291.930751499
+DATA/TARGETDB/2DBEA8AC28E81AACE0536538A8C0F5B7/DATAFILE/migtbs1.279.930751523
+DATA/TARGETDB/2DBEA8AC28E81AACE0536538A8C0F5B7/DATAFILE/users.273.913039191

6 rows selected.

 

Let me clone TARGETPDB to create TARGETPDB1. In order to clone, TARGETPDB needs to be opened in READ ONLY mode, which is done in the steps below.
Logging into the CDB$ROOT container, I open TARGETPDB in READ ONLY mode and then issue the CREATE PLUGGABLE DATABASE command. Please note that while creating the PDB, I'm also using the clause “STANDBYS=NONE”.

 

If I were cloning from PDB$SEED, the datafiles of the newly created PDB would automatically get replicated on the physical standby side without needing the “STANDBYS=NONE” clause. But when a PDB is cloned from another PDB under the same container in a Data Guard environment, the datafiles may not get replicated to the physical standby side if the physical standby database is in MOUNT mode. In such a case, use the “STANDBYS=NONE” clause, which defers recovery of the newly created PDB on the physical standby database. The datafiles for the new PDB are marked as OFFLINE/RECOVER in the physical standby database and any additional redo for that PDB is ignored. The expectation is that at some point in the future the PDB's files will be copied to the physical standby database and recovery for the PDB will be enabled.

 

Here are the cloning and creation steps:
On the primary, place “targetpdb” in READ ONLY mode.

 

SYS@targetdb> alter pluggable database targetpdb close immediate;

Pluggable database altered.

SYS@targetdb> alter pluggable database targetpdb open read only;

Pluggable database altered.

SYS@targetdb>

 

Now, create pluggable database “targetpdb1” with the clause “standbys=none”.

 

SYS@targetdb> create pluggable database targetpdb1 from targetpdb standbys=none;

Pluggable database created.

SYS@targetdb>		 

SYS@targetdb> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TARGETPDB                      READ ONLY  NO
         4 TARGETPDB1                     MOUNTED
SYS@targetdb>

 

As seen above, “targetpdb” is still in READ ONLY mode, so we need to revert it to READ WRITE mode. We also need to open the newly created PDB “targetpdb1”.

 

SYS@targetdb> alter pluggable database targetpdb close immediate;

Pluggable database altered.

SYS@targetdb> alter pluggable database targetpdb open;

Pluggable database altered.

SYS@targetdb> alter pluggable database targetpdb1 open;

Pluggable database altered.

SYS@targetdb> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TARGETPDB                      READ WRITE NO
         4 TARGETPDB1                     READ WRITE NO
SYS@targetdb>

 

Moving on to the physical standby database side, let’s check the status of the PDBs.

 

SYS@tarstdb> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 TARGETPDB                      MOUNTED
         4 TARGETPDB1                     MOUNTED
SYS@tarstdb>

 

Checking the recovery status of each of the PDBs plugged into the physical standby container database, we see that recovery of the newly created PDB “targetpdb1” is DISABLED, because we used the clause “standbys=none” while creating the PDB.

 

SYS@tarstdb> select name, recovery_status from v$pdbs;

NAME                           RECOVERY
------------------------------ --------
PDB$SEED                       ENABLED
TARGETPDB                      ENABLED
TARGETPDB1                     DISABLED

SYS@tarstdb> alter session set container=targetpdb1;

Session altered.

SYS@tarstdb> select name, status from v$datafile;

NAME                                                    STATUS
------------------------------------------------------- -------
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00021  SYSOFF
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00022  RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00023  RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00024  RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00025  RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00026  RECOVER

6 rows selected.

 

When I query v$recover_file, I see that all the files of the newly created PDB are reported as missing on the standby.

 

SYS@tarstdb> select * from v$recover_file;

     FILE# ONLINE  ONLINE_ ERROR                             CHANGE# TIME          CON_ID
---------- ------- ------- ------------------------------ ---------- --------- ----------
        21 OFFLINE OFFLINE FILE MISSING                            0                    4
        22 OFFLINE OFFLINE FILE MISSING                            0                    4
        23 OFFLINE OFFLINE FILE MISSING                            0                    4
        24 OFFLINE OFFLINE FILE MISSING                            0                    4
        25 OFFLINE OFFLINE FILE MISSING                            0                    4
        26 OFFLINE OFFLINE FILE MISSING                            0                    4

6 rows selected.

 

OK, so how do we get these files onto the standby? A simple option is to restore them from the primary database (targetdb) through RMAN using its net service name.

 

I have the DG broker configured for my setup, so I'm just making sure that the standby database is in sync with the primary database.

 

[oracle@ora1-1 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - dgtest

  Protection Mode: MaxPerformance
  Members:
  targetdb - Primary database
    tarstdb  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 9 seconds ago)

 

Now, it's time to restore the files of the newly created PDB (targetpdb1) from the primary to the standby site. I restore them using the “from service” clause, connecting to the primary over the network.

 

I use “set newname for pluggable database targetpdb1 to new” so that the datafiles are restored as OMF files under the location specified by the “db_create_file_dest” parameter. This is followed by “restore pluggable database targetpdb1 from service targetdb”, which restores that particular PDB over the net service connecting to the primary. Finally, the restored datafiles are switched.

 

[oracle@ora1-4 dbs]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Mar 17 12:22:59 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TARGETDB (DBID=1297732342, not open)

RMAN> run
2> {
3> set newname for pluggable database targetpdb1 to new;
4> restore pluggable database targetpdb1 from service targetdb;
5> switch datafile all;
6> }

executing command: SET NEWNAME

Starting restore at 17-MAR-18
Starting implicit crosscheck backup at 17-MAR-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Crosschecked 14 objects
Finished implicit crosscheck backup at 17-MAR-18

Starting implicit crosscheck copy at 17-MAR-18
using channel ORA_DISK_1
Crosschecked 11 objects
Finished implicit crosscheck copy at 17-MAR-18

searching for all files in the recovery area
cataloging files...
no files cataloged

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service targetdb
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00021 to +DATA_NEW
channel ORA_DISK_1: restore complete, elapsed time: 00:00:26
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service targetdb
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00022 to +DATA_NEW
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service targetdb
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00023 to +DATA_NEW
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service targetdb
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00024 to +DATA_NEW
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service targetdb
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00025 to +DATA_NEW
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service targetdb
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00026 to +DATA_NEW
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 17-MAR-18

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch command at 03/17/2018 12:26:38
ORA-19563: datafile copy header validation failed for file +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/myobjects.282.971012125

RMAN>

 

Quite strange!! The datafiles of “targetpdb1” are restored, but the switch operation errors out. The datafile it complains about was originally plugged in as part of a transportable tablespace migration; I suspect that could be the reason, but I'm unsure why it failed during the switch command. I shall investigate that and post about it in the coming days.

 

As a workaround, I decided to rename those files manually. In order to do that, I set the “StandbyFileManagement” property to “Manual” and stop recovery on the standby database (tarstdb).

 

DGMGRL> edit database tarstdb set property StandbyFileManagement =Manual;
Property "standbyfilemanagement" updated
DGMGRL> edit database tarstdb set state='apply-off';
Succeeded.

 

The datafile names and their status for the newly created PDB (targetpdb1) on the standby site need to be captured.

 

SYS@tarstdb> select name,status from v$datafile;

NAME                                                                             STATUS
-------------------------------------------------------------------------------- -------
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00021                           SYSOFF
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00022                           RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00023                           RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00024                           RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00025                           RECOVER
/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00026                           RECOVER

6 rows selected.

 

From the alert log of the standby database, I get the OMF names of the files restored in the above step for the new PDB (targetpdb1).

 

Sat Mar 17 13:34:38 2018
Full restore complete of datafile 21 to datafile copy +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/system.268.971012063.  Elapsed time: 0:00:13
  checkpoint is 10105360
  last deallocation scn is 8421271
  Undo Optimization current scn is 10090649
Sat Mar 17 13:35:19 2018
Full restore complete of datafile 22 to datafile copy +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/sysaux.270.971012089.  Elapsed time: 0:00:30
  checkpoint is 10105405
  last deallocation scn is 4478083
Full restore complete of datafile 23 to datafile copy +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/test.281.971012125.  Elapsed time: 0:00:00
  checkpoint is 10105471
  last deallocation scn is 1594145
Full restore complete of datafile 24 to datafile copy +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/myobjects.282.971012125.  Elapsed time: 0:00:01
  checkpoint is 10105481
  last deallocation scn is 907776
Full restore complete of datafile 25 to datafile copy +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/migtbs1.283.971012127.  Elapsed time: 0:00:00
  checkpoint is 10105484
  last deallocation scn is 907776
Full restore complete of datafile 26 to datafile copy +DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/users.263.971012129.  Elapsed time: 0:00:01
  checkpoint is 10105489
  last deallocation scn is 3
Sat Mar 17 13:37:45 2018
Checker run found 13 new persistent data failures

 

I then rename each of these datafiles manually after connecting to CDB$ROOT on the standby database. I understand this is a tedious process, but for the time being I had to go with it.
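
To cut down on the typing, the rename statements could also be generated from the controlfile; a rough sketch, assuming each restored file is still registered as a datafile copy in v$datafile_copy with a matching FILE# (review the generated statements before running them):

SQL> select 'alter database rename file ''' || df.name || ''' to ''' || dc.name || ''';'
     from v$datafile df, v$datafile_copy dc
     where df.file# = dc.file# and df.name like '%UNNAMED%' and dc.status = 'A';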

 

SYS@tarstdb>  alter database rename file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00021' to '+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/system.268.971012063';

Database altered.

SYS@tarstdb>  alter database rename file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00022' to '+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/sysaux.270.971012089';

Database altered.

SYS@tarstdb> alter database rename file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00023' to '+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/test.281.971012125';

Database altered.

SYS@tarstdb> alter database rename file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00024' to '+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/myobjects.282.971012125';

Database altered.

SYS@tarstdb> alter database rename file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00025' to '+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/migtbs1.283.971012127';

Database altered.

SYS@tarstdb> alter database rename file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/UNNAMED00026' to '+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/users.263.971012129';

Database altered.

 

Now, let me check the datafile names after renaming them. All looks good as below.

 

SYS@tarstdb>  select name,status from v$datafile;

NAME                                                                                   	STATUS
-------------------------------------------------------------------------------------  	-------
+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/system.268.971012063       	SYSOFF
+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/sysaux.270.971012089       	RECOVER
+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/test.281.971012125         	RECOVER
+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/myobjects.282.971012125    	RECOVER
+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/migtbs1.283.971012127      	RECOVER
+DATA_NEW/TARSTDB/6796FDF7366926AEE0536538A8C0B057/DATAFILE/users.263.971012129         RECOVER

6 rows selected.

 

We now need to enable recovery for the newly created PDB targetpdb1 on the standby database. For this, connect to the targetpdb1 container and enable recovery.

 

SYS@tarstdb> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 TARGETPDB                      MOUNTED
         4 TARGETPDB1                     MOUNTED
SYS@tarstdb>
SYS@tarstdb> alter session set container=targetpdb1;

Session altered.

SYS@tarstdb> alter pluggable database enable recovery;

Pluggable database altered.

 

The change made above to the “StandbyFileManagement” property needs to be reverted to “AUTO”, and MRP needs to be started on the standby database.

 

DGMGRL> edit database tarstdb set property StandbyFileManagement =AUTO;
Property "standbyfilemanagement" updated
DGMGRL> edit database tarstdb set state='apply-on';
Succeeded.

 

Crosscheck if all looks fine.

 

SYS@tarstdb> select name, recovery_status from v$pdbs;

NAME                           RECOVERY
------------------------------ --------
PDB$SEED                       ENABLED
TARGETPDB                      ENABLED
TARGETPDB1                     ENABLED

 

SYS@tarstdb> alter session set container=targetpdb1;

Session altered.

SYS@tarstdb> col name for a90
SYS@tarstdb>
SYS@tarstdb> select * from v$recovery_status;

no rows selected

 

Verify that the standby database is in sync with the primary database.

 

On Primary:

 

SYS@targetdb> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
           335

SYS@targetdb> alter system switch logfile;

System altered.

SYS@targetdb> /

System altered.

SYS@targetdb> /

System altered.

SYS@targetdb> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
           338

 

On standby:

 

SYS@tarstdb> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING             337
ARCH      CONNECTED             0
ARCH      CLOSING             336
ARCH      CLOSING             338
RFS       IDLE                  0
RFS       IDLE                339
MRP0      APPLYING_LOG        339

7 rows selected.

 

SYS@tarstdb> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
           337

 

All looks good !!

 

 

 


 

 

February 12, 2018 / Shivananda Rao P

RAC Node Deletion in 12c

In this post, I discuss the steps involved in removing a node from a RAC cluster in Oracle 12c.

 

Environment:

 

Cluster Nodes: 12cnode1, 12cnode2, 12cnode3
Database instances on the above nodes:
srprim1 on 12cnode1
srprim2 on 12cnode2
srprim3 on 12cnode3

 

Grid Infrastructure version: 12.1.0.2
RDBMS version: 12.1.0.2
Node to be removed: 12cnode3

 

Here is a brief overview of the environment. The list of nodes is captured using "olsnodes".

 

[oracle@12cnode3 ~]$ olsnodes
12cnode1
12cnode2
12cnode3
[oracle@12cnode3 ~]$

 

Also, this environment has Flex ASM enabled.

 

[oracle@12cnode3 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[oracle@12cnode3 ~]$
[oracle@12cnode3 ~]$ asmcmd showclusterstate
Normal
[oracle@12cnode3 ~]$

 

Below is the list of the resources managed by this cluster.

 

[oracle@12cnode3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               OFFLINE OFFLINE      12cnode3                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               OFFLINE OFFLINE      12cnode3                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               OFFLINE OFFLINE      12cnode3                 STABLE
ora.net1.network
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
ora.ons
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12cnode1.vip
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.12cnode2.vip
      1        ONLINE  ONLINE       12cnode2                 STABLE
ora.12cnode3.vip
      1        ONLINE  ONLINE       12cnode3                 STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12cnode1                 169.254.3.189 192.16
                                                             8.1.107,STABLE
ora.asm
      1        ONLINE  ONLINE       12cnode1                 Started,STABLE
      3        ONLINE  ONLINE       12cnode2                 Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12cnode1                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.srprim.db
      1        ONLINE  ONLINE       12cnode1                 Open,STABLE
      2        ONLINE  ONLINE       12cnode2                 Open,STABLE
      3        ONLINE  ONLINE       12cnode3                 Open,STABLE
ora.srprim.srprim_i03.svc
      1        ONLINE  ONLINE       12cnode1                 STABLE
--------------------------------------------------------------------------------

 
 

Since Flex ASM is enabled, all my Oracle database instances are running on all the nodes even though ASM is not currently running on node 12cnode3. That should not be a show-stopper for us.

 

[oracle@12cnode3 ~]$ srvctl status asm
ASM is running on 12cnode2,12cnode1
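
 

Since ASM is not running locally on 12cnode3, the database instance on that node is served remotely by one of the surviving ASM instances. To see which database instances each ASM instance is serving, a query along the lines below can be run against an ASM instance (a sketch; column availability may vary slightly between versions):

 

SQL> select inst_id, instance_name, db_name, status from gv$asm_client order by inst_id;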

 

[oracle@12cnode3 ~]$ crsctl get node role config -all
Node '12cnode1' configured role is 'hub'
Node '12cnode2' configured role is 'hub'
Node '12cnode3' configured role is 'hub'

 

[oracle@12cnode3 ~]$  crsctl get node role status -all
Node '12cnode1' active role is 'hub'
Node '12cnode2' active role is 'hub'
Node '12cnode3' active role is 'hub'

 

Below are the details of the voting disk and OCR files.

 

[oracle@12cnode3 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   eba1a2f1e10b4f60bfbe59bd50be0ae6 (/dev/DSK1) [DATA]
Located 1 voting disk(s).

 

[oracle@12cnode3 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1740
         Available space (kbytes) :     407828
         ID                       :  553047649
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@12cnode3 ~]$

 

Status of the database instances and the services currently running on the nodes.

 

[oracle@12cnode3 ~]$ srvctl status database -db srprim -v -f
Instance srprim1 is running on node 12cnode1 with online services srprim_i03. Instance status: Open.
Instance srprim2 is running on node 12cnode2. Instance status: Open.
Instance srprim3 is running on node 12cnode3. Instance status: Open.
[oracle@12cnode3 ~]$

 

Configuration of the database that is currently running on the cluster nodes.

 

[oracle@12cnode3 ~]$ srvctl config database -db srprim
Database unique name: srprim
Database name: srprim
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA/SRPRIM/PARAMETERFILE/spfile.291.904396381
Password file: +DATA/SRPRIM/PASSWORD/pwdsrprim.276.904395317
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services: srprim_i03
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: oinstall
OSOPER group: oinstall
Database instances: srprim1,srprim2,srprim3
Configured nodes: 12cnode1,12cnode2,12cnode3
Database is administrator managed

 

Below is the configuration of the service defined for the "srprim" database.

 

[oracle@12cnode3 ~]$ srvctl config service -s srprim_i03 -db srprim
Service name: srprim_i03
Server pool:
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: srpdb1
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
GSM Flags: 0
Service is enabled
Preferred instances: srprim3
Available instances: srprim1,srprim2

 

As you can see above, the service srprim_i03 is configured to run preferably on the srprim3 instance (node 12cnode3). Hence, we need to modify the "preferred instances" for the service srprim_i03; otherwise, we would face an error stating "Cannot delete selected instance since it belongs to following services 'srprim_i03' as only preferred instance. Modify the services and try again."

 

So, let's modify the preferred instances list for the "srprim_i03" service to "srprim1" and "srprim2".

 

[oracle@12cnode3 ~]$ srvctl modify service -service srprim_i03 -db srprim -n -i srprim1,srprim2
[oracle@12cnode3 ~]$ srvctl status service -s srprim_i03 -db srprim
Service srprim_i03 is running on instance(s) srprim1
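
 

For reference, 12c srvctl also provides a newer syntax for the same change; something along the lines below should be equivalent (a sketch not run in this environment, so verify the options against "srvctl modify service -help" on your version):

 

[oracle@12cnode3 ~]$ srvctl modify service -db srprim -service srprim_i03 -modifyconfig -preferred "srprim1,srprim2"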

 

Stop and start the service to get the new configuration in place.

 

[oracle@12cnode3 ~]$ srvctl stop service -s srprim_i03 -db srprim
[oracle@12cnode3 ~]$ srvctl start service -s srprim_i03 -db srprim
[oracle@12cnode3 ~]$ srvctl status service -s srprim_i03 -db srprim
Service srprim_i03 is running on instance(s) srprim1,srprim2

 

[oracle@12cnode1 ~]$ srvctl status database -db srprim -v -f
Instance srprim1 is running on node 12cnode1 with online services srprim_i03. Instance status: Open.
Instance srprim2 is running on node 12cnode2 with online services srprim_i03. Instance status: Open.
Instance srprim3 is running on node 12cnode3. Instance status: Open.

 

Now all looks good, so let's move on with the node deletion. Removing a node from a cluster involves two steps:

 

1. Removing any database instance residing on the node that needs to be deleted.
2. Removing the node from the cluster.

 

First, let's remove the database instance srprim3 residing on the node to be removed, 12cnode3.

 

Run dbca from a different node of the cluster, not from the node that you are trying to remove. I'm running "dbca" in silent mode here to delete the instance (srprim3) on the node that will be removed.

 

[oracle@12cnode1 ~]$ dbca -silent -deleteInstance -gdbName srprim -instanceName srprim3 -nodeList 12cnode3 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/srprim.log" for further details.
[oracle@12cnode1 ~]$

 

Let's check the status of the database and its instances. Instance srprim3 has now been removed from node 12cnode3.

 

[oracle@12cnode1 ~]$ srvctl status database -db srprim -v -f
Instance srprim1 is running on node 12cnode1 with online services srprim_i03. Instance status: Open.
Instance srprim2 is running on node 12cnode2 with online services srprim_i03. Instance status: Open.
[oracle@12cnode1 ~]$

 

At the database level, it is clear from the output below that the redo thread and the redo logs associated with srprim3 have also been removed.

 

[oracle@12cnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Sep 3 19:19:40 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SYS@srprim1>select group#,thread#,bytes/1024/1024 from v$log;

    GROUP#    THREAD# BYTES/1024/1024
---------- ---------- ---------------
         1          1              50
         2          1              50
         3          2              50
         4          2              50

 

SYS@srprim1>select group#,member from v$logfile order by group#;

    GROUP# MEMBER
---------- ------------------------------------------------------------
         1 +FRA/SRPRIM/ONLINELOG/group_1.257.904395571
         1 +DATA/SRPRIM/ONLINELOG/group_1.282.904395571
         2 +FRA/SRPRIM/ONLINELOG/group_2.258.904395577
         2 +DATA/SRPRIM/ONLINELOG/group_2.283.904395575
         3 +DATA/SRPRIM/ONLINELOG/group_3.289.904396373
         3 +FRA/SRPRIM/ONLINELOG/group_3.259.904396375
         4 +DATA/SRPRIM/ONLINELOG/group_4.290.904396377
         4 +FRA/SRPRIM/ONLINELOG/group_4.260.904396379

8 rows selected.

 

SYS@srprim1>select name,value,inst_id from gv$parameter where name='undo_tablespace';

NAME                           VALUE                                                 INST_ID
------------------------------ -------------------------------------------------- ----------
undo_tablespace                UNDOTBS2                                                    2
undo_tablespace                UNDOTBS1                                                    1
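
 

As a further check, it can be confirmed that dbca also dropped the redo thread and the undo tablespace that belonged to srprim3. A couple of hedged example queries (assuming the undo tablespaces follow the UNDOTBSn naming seen above); only redo threads 1 and 2 and the undo tablespaces UNDOTBS1 and UNDOTBS2 are expected to remain:

 

SYS@srprim1>select thread#, status, enabled from v$thread;
SYS@srprim1>select tablespace_name from dba_tablespaces where contents='UNDO';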

 

Now, on the node to be removed, we need to update the inventory file. So, on 12cnode3, we update the inventory so that the database home node list contains only the node to be removed (12cnode3). This needs to be done only on the node to be removed and is done by running the OUI utility.

 

Current content of the inventory file on 12cnode3:

 

[oracle@12cnode3 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0.2/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@12cnode3 ContentsXML]$

 

Update the database home node list on 12cnode3. Please note that we need to specify the "-local" option to OUI in order to update the inventory only locally.

 

[oracle@12cnode3 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/oui/bin/
[oracle@12cnode3 bin]$
[oracle@12cnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={12cnode3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10237 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@12cnode3 bin]$

 

Below is the latest inventory file on 12cnode3.

 

[oracle@12cnode3 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0.2/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@12cnode3 ContentsXML]$

 

Once the inventory on 12cnode3 is updated so that the database home node list contains only the node to be removed, let's deinstall the RDBMS home on 12cnode3.
Again, please note that we need to specify the "-local" option to the "deinstall" utility so that the RDBMS binaries are removed only on 12cnode3.

 

[oracle@12cnode3 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/deinstall/

[oracle@12cnode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/oracle/product/12.1.0.2/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.1.0.2/grid
The following nodes are part of this cluster: 12cnode3,12cnode2,12cnode1
Checking for sufficient temp space availability on node(s) : '12cnode3'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2017-09-03_07-37-06-PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2017-09-03_07-37-16-PM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed [srprim3,srprim3]:
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check936.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.1.0.2/grid
The following nodes are part of this cluster: 12cnode3,12cnode2,12cnode1
The cluster node(s) on which the Oracle home deinstallation will be performed are:12cnode3
Oracle Home selected for deinstall is: /u01/app/oracle/product/12.1.0.2/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-09-03_07-36-41-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-09-03_07-36-41-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2017-09-03_07-38-17-PM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2017-09-03_07-38-17-PM.log

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean936.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################


####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################


############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2017-09-03_07-32-35PM/response/deinstall_2017-09-03_07-36-41-PM.rsp
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############





####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-09-03_07-36-41-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2017-09-03_07-36-41-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to 12cnode3
Setting CLUSTER_NODES to 12cnode3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2017-09-03_07-32-35PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/12.1.0.2/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/12.1.0.2/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/12.1.0.2/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-09-03_07-32-35PM' on node '12cnode3'

## [END] Oracle install clean ##


######################### DEINSTALL CLEAN OPERATION END #########################


####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0.2/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/12.1.0.2/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL TOOL END #############

[oracle@12cnode3 deinstall]$

 
 

If we check the inventory file on the remaining nodes of the cluster, the database home still lists all the nodes. So, we need to update the inventory file on the remaining nodes so that the node list under the database home contains only the remaining nodes.

 

On 12cnode1, let me update the node list under the database home using the OUI utility.

 

Current content of the inventory file on 12cnode1.

 

[oracle@12cnode1 ~]$ cd /u01/app/oraInventory/ContentsXML/
[oracle@12cnode1 ContentsXML]$ ls -lrt
total 12
-rw-rw----. 1 oracle oinstall 817 Mar 24  2016 inventory.xml
-rw-rw----. 1 oracle oinstall 292 Mar 24  2016 libs.xml
-rw-rw----. 1 oracle oinstall 329 Mar 24  2016 comps.xml
[oracle@12cnode1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0.2/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@12cnode1 ContentsXML]$

 

Update the database home node list using OUI, with the CLUSTER_NODES option listing the remaining nodes of the cluster.

 

[oracle@12cnode1 ~]$ . oraenv
ORACLE_SID = [srprim1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@12cnode1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@12cnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={12cnode1,12cnode2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8125 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@12cnode1 bin]$

 

Once the inventory is updated, let’s verify the contents of the inventory file.

 

[oracle@12cnode1 bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0.2/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@12cnode1 bin]$

 

So, we have now successfully removed the database instance and the RDBMS home from 12cnode3. Let me now proceed with removing the node from the cluster.
To recap, "olsnodes" still lists node 12cnode3. Further actions involve deconfiguring the cluster resources on 12cnode3 and then removing the GI home.

 

[oracle@12cnode1 ~]$ . oraenv
ORACLE_SID = [srprim1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@12cnode1 ~]$
[oracle@12cnode1 ~]$ olsnodes -s
12cnode1        Active
12cnode2        Active
12cnode3        Active
[oracle@12cnode1 ~]$

 

[oracle@12cnode3 ~]$ olsnodes -s
12cnode1        Active
12cnode2        Active
12cnode3        Active
[oracle@12cnode3 ~]$

 

On 12cnode3, deconfigure the cluster resources by running the "rootcrs.pl" script from the GI home with the "-deconfig" option, as the root user.

 

[root@12cnode3 ~]# . oraenv
ORACLE_SID = [root] ? +ASM3
The Oracle base has been set to /u01/app/oracle
[root@12cnode3 ~]#
[root@12cnode3 ~]# cd /u01/app/12.1.0.2/grid/crs/install
[root@12cnode3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.0.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node 12cnode1
VIP Name: 12cnode1-vip.mydomain
VIP IPv4 Address: 192.168.0.117
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node 12cnode2
VIP Name: 12cnode2-vip.mydomain
VIP IPv4 Address: 192.168.0.118
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node 12cnode3
VIP Name: 12cnode3-vip
VIP IPv4 Address: 192.168.0.121
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '12cnode3'
CRS-2673: Attempting to stop 'ora.crsd' on '12cnode3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '12cnode3'
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on '12cnode3'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on '12cnode3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '12cnode3' has completed
CRS-2677: Stop of 'ora.crsd' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on '12cnode3'
CRS-2673: Attempting to stop 'ora.storage' on '12cnode3'
CRS-2673: Attempting to stop 'ora.ctssd' on '12cnode3'
CRS-2673: Attempting to stop 'ora.mdnsd' on '12cnode3'
CRS-2673: Attempting to stop 'ora.gpnpd' on '12cnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12cnode3'
CRS-2677: Stop of 'ora.storage' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '12cnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on '12cnode3' succeeded
CRS-2677: Stop of 'ora.asm' on '12cnode3' succeeded
CRS-2677: Stop of 'ora.ctssd' on '12cnode3' succeeded
CRS-2677: Stop of 'ora.evmd' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '12cnode3'
CRS-2677: Stop of 'ora.mdnsd' on '12cnode3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on '12cnode3' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '12cnode3'
CRS-2677: Stop of 'ora.cssd' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on '12cnode3'
CRS-2677: Stop of 'ora.crf' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '12cnode3'
CRS-2677: Stop of 'ora.gipcd' on '12cnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '12cnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/09/03 20:05:03 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2017/09/03 20:05:37 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2017/09/03 20:05:40 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

[root@12cnode3 install]#

 

Now, remove the node from the cluster using the "crsctl" utility, run as root from one of the surviving nodes.

 

[root@12cnode1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@12cnode1 ~]#
[root@12cnode1 ~]# crsctl delete node -n 12cnode3
CRS-4661: Node 12cnode3 successfully deleted.
[root@12cnode1 ~]#
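
 

The GI home still needs to be removed from 12cnode3 (covered in the next steps). Once all of the cleanup is complete, the cluster verification utility can be used to confirm that the node was removed cleanly; a hedged example, run from one of the surviving nodes:

 

[oracle@12cnode1 ~]$ cluvfy stage -post nodedel -n 12cnode3 -verbose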

 

The following steps involve removing the GI home, which is similar to what we followed while removing the RDBMS home.
First, update the inventory file on 12cnode3 so that the GI home node list contains only this node, since it is no longer part of the cluster. This is done by running the OUI utility from the GI home.

 

Options passed to OUI are as below:

 

ORACLE_HOME: GI home
CLUSTER_NODES: 12cnode3 (The node to be removed)
CRS=TRUE: as we are dealing with the GI home and not the RDBMS home
-local: specifies that the update needs to be done only on this node.

 

[oracle@12cnode3 ~]$ id
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),501(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[oracle@12cnode3 ~]$
[oracle@12cnode3 ~]$ . oraenv
ORACLE_SID = [+ASM3] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@12cnode3 ~]$
[oracle@12cnode3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@12cnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={12cnode3}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10237 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

 

Verify the contents of the inventory file on 12cnode3.

 

[oracle@12cnode3 bin]$
[oracle@12cnode3 bin]$
[oracle@12cnode3 bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="12cnode3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@12cnode3 bin]$

 

Now, I can deinstall the GI home on 12cnode3 by running the “deinstall” script from the GI home.

 

[oracle@12cnode3 ~]$ echo $ORACLE_SID
+ASM3
[oracle@12cnode3 ~]$ cd /u01/app/12.1.0.2/grid/deinstall/
[oracle@12cnode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2017-09-03_08-13-07PM/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/12.1.0.2/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.1.0.2/grid
The following nodes are part of this cluster: 12cnode3
Checking for sufficient temp space availability on node(s) : '12cnode3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2017-09-03_08-13-07PM/logs//crsdc_2017-09-03_08-17-00PM.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2017-09-03_08-13-07PM/logs/netdc_check2017-09-03_08-17-02-PM.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2017-09-03_08-13-07PM/logs/asmcadc_check2017-09-03_08-17-02-PM.log

Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2017-09-03_08-13-07PM/logs/databasedc_check2017-09-03_08-17-02-PM.log

Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.1.0.2/grid
The following nodes are part of this cluster: 12cnode3
The cluster node(s) on which the Oracle home deinstallation will be performed are:12cnode3
Oracle Home selected for deinstall is: /u01/app/12.1.0.2/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2017-09-03_08-13-07PM/logs/deinstall_deconfig2017-09-03_08-16-54-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2017-09-03_08-13-07PM/logs/deinstall_deconfig2017-09-03_08-16-54-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2017-09-03_08-13-07PM/logs/databasedc_clean2017-09-03_08-17-55-PM.log
ASM de-configuration trace file location: /tmp/deinstall2017-09-03_08-13-07PM/logs/asmcadc_clean2017-09-03_08-17-55-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2017-09-03_08-13-07PM/logs/netdc_clean2017-09-03_08-17-55-PM.log

Network Configuration clean config END


######################### DECONFIG CLEAN OPERATION END #########################


####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Oracle Clusterware was already stopped and de-configured on node "12cnode3"
Oracle Clusterware is stopped and de-configured successfully.
#######################################################################


############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2017-09-03_08-13-07PM/response/deinstall_2017-09-03_08-16-54-PM.rsp
Location of logs /tmp/deinstall2017-09-03_08-13-07PM/logs/

############ ORACLE DEINSTALL TOOL START ############





####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/tmp/deinstall2017-09-03_08-13-07PM/logs/deinstall_deconfig2017-09-03_08-16-54-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2017-09-03_08-13-07PM/logs/deinstall_deconfig2017-09-03_08-16-54-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to 12cnode3
Setting CLUSTER_NODES to 12cnode3
Setting CRS_HOME to true
Setting oracle.installer.invPtrLoc to /tmp/deinstall2017-09-03_08-13-07PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/12.1.0.2/grid' from the central inventory on the local node : Done

Failed to delete the directory '/u01/app/12.1.0.2/grid/dc_ocm'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/has'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/ohasd/12cnode3'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/ohasd'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/crs/12cnode3'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/crs'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/css/12cnode3'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/css'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/evm/12cnode3'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth/evm'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/auth'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/wwg'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/diagnostics'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/sqlplus'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/demo'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/jdk'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/jlib'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ord'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/ologgerd/init/12cnode3.pid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/ologgerd/init/12cnode3'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ologgerd/init'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ologgerd'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/cdata'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocssd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_start.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclsvmon'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/scrctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sysresv'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orald'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dg4pwdO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/patchgen'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_register.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oraenv'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clsecho'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/hsallociO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crsd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmsort'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gsd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/asmcmdcore'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/adrci'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/osysmond.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapbind'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extjob'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapmodify'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/afdtool'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rmanO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clscfg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrdump'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/bndlchk'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/scriptagent'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/appagent.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/hsdepxaO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocssdrim'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extjobO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/appvipcfg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sqlplus'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrconfig.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crstmpl.scr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rman'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tkprofO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/setasmgid0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/skgxpinfo'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agtctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cssvfupgd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbgeu_run_action.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrcheck.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/trcroute'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ghappctl.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/netmgr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/diskmon'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kfod'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbshut'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mkpatch'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/trcasst'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/aqxmlctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/renamedg0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/emcrsp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crsdiag.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/loadpsp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/asmca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cluutil'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/impO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gnsd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dg4pwd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orapki'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oprocd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/expO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmshow.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sysresv0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrcheck'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crsctl.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrpatch'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/nidO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ologgerd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_relocate'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sclsspawn'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapmoddn'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbstart'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sqlldrO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cssdmonitor'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/racgwrap'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clscfg.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agapacheas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sbttestO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okadriverstate'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tnsping'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsdriverstate'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ologdbg.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tkprof'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/octssd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmwatch'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfssinglefsmount'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clsid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/osdbagrp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/racgvip'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/setasmgidwrap'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lsnrctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okinit0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmshow'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gpnptool.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clsid.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/nid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evt.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmlogger'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cursize'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mkstore'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclumon.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/impdpO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsrepl_preapply'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_start'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocssd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ojvmtc'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/jssu'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclsomon'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsrepl_apply'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cssvfupgd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/racgeut'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmmklib.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/hsalloci'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapcompare'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gipcd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crswrapexece.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agsiebsrvras'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agtomcatas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oradnssd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oracle'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/osysmond'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ologdbg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclumon'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/afdroot'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oranetmonitor.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/olsnodes'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crsd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cssdagent.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agsiebgtwyas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cemutls'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/impdp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lsnodes'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mapsga'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/wrap'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmpost.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dsml2ldif'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tstshm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/loadjava'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tnsping0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/amdu'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lbuilder'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsrepl_monitor'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oerr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/hsotsO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/appagent'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/uidrvciO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/trcldr0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/emdwgrd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extprocO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oc4jctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dropjava'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmmkbin.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ghappctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/symfind'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/setasmgid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/maxmem'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lsdb.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/adapters'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ndfnceca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/odnsd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gensyslib'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okdstry0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evminfo'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orapwdO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmmkbin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/fmputlhp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ohasd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cursizeO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/unzip'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extproc'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/schema'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genorasdksh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/odnsd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kgmgrO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tstshmO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/usrvip'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sbttest'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genoccish'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oc4jctl.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agpsappas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orapki.bat'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsrepl_apply.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclumon.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gpnpd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/aggoldengateas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genclntsh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orapwd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cluvfy'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mapsga0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/onsctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mkstore.bat'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/asmproxy'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oifcfg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapdelete'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsregistrymount'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldifmigrator'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tnnfg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genclntst'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbua'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/diagsetup'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cemutlo.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_profile'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_getperm.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mdnsd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mkpatchO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oerr.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ncomp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_stat'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oraagent.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/schemasync'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rawutl0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/coraenv'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/expdpO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/srvctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oifcfg.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kfod.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extusrupgrade'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbhome'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbfsizeO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/uidrvci'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cemutlo'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_unregister.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okinit'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lsdb'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapaddmt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmwatch.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orabase'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/linkshlib'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okdstry'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/owm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agtctlO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crfsetenv'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_unregister'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agpsbatchas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/odig.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/agpspiaas'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/statusnc'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lxchknlb'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/expdp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/olsnodes.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orarootagent.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_getperm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orion'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clsecho.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/qosctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/hsots'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oraagent'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extjoboO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/emdwgrd.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/xmlwf'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_relocate.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mdnsd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gpnpd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oradaemonagent'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/umu'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gnsd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kgmgr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/diagcollection.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oradism'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gennttab'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dumpsga'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/loadpspO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genagtsh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oraxsl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/netca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/octssd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clssproxy.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/deploync'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/appvipcfg.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/trcroute0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/aqxmlctl.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_profile.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genezi'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cssdagent'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rconfig'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dgmgrlO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/renamedg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/wrcO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclskd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmsort.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lxegen'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/odisrvreg'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cssdmonitor.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evminfo.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cemutls.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrconfig'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kfed'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/wrc'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orionO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/eusm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ohasd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/trcsess'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/osh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/afdtool.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rhpctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_register'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/sqlldr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rdtool'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/echodo'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/hsdepxa'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ojvmjava'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/vipca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/relink'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapsearch'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/exp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/adrciO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/amduO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/asmcmd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmpost'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oidca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbfs_client'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/afdload'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmlogger.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oracleO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/genksms'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/emca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oranetmonitor'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/cluvfyrac.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbfsize'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lsnodes.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/trcldr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tnslsnr0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/diskmon.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_stop.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/plshprofO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/wrapO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/emcrsp.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oclskd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsreplcrs'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oraping'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lmsgen'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/netca_deinst.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gipcd.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oklist0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orajaxb'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/scriptagent.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dumpsga0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oidprovtool'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_setperm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/clssproxy'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/racgevtf'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/imp'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gennfgt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/racgmain'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crsctl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oklist'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/afddriverstate'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/tnslsnr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocssdrim.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/maxmemO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lcsscan'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/orarootagent'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsroot'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/osdbagrp0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ocrdump.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kfedO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oradnssd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dgmgrl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/extjobo'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/oraxml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsrepl_initializer'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/diagcollection.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/kfodO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_stat.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/zip'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsrepl_transport'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/acfsload'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/platform_common'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapadd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/plshprof'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbv'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/evmmklib'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/gpnptool'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/ldapmodifymt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lxinst'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okaload'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/fmputl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/odig'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/srvconfig'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/skgxpinfoO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_setperm.bin'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/geneziO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/rawutl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/mgmtca'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/dbvO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/crs_stop'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/okaroot'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/afdboot'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/bin/lsnrctl0'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/bin'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/javavm'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/xdk'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/oracore'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ucp'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/ctss/init/12cnode3.pid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/ctss/init/12cnode3'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ctss/init'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ctss'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/osysmond/init/12cnode3.pid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/osysmond/init/12cnode3'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/osysmond/init'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/osysmond'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/install'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/assistants'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/profiles/peer'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/profiles'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/wallets/pa'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/wallets/root'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/wallets/prdr'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/wallets/peer'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1/wallets'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/new__12cnode1'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/profiles/peer'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/profiles'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/wallets/pa'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/wallets/root'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/wallets/prdr'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/wallets/peer'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3/wallets'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507/stg__12cnode3'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp/gpnp_bcp__2016_3_23_18507'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gpnp'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/tfa'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/racg'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/deinstall'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/cfgtoollogs'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/inventory'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ohasd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/sqlpatch'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/dbs'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/root.sh'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/OPatch'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/clone'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gipc'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/utl'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crf/admin/crf12cnode3.cfg'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/admin/run/crflogd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/admin/run/crfmond'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/admin/run'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/crf/admin/crf12cnode3.ora'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/admin'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/crf/db/12cnode3/proc/procdump.l01'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crf/db/12cnode3/proc/procdump.l02'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crf/db/12cnode3/proc/localdump.hdr'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crf/db/12cnode3/proc/procdump.log'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/db/12cnode3/proc'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/db/12cnode3'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf/db'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crf'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/srvm'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/cv'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/owm'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/addnode'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/oc4j'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gnsd/init'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/gnsd'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/nls'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/scheduler'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/ldap'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/usm'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/jdbc'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/eons'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsrvmcred12.so'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/lib/stubs'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libba12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcorejava.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmddisk.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_afdlib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxn2.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/clntshcore.map'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntshcore.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfstoolsdriver.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/s0main.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libexpat.la'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcore12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnls12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liboramysql12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcommon12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libuini12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libpatchgensh12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_acfsroot.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libordimt12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libgx12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libexpat.so.1'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/naeet.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/nnfgt.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsrvm12.so0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libosbws12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcell12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfssinglefsmount.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntsh.so.11.1'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libagtsh.so.1.0'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippdcemerged.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libgns12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_unix_linux_afdlib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskvol12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libshpksse4212.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdambr.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxpd.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_afdroot.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/sscoreed.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsnls12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnoname12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdaudit.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libntcp12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libntcpaio12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libplc12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsqlplus.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libshpkavx12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/librdjni12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libvsn12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libvsn12_std.a.dbl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocijdbc12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnsgr12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libntns12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcommand.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libodm12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libshpkavx212.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnnetd12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libkubsagt12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxpg.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsvml.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libacfs12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libwwg.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libonsx.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_base_non_fips.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xmlparserv2.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libgnsjni12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_okaroot.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/activation.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libmm.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcryptocme.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnnz12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/afdlib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_asym.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libons.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippdcmerged.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libipp_bz2.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxpcompat.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxns.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libxml12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/okalib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxp12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntsh.so.12.1'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocrutl12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_ecc_accel_non_fips.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_base.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libexpat.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/afdtoolsdriver.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocci.so.12.1'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmddiag.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnid.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcell12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/facility.lis'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libowm2.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/afdload.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcrf12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libpsa12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libheteroxa12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdvol.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnzjs12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liboraz.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_acfslib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnbeq12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_ecc_accel_fips.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xmlparserv2_sans_jaxp_services.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libodm12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libasmclnt12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocci12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libeonsserver.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/oc4jctl_lib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdparser.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntsh.so.10.1'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdamdu.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libasmperl12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcryptocme.sig'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xmlparserv2_jaxp_services.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdbase.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_afddriverstate.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libshpkavx12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/okadriverstate.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libplp12_pic.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdanlz.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libordim12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libirc.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libn12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntshcore.so.12.1'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libgeneric12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclsr12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/nigcon.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_okadriverstate.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_acfsload.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libldapclnt12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_acfsregistrymount.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libpls12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/sysliblist'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xschema.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/okatoolsdriver.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libzx12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/afdroot.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libons.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libvsn12_cee.a.dbl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libexpat.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntst12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/nautab.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/ldflagsO'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsqlplus.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/nigtab.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippsemerged.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libncrypt12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libplp12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocr12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippsmerged.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsrvm12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liborion12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libzt12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskgxpr.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnus12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libasmclntsh12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfsreplcrs.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libntmq12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libntcps12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libeons.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnnz12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libdbcfg12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/ntcontab.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libagfw12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_ecc.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/ldflags'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnro12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdexceptions.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libqsmashr.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_unix_linux_okalib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnque12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcxaguard.so.5'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippcore.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdtmpl.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclib_jiio.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfsregistrymount.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libordim12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsrvmhas12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdglobal.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libimf.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclient12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libztkg12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libpls12_pic.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libcrf_mdb12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/oc4jctl_common.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libvsn12_cse.a.dbl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xmlmesg.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libxdb.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/afddriverstate.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnhost12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libccme_ecc_non_fips.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfsroot.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/okaload.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libmql1.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libldapjclnt12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libhasgen12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/naedhs.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfsdriverstate.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libipp_z.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libldapjclnt12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsql12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libplc12_pic.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnl12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libserver12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libctxc12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osntabst.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/scorept.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/http_client.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippcpemerged.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libagent12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdattr.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libasmclntsh12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnjni12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libctx12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libpatchgensh12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclntsh.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdpasswd.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libodmd12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liboramysql12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdug.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnnzst12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libskjcx12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xmlcomp2.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/okaroot.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libslax12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocrb12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_afdload.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libintlc.so.5'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclsce12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_okalib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdshare.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xsu12.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnfsodm12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libavstub12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libippcpmerged.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfslib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/clntsh.map'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_acfssinglefsmount.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liblxled.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libshpkavx212.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liboevm.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libocci.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libshpksse4212.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnnet12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_acfsdriverstate.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libagtsh.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libipc1.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/acfsload.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liborabz2.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libordsdo12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/naect.o'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libexpat.so.1.5.2'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdsys.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_unix_linux_acfslib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libasmperl12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libafd12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libsrvmocr12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libclsra12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/xml.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libunls12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/mail.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmdxmlexceptions.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libnldap12.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libusmcrs12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libopc12.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libmql1.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/asmcmd_disk_header_format'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/libipc1.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/osds_okaload.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/liblzopro.a'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/lib/s_oc4jctl_lib.pm'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/lib'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/plsql'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/acfs'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/diskmon'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/admin'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/log/12cnode3/alert12cnode3.log'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/ctssd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/client'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/gipcd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/crsd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/racg'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/ohasd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/gpnpd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/evmd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/crflogd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/mdnsd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/srvm'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/gnsd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/cssd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/afd'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/crfmond'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3/xag'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log/12cnode3'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/log'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/auth'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/demo'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/script'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/config'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/install'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/public'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/utl/12cnode3/crsconfig_fileperms'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/utl/12cnode3/crsconfig_dirs'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/utl/12cnode3'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/utl'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/trace'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/ohasd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/oka'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/init.ohasd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/12cnode3.pid'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/afd.sles'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/ohasd.sles'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/afd'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/init/12cnode3'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/init'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/lib'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/log'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/profile'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/mesg'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/diskmon.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/gpnp.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/cssd.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/asm.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/mdns.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/evm.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/drivers.acfs.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/daemon.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/application.tdf'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/ctss.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/storage.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/ohasdbase.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/generic.tdf'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/appvipx.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/TYPE_application.cap'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/crf.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/TYPE_generic.cap'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/driver.afd.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/oka.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/haip.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/gipc.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/registry.acfs.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/cssdmonitor.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/appvip.type'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/crs/template/crs.type'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/template'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs/sbs'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/crs'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/precomp'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/network'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/hs'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/dmu'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/opmn'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/instantclient'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/css'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/md'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/QOpatch'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/rest'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/evm'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/internal/usableports.txt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/internal/runstatus.txt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/internal/.buildid'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/internal'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/jewt4.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/je-4.1.27.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/ojdbc6.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/share.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/Symlink.so'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/jsch.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/RATFA.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/je-5.0.84.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib/commons-io-2.2.jar'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/jlib'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/public.jks'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/getppid.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/uninstalltfa'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/tfasetup.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/Text/ASCIITable/Wrap.pm'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/Text/ASCIITable'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/Text/ASCIITable.pm'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/Text'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/discover_ora_stack.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/profiling.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/uninstalltfa.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/tfactl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/tfactl.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/tfactl_lib.pm'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/patchtfa.sh'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/collectfiles.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/metric_iorm.pl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin/tfactl.tmpl'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/bin'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/inittab.sunos'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/oracle-tfa.conf'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/inittab.aix'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/oracle-tfa.service'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/inittab.hpux'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/init.tfa.tmpl'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/inittab'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install/inittab.linux'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/install'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/tfa.jks'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/tfa_directories.txt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/bin/tnt'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/bin'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/conf/tnt.prop.tmpl'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/conf'. The directory is not empty.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib/commons-cli-1.0.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib/commons-lang-2.1.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib/xmlparserv2.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib/xternal.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib/commons-logging-1.1.jar'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib/tnt.jar'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt/lib'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext/tnt'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/ext'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/input'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/tfa_setup.txt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/tfa.md5'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/problemset.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/file_type_patterns.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/tasks.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/ignore_extensions.txt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/searchStrings.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/collect_all_directories.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/date_patterns.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/file_type_patterns_internal.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/ignorefiles.txt'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/scanFileList.xml'. The file is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources/mask_strings.xml'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home/resources'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release/tfa_home'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa/release'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/tfa'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools/orachk'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/suptools'. The directory is not empty.
Failed to delete the directory '/u01/app/12.1.0.2/grid/xag'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/perl'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/wlm'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/rootupgrade.sh'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/relnotes'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/mdns'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/rdbms'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/slax'. The directory is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid/oui'. The directory is in use.
Failed to delete the file '/u01/app/12.1.0.2/grid/oraInst.loc'. The file is in use.
Failed to delete the directory '/u01/app/12.1.0.2/grid'. The directory is not empty.
Delete directory '/u01/app/12.1.0.2/grid' on the local node : Failed <<<<

Delete directory '/u01/app/oraInventory' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-09-03_08-13-07PM' on node '12cnode3'

## [END] Oracle install clean ##


######################### DEINSTALL CLEAN OPERATION END #########################


####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.1.0.2/grid' from the central inventory on the local node.
Failed to delete directory '/u01/app/12.1.0.2/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -r /etc/oraInst.loc' as root on node(s) '12cnode3' at the end of the session.

Run 'rm -r /opt/ORCLfmap' as root on node(s) '12cnode3' at the end of the session.
Run 'rm -r /etc/oratab' as root on node(s) '12cnode3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL TOOL END #############

[oracle@12cnode3 deinstall]$

 

The above error messages can be safely ignored: even though some files under the GI home could not be deleted, the home has been detached from the central inventory.
Now, update the inventory on the remaining nodes so that the node list no longer includes 12cnode3.

 

[oracle@12cnode1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@12cnode1 ~]$
[oracle@12cnode1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@12cnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={12cnode1,12cnode2}" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8126 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@12cnode1 bin]$

 

Just verifying the inventory file post update.

 

[oracle@12cnode1 bin]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0.2/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="12cnode1"/>
      <NODE NAME="12cnode2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@12cnode1 bin]$

 

The “olsnodes” command also now lists only 12cnode1 and 12cnode2 as cluster nodes; 12cnode3 no longer appears.

 

[oracle@12cnode1 bin]$ olsnodes -s
12cnode1        Active
12cnode2        Active
[oracle@12cnode1 bin]$

 

Verify the cluster resources status from any one of the existing nodes.

 

[oracle@12cnode1 bin]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
ora.net1.network
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
ora.ons
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12cnode1.vip
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.12cnode2.vip
      1        ONLINE  ONLINE       12cnode2                 STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12cnode1                 169.254.3.189 192.16
                                                             8.1.107,STABLE
ora.asm
      1        ONLINE  ONLINE       12cnode1                 Started,STABLE
      3        ONLINE  ONLINE       12cnode2                 Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12cnode1                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.srprim.db
      1        ONLINE  ONLINE       12cnode1                 Open,STABLE
      2        ONLINE  ONLINE       12cnode2                 Open,STABLE
ora.srprim.srprim_i03.svc
      1        ONLINE  ONLINE       12cnode1                 STABLE
      2        ONLINE  ONLINE       12cnode2                 STABLE
--------------------------------------------------------------------------------
[oracle@12cnode1 bin]$

 

We can run the “cluvfy” utility to check whether any issues are reported after the node deletion.

 

[oracle@12cnode1 ~]$ cluvfy stage -post nodedel -n 12cnode3 -verbose

Performing post-checks for node removal

Checking CRS integrity...
The Oracle Clusterware is healthy on node "12cnode1"

CRS integrity check passed

Clusterware version consistency passed.
Result:
Node removal check passed

Post-check for node removal was successful.
[oracle@12cnode1 ~]$

 

All looks good now and the node is removed from the cluster.
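
As an optional sanity check (not part of the original steps), the inventory on the other surviving node can be inspected in the same way to make sure it also reflects the reduced node list; a minimal sketch, assuming the same inventory location on 12cnode2:

[oracle@12cnode2 ~]$ grep "NODE NAME" /u01/app/oraInventory/ContentsXML/inventory.xml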

 

 

 


September 9, 2017 / Shivananda Rao P

Flex ASM configuration in Oracle 12c

This article describes how Flex ASM is configured and the advantages it provides. The Flex ASM feature, introduced in 12c, allows ASM instances to run on only a subset of the nodes in the cluster rather than on every node; the number of ASM instances is driven by the ASM cardinality. In versions prior to 12c, an ASM instance had to run on each node of the cluster, and if the ASM instance could not be started on a node, the associated database instances on that node could not be started either. From 12c onwards, this is no longer a concern.

 

Let’s see how Flex ASM can be configured and how it works.

 

Environment:

 

RAC nodes : 12cnode1, 12cnode2, 12cnode3
Cluster version: 12.1.0.2
Database Name: srprim
Instance Name: srprim1 on 12cnode1, srprim2 on 12cnode2, srprim3 on 12cnode3
OS Platform: OEL 6

 

Below shows the list of nodes within the cluster.

 

[oracle@12cnode1 ~]$ olsnodes
12cnode1
12cnode2
12cnode3
[oracle@12cnode1 ~]$

 

Check the cluster mode through ASMCMD. As expected, Flex mode is currently disabled.

 

[oracle@12cnode1 ~]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode disabled

 

Flex ASM requires a separate listener, the ASM listener, configured on a port that is not used by any other listener. It also requires a dedicated network over which the ASM instances and their clients communicate. The private interconnect (used for inter-node communication) can be reused for this purpose, and in my case I am using the private network as the ASM network as well.
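
Before running ASMCA, it helps to confirm the interface name and subnet that will be passed to “-asmNetworks”. A minimal sketch using the oifcfg utility; the eth1/192.168.1.0 line matches the private network used in this post, while the public interface line is purely illustrative:

[oracle@12cnode1 ~]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.1.0  global  cluster_interconnect
[oracle@12cnode1 ~]$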

 

We now need to convert standard ASM to Flex ASM. This can be done with the ASMCA utility; here I am running ASMCA in silent mode rather than through the GUI.

 

The options passed to ASMCA are:

 

-silent: to run in silent mode
-convertToFlexASM : to convert to FlexASM
-asmNetworks: ASM Network to be used in the form of "interface_name/Subnet"
-asmListenerPort: ASM Listener Port number to be used

 

Run this command on one node of the cluster.

 

[oracle@12cnode1 ~]$ asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.1.0 -asmListenerPort 1526

To complete ASM conversion, run the following script as privileged user in local node.
/u01/app/oracle/cfgtoollogs/asmca/scripts/converttoFlexASM.sh

 

Once done, as directed, run the “converttoFlexASM.sh” script as the root user on the node where ASMCA was run.
This script configures the ASM network listener on all nodes of the cluster and then stops and restarts the cluster resources on each node sequentially.

 

[root@12cnode1 ~]# /u01/app/oracle/cfgtoollogs/asmca/scripts/converttoFlexASM.sh
CRS-2673: Attempting to stop 'ora.crsd' on '12cnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '12cnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '12cnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '12cnode1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on '12cnode1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on '12cnode1'
CRS-2673: Attempting to stop 'ora.mgmtdb' on '12cnode1'
CRS-2673: Attempting to stop 'ora.srprim.db' on '12cnode1'
CRS-2677: Stop of 'ora.FRA.dg' on '12cnode1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on '12cnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.12cnode1.vip' on '12cnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on '12cnode1'
CRS-2677: Stop of 'ora.12cnode1.vip' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.12cnode1.vip' on '12cnode3'
CRS-2677: Stop of 'ora.mgmtdb' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on '12cnode1'
CRS-2677: Stop of 'ora.scan1.vip' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on '12cnode3'
CRS-2677: Stop of 'ora.MGMTLSNR' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on '12cnode3'
CRS-2677: Stop of 'ora.srprim.db' on '12cnode1' succeeded
CRS-2676: Start of 'ora.12cnode1.vip' on '12cnode3' succeeded
CRS-2676: Start of 'ora.scan1.vip' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on '12cnode3'
CRS-2676: Start of 'ora.MGMTLSNR' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on '12cnode3'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on '12cnode3' succeeded
CRS-2676: Start of 'ora.mgmtdb' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.ons' on '12cnode1'
CRS-2677: Stop of 'ora.ons' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on '12cnode1'
CRS-2677: Stop of 'ora.net1.network' on '12cnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '12cnode1' has completed
CRS-2677: Stop of 'ora.crsd' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on '12cnode1'
CRS-2673: Attempting to stop 'ora.evmd' on '12cnode1'
CRS-2673: Attempting to stop 'ora.storage' on '12cnode1'
CRS-2677: Stop of 'ora.storage' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '12cnode1'
CRS-2677: Stop of 'ora.ctssd' on '12cnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on '12cnode1' succeeded
CRS-2677: Stop of 'ora.asm' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '12cnode1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '12cnode1'
CRS-2677: Stop of 'ora.cssd' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on '12cnode1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12cnode1'
CRS-2676: Start of 'ora.evmd' on '12cnode1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12cnode1'
CRS-2672: Attempting to start 'ora.diskmon' on '12cnode1'
CRS-2676: Start of 'ora.diskmon' on '12cnode1' succeeded
CRS-2676: Start of 'ora.cssd' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on '12cnode1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12cnode1'
CRS-2676: Start of 'ora.ctssd' on '12cnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on '12cnode1'
CRS-2676: Start of 'ora.asm' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.storage' on '12cnode1'
CRS-2676: Start of 'ora.storage' on '12cnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12cnode1'
CRS-2676: Start of 'ora.crsd' on '12cnode1' succeeded
Oracle Grid Infrastructure restarted in node 12cnode1
PRCC-1014 : ASMNET1LSNR_ASM was already running
PRCR-1004 : Resource ora.ASMNET1LSNR_ASM.lsnr is already running
PRCR-1079 : Failed to start resource ora.ASMNET1LSNR_ASM.lsnr
CRS-5702: Resource 'ora.ASMNET1LSNR_ASM.lsnr' is already running on '12cnode2'
CRS-5702: Resource 'ora.ASMNET1LSNR_ASM.lsnr' is already running on '12cnode1'
CRS-5702: Resource 'ora.ASMNET1LSNR_ASM.lsnr' is already running on '12cnode3'
ASM listener ASMNET1LSNR_ASM running already
CRS-2673: Attempting to stop 'ora.crsd' on '12cnode2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '12cnode2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '12cnode2'
CRS-2673: Attempting to stop 'ora.cvu' on '12cnode2'
CRS-2673: Attempting to stop 'ora.oc4j' on '12cnode2'
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on '12cnode2'
CRS-2673: Attempting to stop 'ora.srprim.db' on '12cnode2'
CRS-2677: Stop of 'ora.cvu' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.cvu' on '12cnode1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on '12cnode2' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on '12cnode2' succeeded
CRS-2677: Stop of 'ora.srprim.db' on '12cnode2' succeeded
CRS-2673: Attempting to stop 'ora.12cnode2.vip' on '12cnode2'
CRS-2677: Stop of 'ora.12cnode2.vip' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.12cnode2.vip' on '12cnode3'
CRS-2676: Start of 'ora.12cnode2.vip' on '12cnode3' succeeded
CRS-2676: Start of 'ora.cvu' on '12cnode1' succeeded
CRS-2677: Stop of 'ora.oc4j' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on '12cnode1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on '12cnode2'
CRS-2673: Attempting to stop 'ora.FRA.dg' on '12cnode2'
CRS-2677: Stop of 'ora.DATA.dg' on '12cnode2' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on '12cnode2' succeeded
CRS-2676: Start of 'ora.oc4j' on '12cnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on '12cnode2'
CRS-2677: Stop of 'ora.ons' on '12cnode2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on '12cnode2'
CRS-2677: Stop of 'ora.net1.network' on '12cnode2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '12cnode2' has completed
CRS-2677: Stop of 'ora.crsd' on '12cnode2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on '12cnode2'
CRS-2673: Attempting to stop 'ora.evmd' on '12cnode2'
CRS-2673: Attempting to stop 'ora.storage' on '12cnode2'
CRS-2677: Stop of 'ora.storage' on '12cnode2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '12cnode2'
CRS-2677: Stop of 'ora.evmd' on '12cnode2' succeeded
CRS-2677: Stop of 'ora.ctssd' on '12cnode2' succeeded
CRS-2677: Stop of 'ora.asm' on '12cnode2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '12cnode2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '12cnode2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '12cnode2'
CRS-2677: Stop of 'ora.cssd' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12cnode2'
CRS-2672: Attempting to start 'ora.evmd' on '12cnode2'
CRS-2676: Start of 'ora.evmd' on '12cnode2' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12cnode2'
CRS-2672: Attempting to start 'ora.diskmon' on '12cnode2'
CRS-2676: Start of 'ora.diskmon' on '12cnode2' succeeded
CRS-2676: Start of 'ora.cssd' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on '12cnode2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12cnode2'
CRS-2676: Start of 'ora.ctssd' on '12cnode2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.asm' on '12cnode2'
CRS-2676: Start of 'ora.asm' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.storage' on '12cnode2'
CRS-2676: Start of 'ora.storage' on '12cnode2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12cnode2'
CRS-2676: Start of 'ora.crsd' on '12cnode2' succeeded
Oracle Grid Infrastructure restarted in node 12cnode2
CRS-2673: Attempting to stop 'ora.crsd' on '12cnode3'
CRS-2677: Stop of 'ora.crsd' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on '12cnode3'
CRS-2673: Attempting to stop 'ora.evmd' on '12cnode3'
CRS-2673: Attempting to stop 'ora.storage' on '12cnode3'
CRS-2677: Stop of 'ora.storage' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '12cnode3'
CRS-2677: Stop of 'ora.evmd' on '12cnode3' succeeded
CRS-2677: Stop of 'ora.ctssd' on '12cnode3' succeeded
CRS-2675: Stop of 'ora.asm' on '12cnode3' failed
CRS-2679: Attempting to clean 'ora.asm' on '12cnode3'
CRS-2681: Clean of 'ora.asm' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '12cnode3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '12cnode3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '12cnode3'
CRS-2677: Stop of 'ora.cssd' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.evmd' on '12cnode3'
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12cnode3'
CRS-2676: Start of 'ora.evmd' on '12cnode3' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12cnode3'
CRS-2672: Attempting to start 'ora.diskmon' on '12cnode3'
CRS-2676: Start of 'ora.diskmon' on '12cnode3' succeeded
CRS-2676: Start of 'ora.cssd' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on '12cnode3'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12cnode3'
CRS-2676: Start of 'ora.ctssd' on '12cnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on '12cnode3'
CRS-2676: Start of 'ora.asm' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.storage' on '12cnode3'
CRS-2676: Start of 'ora.storage' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12cnode3'
CRS-2676: Start of 'ora.crsd' on '12cnode3' succeeded
Oracle Grid Infrastructure restarted in node 12cnode3
[root@12cnode1 ~]#

 

Now that the “converttoFlexASM.sh” script has completed, let’s check the cluster mode again.
It should now show “Flex mode enabled”.

 

[oracle@12cnode1 ~]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled

 

Let’s check the configuration of the ASM listener. As shown below, the ASM cluster listener is configured on port 1526 and on the subnet that was specified while running ASMCA.

 

[oracle@12cnode1 ~]$ srvctl config listener -l ASMNET1LSNR_ASM
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.1.0
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
[oracle@12cnode1 ~]$

 

Let’s check the status of this listener. It needs to be running on all the nodes of the cluster.

 

[oracle@12cnode1 ~]$
[oracle@12cnode1 ~]$ srvctl status listener -l ASMNET1LSNR_ASM
Listener ASMNET1LSNR_ASM is enabled
Listener ASMNET1LSNR_ASM is running on node(s): 12cnode3,12cnode2,12cnode1

 

Let’s also check the ASM configuration to verify the cardinality and confirm that the cluster ASM listener is listed.

 

[oracle@12cnode1 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[oracle@12cnode1 ~]$

 

We see that the ASM cardinality is set to 3. Since this is a 3-node cluster, I will change the ASM cardinality to 2 in order to demonstrate the Flex ASM functionality; with this setting, only two ASM instances need to run.

 

[oracle@12cnode1 ~]$ srvctl modify asm -count 2
[oracle@12cnode1 ~]$
[oracle@12cnode1 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM

 

We can see below that ASM is running only on 2 nodes (12cnode1 and 12cnode3) and no ASM instance is running on 12cnode2.

 

[oracle@12cnode1 ~]$ srvctl status asm
ASM is running on 12cnode3,12cnode1

 

Since ASM is not running on node 12cnode2, let’s check whether the database instance on that node is still running.

 

[oracle@12cnode3 ~]$ srvctl status database -db srprim -v -f
Instance srprim1 is running on node 12cnode1. Instance status: Open.
Instance srprim2 is running on node 12cnode2. Instance status: Open.
Instance srprim3 is running on node 12cnode3. Instance status: Open.

 

We can see above that all the instances of database “srprim” are running on all three nodes of the cluster, which means a database instance is not impacted even when ASM is not running on its node.

 

So, with ASM not running on 12cnode2 but database instance “srprim2” still up, we need to figure out which ASM instance is serving “srprim2”.

 

Let’s check this from the ASM3 instance (the query uses GV$ views, so it can be run from any running ASM instance):

 

SQL> select adg.name,adg.state,ac.instance_name,ac.db_name,adg.inst_id,ac.status,ac.cluster_name from gv$asm_diskgroup adg,gv$asm_client ac where adg.GROUP_NUMBER=ac.GROUP_NUMBER and adg.inst_id=ac.inst_id and ac.db_name='srprim' and adg.name='DATA' order by adg.inst_id;

NAME            STATE       INSTANCE_NAME        DB_NAME     INST_ID STATUS       CLUSTER_NAME
--------------- ----------- -------------------- -------- ---------- ------------ -------------------------------
DATA            MOUNTED     srprim1              srprim            1 CONNECTED    node12c-scan
DATA            MOUNTED     srprim3              srprim            3 CONNECTED    node12c-scan
DATA            MOUNTED     srprim2              srprim            3 CONNECTED    node12c-scan

 

It is clear from the above that the “srprim2” instance is being served by ASM instance 3 on node “12cnode3”. In other words, ASM instance 3 has two clients: srprim2 from 12cnode2 and srprim3 from 12cnode3.

 

The same can be seen in the alert log of the ASM3 instance:

 

NOTE: Flex client id 0x0 [srprim3:srprim:node12c-scan] attempting to connect
NOTE: registered owner id 0x10002 for srprim3:srprim:node12c-scan
NOTE: Flex client srprim3:srprim:node12c-scan registered, osid 23500, mbr 0x0, asmb 23305 (reg:4024785090)
NOTE: client srprim3:srprim:node12c-scan mounted group 1 (DATA)
NOTE: client srprim3:srprim:node12c-scan mounted group 2 (FRA)
Sat Feb 25 12:28:03 2017
NOTE: Flex client id 0x0 [srprim2:srprim:node12c-scan] attempting to connect
NOTE: registered owner id 0x10003 for srprim2:srprim:node12c-scan
NOTE: Flex client srprim2:srprim:node12c-scan registered, osid 23757, mbr 0x0, asmb 24702 (reg:1349866226)
NOTE: client srprim2:srprim:node12c-scan mounted group 1 (DATA)
NOTE: client srprim2:srprim:node12c-scan mounted group 2 (FRA)

 

 


September 6, 2017 / Shivananda Rao P

Manual Upgrade of RAC Database from 11.2.0.3 to 12.1.0.2

In the previous post, we saw how to upgrade the Grid Infrastructure from 11.2.0.3 to 12.1.0.2. In this post, we shall see the steps involved in upgrading the database from 11.2.0.3 to 12.1.0.2.

 
 

Environment:

 

RAC nodes: drnode1, drnode2
DB Name: srprim
DB Instances: srprim1, srprim2
Current DB version: 11.2.0.3.0
DB to be upgraded to version: 12.1.0.2.0
Cluster Storage used: ASM
Platform: OEL 6
Current DB HOME: /u01/app/oracle/product/11.2.0.3/db_1
New 12c DB HOME: /u01/app/oracle/product/12.1.0.2/db_1

 

Since this is an out-of-place upgrade, first install the Oracle 12.1.0.2 database software. In this environment, the software has been unzipped under “/u03” and is installed in silent mode using a response file.

 

The installation is driven by the response file “/u02/db_install.rsp”, as shown in the invocation below; an abbreviated, illustrative sketch of the typical entries in such a response file follows the installer output.

 

[oracle@drnode1 ~]$ cd /u03/database
[oracle@drnode1 database]$ ./runInstaller -silent -responseFile /u02/db_install.rsp -ignoreSysPrereqs
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 32711 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 10229 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-02-21_07-55-11PM. Please wait ...[oracle@drnode1 database]$ You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2017-02-21_07-55-11PM.log
The installation of Oracle Database 12c was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2017-02-21_07-55-11PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.1.0.2/db_1/root.sh

Execute /u01/app/oracle/product/12.1.0.2/db_1/root.sh on the following nodes:
[drnode1, drnode2]


Successfully Setup Software.
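
For reference, a response file for a software-only RAC installation typically contains entries along the following lines. This is an illustrative sketch only, not the author’s actual “/u02/db_install.rsp”: the homes and node names match this environment, while the group names are assumptions; the full template ships with the media under database/response/db_install.rsp.

# Illustrative sketch only -- not the actual response file used in this post
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.1.0
oracle.install.option=INSTALL_DB_SWONLY
# Group names below are assumptions; adjust to your environment
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.CLUSTER_NODES=drnode1,drnode2
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
oracle.install.db.BACKUPDBA_GROUP=dba
oracle.install.db.DGDBA_GROUP=dba
oracle.install.db.KMDBA_GROUP=dba
DECLINE_SECURITY_UPDATES=true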

 

Once the 12.1.0.2 database software has been installed, it’s time for us to upgrade our database.

 

Run the “preupgrd.sql” script, available in the newly installed Oracle 12c home, against the 11.2 database. This script performs prerequisite checks on the database to be upgraded and generates two scripts:

 

1. preupgrade_fixups.sql, which needs to be run on the database to fix any issues reported during the pre-checks.
2. postupgrade_fixups.sql, which needs to be run on the database once it has been upgraded to 12.1.0.2.

 

Let’s run the “preupgrd.sql” script from the newly installed Oracle 12.1.0.2 home (/u01/app/oracle/product/12.1.0.2/db_1) against the database.

 

[oracle@drnode1 ~]$ ls -lrt /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/preupgrd.sql
-rw-r--r--. 1 oracle oinstall 14083 May 15  2014 /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/preupgrd.sql

 

[oracle@drnode1 ~]$
[oracle@drnode1 ~]$ sqlplus sys/oracle@srprim as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Feb 21 21:22:35 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/preupgrd.sql



Loading Pre-Upgrade Package...


***************************************************************************
Executing Pre-Upgrade Checks in SRPRIM...
***************************************************************************


      ************************************************************

                  ====>> ERRORS FOUND for SRPRIM <<====

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.

           You MUST resolve the above errors prior to upgrade

      ************************************************************

      ************************************************************

              ====>> PRE-UPGRADE RESULTS for SRPRIM <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
 /u01/app/oracle/product/11.2.0.3/db_1/cfgtoollogs/srprim/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
 /u01/app/oracle/product/11.2.0.3/db_1/cfgtoollogs/srprim/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
 /u01/app/oracle/product/11.2.0.3/db_1/cfgtoollogs/srprim/preupgrade/postupgrade_fixups.sql

      ************************************************************

***************************************************************************
Pre-Upgrade Checks in SRPRIM Completed.
***************************************************************************

***************************************************************************
***************************************************************************

 

 

We see above that the preupgrd.sql script generated the “preupgrade_fixups.sql” script (to fix any issues reported during the pre-check phase) and the “postupgrade_fixups.sql” script (to be run after the upgrade), both of which need to be run on the database. Review “preupgrade.log” and take the necessary actions on the recommendations made.

 

================================================================================================

 

Preupgrade warning log:

[oracle@drnode1 ~]$ cat /u01/app/oracle/product/11.2.0.3/db_1/cfgtoollogs/srprim/preupgrade/preupgrade.log
Oracle Database Pre-Upgrade Information Tool 02-21-2017 21:24:39
Script Version: 12.1.0.2.0 Build: 006
**********************************************************************
   Database Name:  SRPRIM
  Container Name:  Not Applicable in Pre-12.1 database
    Container ID:  Not Applicable in Pre-12.1 database
         Version:  11.2.0.3.0
      Compatible:  11.2.0.0.0
       Blocksize:  8192
        Platform:  Linux x86 64-bit
   Timezone file:  V14
**********************************************************************
                           [Update parameters]
         [Update Oracle Database 11.2.0.3.0 init.ora or spfile]

--> If Target Oracle is 32-bit, refer here for Update Parameters:
WARNING: --> "processes" needs to be increased to at least 300

--> If Target Oracle is 64-bit, refer here for Update Parameters:
WARNING: --> "processes" needs to be increased to at least 300
**********************************************************************
**********************************************************************
                          [Renamed Parameters]
                     [No Renamed Parameters in use]
**********************************************************************
**********************************************************************
                    [Obsolete/Deprecated Parameters]
             [No Obsolete or Desupported Parameters in use]
**********************************************************************
                            [Component List]
**********************************************************************
--> Oracle Catalog Views                   [upgrade]  VALID
--> Oracle Packages and Types              [upgrade]  VALID
--> JServer JAVA Virtual Machine           [upgrade]  VALID
--> Oracle XDK for Java                    [upgrade]  VALID
--> Real Application Clusters              [upgrade]  VALID
--> Oracle Workspace Manager               [upgrade]  VALID
--> OLAP Analytic Workspace                [upgrade]  VALID
--> Oracle Enterprise Manager Repository   [upgrade]  VALID
--> Oracle Text                            [upgrade]  VALID
--> Oracle XML Database                    [upgrade]  VALID
--> Oracle Java Packages                   [upgrade]  VALID
--> Oracle Multimedia                      [upgrade]  VALID
--> Oracle Spatial                         [upgrade]  VALID
--> Expression Filter                      [upgrade]  VALID
--> Rule Manager                           [upgrade]  VALID
--> Oracle Application Express             [upgrade]  VALID
--> Oracle OLAP API                        [upgrade]  VALID
**********************************************************************
                              [Tablespaces]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
     minimum required size: 1225 MB
--> SYSAUX tablespace is adequate for the upgrade.
     minimum required size: 1509 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
     minimum required size: 400 MB
--> TEMP tablespace is adequate for the upgrade.
     minimum required size: 60 MB
--> EXAMPLE tablespace is adequate for the upgrade.
     minimum required size: 310 MB

                      [No adjustments recommended]

**********************************************************************
**********************************************************************
                          [Pre-Upgrade Checks]
**********************************************************************
WARNING: --> Process Count may be too low

     Database has a maximum process count of 150 which is lower than the
     default value of 300 for this release.
     You should update your processes value prior to the upgrade
     to a value of at least 300.
     For example:
        ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE
     or update your init.ora file.

WARNING: --> Enterprise Manager Database Control repository found in the database

     In Oracle Database 12c, Database Control is removed during
     the upgrade. To save time during the Upgrade, this action
     can be done prior to upgrading using the following steps after
     copying rdbms/admin/emremove.sql from the new Oracle home
   - Stop EM Database Control:
    $> emctl stop dbconsole

   - Connect to the Database using the SYS account AS SYSDBA:

   SET ECHO ON;
   SET SERVEROUTPUT ON;
   @emremove.sql
     Without the set echo and serveroutput commands you will not
     be able to follow the progress of the script.

INFORMATION: --> OLAP Catalog(AMD) exists in database

     Starting with Oracle Database 12c, OLAP Catalog component is desupported.
     If you are not using the OLAP Catalog component and want
     to remove it, then execute the
     ORACLE_HOME/olap/admin/catnoamd.sql script before or
     after the upgrade.

INFORMATION: --> Older Timezone in use

     Database is using a time zone file older than version 18.
     After the upgrade, it is recommended that DBMS_DST package
     be used to upgrade the 11.2.0.3.0 database time zone version
     to the latest version which comes with the new release.
     Please refer to My Oracle Support note number 977512.1 for details.

INFORMATION: --> There are existing Oracle components that will NOT be
     upgraded by the database upgrade script.  Typically, such components
     have their own upgrade scripts, are deprecated, or obsolete.
     Those components are:  OLAP Catalog,OWB

INFORMATION: --> Oracle Application Express (APEX) can be
     manually upgraded prior to database upgrade

     APEX is currently at version 3.2.1.00.12 and will need to be
     upgraded to APEX version 4.2.5 in the new release.
     Note 1: To reduce database upgrade time, APEX can be manually
             upgraded outside of and prior to database upgrade.
     Note 2: See MOS Note 1088970.1 for information on APEX
             installation upgrades.


**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^


**********************************************************************
                     [Post-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ******** Fixed Object Statistics ********
                        *****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
   EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                   ************  Summary  ************

 0 ERRORS exist in your database.
 2 WARNINGS that Oracle suggests are addressed to improve database performance.
 4 INFORMATIONAL messages that should be reviewed prior to your upgrade.

 After your database is upgraded and open in normal mode you must run
 rdbms/admin/catuppst.sql which executes several required tasks and completes
 the upgrade process.

 You should follow that with the execution of rdbms/admin/utlrp.sql, and a
 comparison of invalid objects before and after the upgrade using
 rdbms/admin/utluiobj.sql

 If needed you may want to upgrade your timezone data using the process
 described in My Oracle Support note 1509653.1
                   ***********************************

 

 

As seen above, Oracle made a few recommendations that need to be addressed before performing the upgrade. Let’s run the “preupgrade_fixups.sql” script that was generated earlier.

 

Fixup sql:

 

[oracle@drnode1 ~]$ sqlplus sys/oracle@srprim as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Feb 21 21:50:00 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/u01/app/oracle/product/11.2.0.3/db_1/cfgtoollogs/srprim/preupgrade/preupgrade_fixups.sql

 

If the “preupgrade_fixups.sql” script is unable to address the reported recommendations, fix them manually.
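
For example, the processes and dictionary-statistics warnings from the preupgrade.log above could be addressed manually as follows. A minimal sketch, run as SYSDBA on the 11.2 database (SID='*' applies the change to all RAC instances); the processes change takes effect at the next restart, which happens anyway as part of the upgrade:

SQL> ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE SID='*';
SQL> EXECUTE dbms_stats.gather_dictionary_stats;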

 

Upgrade:

 

Now that the recommendations have been addressed, let’s move on with the upgrade.

 

Create a pfile from the spfile on the first instance of the 11.2.0.3 database, place it in a temporary location, and comment out the following parameters.

 

1. instance_number
2. thread
3. 2nd instance's undo tablespace.
4. cluster_database

 


[oracle@drnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Feb 21 22:10:00 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
SQL> create pfile='/u02/srprim.ora' from spfile;

File created.

 

[oracle@drnode1 u02]$ cat /u02/srprim.ora | grep ^#
#*.cluster_database=true
#srprim1.instance_number=1
#srprim2.instance_number=2
#srprim2.thread=2
#srprim1.thread=1
#srprim2.undo_tablespace='UNDOTBS2'

 

Stop the database running from the 11.2.0.3 home.

 

[oracle@drnode1 u02]$ srvctl stop database -d srprim

 

Now, on the first node, set the environment variables to point to the newly installed Oracle 12c home.

 

[oracle@drnode1 ~]$ export ORACLE_SID=srprim1
[oracle@drnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1
[oracle@drnode1 ~]$ export PATH=$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin
[oracle@drnode1 ~]$ which sqlplus
/u01/app/oracle/product/12.1.0.2/db_1/bin/sqlplus

 

Start the first instance in upgrade mode from the Oracle 12c home, using the pfile that was created earlier in the temporary location.

 

[oracle@drnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 21 22:18:19 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup upgrade pfile='/u02/srprim.ora';
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  2932632 bytes
Variable Size             729809000 bytes
Database Buffers          335544320 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

 

Now it’s time to run the “catupgrd.sql” script available in the 12c Oracle home. Here, I am running it through the parallel upgrade utility “catctl.pl” with 5 parallel processes.

 

[oracle@drnode1 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/rdbms/admin/
[oracle@drnode1 admin]$
[oracle@drnode1 admin]$ nohup /u01/app/oracle/product/12.1.0.2/db_1/perl/bin/perl catctl.pl -n 5 catupgrd.sql > /u02/upgrade.log &

 

Review the log spooled during the execution of “catupgrd.sql” (the upgrade summary is shown later under the post-upgrade steps). Once the execution has completed successfully, copy the pfile and password file of each instance from the 11.2.0.3 home to the 12.1.0.2 Oracle home.
Make sure that the pfile contains only the path to the spfile and nothing else.

 

On the first node:

 

[oracle@drnode1 admin]$ cat /u01/app/oracle/product/11.2.0.3/db_1/dbs/initsrprim1.ora
SPFILE='+DATA/srprim/spfilesrprim.ora'          # line added by Agent

 

[oracle@drnode1 admin]$ cp /u01/app/oracle/product/11.2.0.3/db_1/dbs/orapwsrprim1 /u01/app/oracle/product/12.1.0.2/db_1/dbs/
[oracle@drnode1 admin]$
[oracle@drnode1 admin]$ cp /u01/app/oracle/product/11.2.0.3/db_1/dbs/initsrprim1.ora /u01/app/oracle/product/12.1.0.2/db_1/dbs/
[oracle@drnode1 admin]$

 

On the second node:

 

[oracle@drnode2 ~]$ cat /u01/app/oracle/product/11.2.0.3/db_1/dbs/initsrprim2.ora
SPFILE='+DATA/srprim/spfilesrprim.ora'          # line added by Agent
[oracle@drnode2 ~]$

 

[oracle@drnode2 ~]$ cp /u01/app/oracle/product/11.2.0.3/db_1/dbs/orapwsrprim2 /u01/app/oracle/product/12.1.0.2/db_1/dbs/
[oracle@drnode2 ~]$ cp /u01/app/oracle/product/11.2.0.3/db_1/dbs/initsrprim2.ora /u01/app/oracle/product/12.1.0.2/db_1/dbs/
[oracle@drnode2 ~]$

 

Now, start the first instance from the new 12c home and review the parameters. After that, run the postupgrade_fixups.sql script that was generated earlier.

 

[oracle@drnode1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 01:43:55 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  2932632 bytes
Variable Size             775946344 bytes
Database Buffers          289406976 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL>
SQL> show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
cluster_interconnects                string
SQL>

 

Execution of postupgrade_fixups.sql:

 

SQL> @/u01/app/oracle/product/11.2.0.3/db_1/cfgtoollogs/srprim/preupgrade/postupgrade_fixups.sql


SQL> EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;


SQL> select count(*) from dba_objects where status = 'INVALID';

  COUNT(*)
----------
      6376

SQL> @?/rdbms/admin/utlrp.sql


SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0


SQL> @?/rdbms/admin/catuppst.sql		 


SQL> @?/rdbms/admin/utlu121s.sql

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.



CATCTL REPORT = /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/srprim/upgrade/upg_summary.log

PL/SQL procedure successfully completed.



Oracle Database 12.1 Post-Upgrade Status Tool           02-22-2017 02:17:23

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                             VALID      12.1.0.2.0  00:45:29
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:13:46
Oracle Real Application Clusters          VALID      12.1.0.2.0  00:00:06
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:02:58
OLAP Analytic Workspace                   VALID      12.1.0.2.0  00:01:12
OLAP Catalog                         OPTION OFF      11.2.0.3.0  00:00:00
Oracle OLAP API                           VALID      12.1.0.2.0  00:02:09
Oracle XDK                                VALID      12.1.0.2.0  00:02:17
Oracle Text                               VALID      12.1.0.2.0  00:03:09
Oracle XML Database                       VALID      12.1.0.2.0  00:06:44
Oracle Database Java Packages             VALID      12.1.0.2.0  00:00:54
Oracle Multimedia                         VALID      12.1.0.2.0  00:07:37
Spatial                                   VALID      12.1.0.2.0  00:19:07
Oracle Application Express                VALID     4.2.5.00.08  01:02:58
Final Actions                                                    00:05:38
Post Upgrade                                                     00:00:11

Total Upgrade Time: 02:55:35

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Update Summary Table with con_name and endtime.
SQL> --
SQL> UPDATE sys.registry$upg_summary SET reportname = :ReportName,
  2                                  con_name = SYS_CONTEXT('USERENV','CON_NAME'),
  3                                  endtime  = SYSDATE
  4         WHERE con_id = -1;

1 row updated.

SQL> commit;

Commit complete.

SQL>

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

 

From the 11.2 home, cross-check that the database is down and then remove its 11.2 configuration from clusterware management. (In this post, I am removing the 11.2 database configuration from clusterware management and adding it back with the 12c configuration settings. You can avoid removing the old configuration and adding a new one by simply running “srvctl upgrade database -db <db_name> -oraclehome <oracle_home>” from the 12c Oracle home, as sketched below.)
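
A minimal sketch of that alternative, using the database name and 12c home from this post (run from the 12c Oracle home instead of the remove/add steps that follow):

[oracle@drnode1 bin]$ cd /u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@drnode1 bin]$ ./srvctl upgrade database -db srprim -oraclehome /u01/app/oracle/product/12.1.0.2/db_1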

 

[oracle@drnode1 ~]$ cd /u01/app/oracle/product/11.2.0.3/db_1/bin/
[oracle@drnode1 bin]$ ./srvctl status database -d srprim
Instance srprim1 is not running on node drnode1
Instance srprim2 is not running on node drnode2

 

[oracle@drnode1 bin]$ ./srvctl remove database -d srprim
Remove the database srprim? (y/[n]) y
[oracle@drnode1 bin]$

 

Now, add the database "srprim" back to clusterware management with the 12c home configuration.

 

[oracle@drnode1 bin]$ cd $ORACLE_HOME/bin
[oracle@drnode1 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@drnode1 bin]$
[oracle@drnode1 bin]$ ./srvctl add database -d srprim -o /u01/app/oracle/product/12.1.0.2/db_1
[oracle@drnode1 bin]$
[oracle@drnode1 bin]$
[oracle@drnode1 bin]$ ./srvctl add instance -i srprim1 -d srprim -n drnode1
[oracle@drnode1 bin]$
[oracle@drnode1 bin]$
[oracle@drnode1 bin]$ ./srvctl add instance -i srprim2 -d srprim -n drnode2

 

[oracle@drnode1 bin]$
[oracle@drnode1 bin]$ ./srvctl status database -d srprim -v -f
Instance srprim1 is not running on node drnode1
Instance srprim2 is not running on node drnode2

 

If any services were configured previously, recreate them from the 12c ORACLE_HOME.
Now start the database and its services using srvctl.

 

 

[oracle@drnode1 bin]$ ./srvctl add service -s srprim_any -d srprim -r srprim1,srprim2
[oracle@drnode1 bin]$ ./srvctl start database -d srprim
[oracle@drnode1 bin]$ ./srvctl start service -s srprim_any -d srprim
[oracle@drnode1 bin]$ ./srvctl status database -d srprim -v -f
Instance srprim1 is running on node drnode1 with online services srprim_any. Instance status: Open.
Instance srprim2 is running on node drnode2 with online services srprim_any. Instance status: Open.

 

 


August 21, 2017 / Shivananda Rao P

Grid Infrastructure (GI) upgrade from 11.2.0.3 to 12.1.0.2 in silent mode on RAC

In this article, we shall see the steps involved in upgrading the Grid Infrastructure (CRS) from version 11.2.0.3 to 12.1.0.2 in silent mode.

 

Environment:

 

RAC nodes: drnode1, drnode2
Current GI (CRS) version: 11.2.0.3.0
GI to be upgraded to version: 12.1.0.2.0
Cluster Storage used: ASM
Platform: OEL 6
Current CRS HOME: /u01/app/11.2.0.3/grid
New 12c CRS HOME: /u01/app/12.1.0.2/grid

 

 

Below is the current version of the CRS. The commands are run from the first node and query the software version on each node as well as the cluster's active version.

 

[oracle@drnode1 ~]$ crsctl query crs softwareversion drnode1
Oracle Clusterware version on node [drnode1] is [11.2.0.3.0]

 

[oracle@drnode1 ~]$ crsctl query crs softwareversion drnode2
Oracle Clusterware version on node [drnode2] is [11.2.0.3.0]

 

[oracle@drnode1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

 

 

Download the Oracle 12.1.0.2 Grid Infrastructure software and unzip it. Here, the software is unzipped under "/u03".
Let's run the "cluvfy" utility to perform the prerequisite checks before upgrading. This is done by running the "runcluvfy.sh" script from the unzipped 12c Grid Infrastructure software location.

 

The parameters passed to this script are:

 

-pre crsinst: To perform pre-checks before the CRS installation
-upgrade: To perform upgrade-specific pre-checks
-rolling: To perform pre-checks for a rolling upgrade
-src_crshome: Location of the source GI home
-dest_crshome: Location of the destination GI home
-dest_version: The version to which the GI will be upgraded

 

[oracle@drnode1 ~]$ cd /u03/grid
[oracle@drnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "drnode1"


Checking user equivalence...
User equivalence check passed for user "oracle"
Package existence check passed for "cvuqdisk"

Check: Grid Infrastructure home writeability of path /u01/app/12.1.0.2/grid
PRVG-11932 : Path "/u01/app/12.1.0.2/grid" cannot be created on node "drnode1".
PRVG-11932 : Path "/u01/app/12.1.0.2/grid" cannot be created on node "drnode2".
Grid Infrastructure home check failed

Checking CRS user consistency
CRS user consistency check successful
Checking network configuration consistency.
Check for network configuration consistency passed.
Checking ASM disk size consistency
All ASM disks are correctly sized
Checking if default discovery string is being used by ASM
ASM discovery string "/dev/DSK*" is not the default discovery string
Checking if ASM parameter file is in use by an ASM instance on the local node
ASM instance is using parameter file "+DATA/drnode-scan/asmparameterfile/registry.253.936445691" on node "drnode1" on which upgrade is requested.

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) drnode1,drnode2
TCP connectivity check passed for subnet "192.168.1.0"


Check: Node connectivity using interfaces on subnet "192.168.0.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) drnode1,drnode2
TCP connectivity check passed for subnet "192.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Task ASM Integrity check started...


Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Checking OCR integrity...
Disks "+DATA" are managed by ASM.

OCR integrity check passed

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
        drnode2,drnode1
Available memory check passed
Swap space check passed
Free disk space check passed for "drnode2:/usr,drnode2:/var,drnode2:/etc,drnode2:/sbin"
Free disk space check passed for "drnode1:/usr,drnode1:/var,drnode1:/etc,drnode1:/sbin"
Free disk space check passed for "drnode2:/u01/app/11.2.0.3/grid"
Free disk space check passed for "drnode1:/u01/app/11.2.0.3/grid"
Free disk space check passed for "drnode2:/tmp"
Free disk space check passed for "drnode1:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
There are no oracle patches required for home "/u01/app/11.2.0.3/grid".
There are no oracle patches required for home "/u01/app/11.2.0.3/grid".
Source home "/u01/app/11.2.0.3/grid" is suitable for upgrading to version "12.1.0.2.0".
System architecture check passed
Kernel version check failed
Check failed on nodes:
        drnode2,drnode1
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
drnode2,drnode1
Clock synchronization check using Network Time Protocol(NTP) failed

Core file name pattern consistency check passed.

User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: drnode1,drnode2

Check for integrity of file "/etc/resolv.conf" failed


UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed.

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"

Starting check for Reverse path filter setting ...

Check for Reverse path filter setting passed

Starting check for Network interface bonding status of private interconnect network interfaces ...

Check for Network interface bonding status of private interconnect network interfaces passed

Starting check for /dev/shm mounted as temporary file system ...

Check for /dev/shm mounted as temporary file system passed

Starting check for /boot mount ...

Check for /boot mount passed

Starting check for zeroconf check ...

Check for zeroconf check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
[oracle@drnode1 grid]$

 

Some of the checks failed, such as the "Clock synchronization check using Network Time Protocol (NTP)" check, where the NTP daemon or service was not running, and the DNS response time check. I safely ignore these failures as neither NTP nor DNS is configured on these nodes.
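
As a side note (not captured in the original run), when NTP is not configured, Oracle's Cluster Time Synchronization Service (CTSS) normally takes over time synchronization in active mode. A quick way to confirm this from any node is:

[oracle@drnode1 ~]$ crsctl check ctss

The command reports whether CTSS is running in observer mode (NTP in use) or in active mode (no NTP) and, in active mode, the current time offset.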

 

Now, update the "grid_install.rsp" file available under the unzipped location of the 12c GI software. The response file needs to be updated appropriately based on the environment; a sketch of the kind of entries involved in this upgrade is shown below.
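
The snippet below is a minimal, hedged sketch of the entries that typically need attention in a 12.1 grid_install.rsp for an upgrade. The parameter names come from the stock 12.1 response file, while the values are assumptions based on this environment (for example, oinstall is assumed for all three ASM groups, which matches the installer warnings seen further below):

# grid_install.rsp -- excerpt only; values are assumptions for this environment
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.1.0
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0.2/grid
oracle.install.asm.OSDBA=oinstall
oracle.install.asm.OSOPER=oinstall
oracle.install.asm.OSASM=oinstall
oracle.install.crs.config.clusterNodes=drnode1,drnode2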

 

Before proceeding with the upgrade, unset the CRS HOME variable if it is set in the environment and run "runInstaller" from the unzipped location of the 12c GI software. Pass the "-silent" parameter to run the upgrade in silent mode and "-responsefile" to specify the location of the updated response file.

 

[oracle@drnode1 u02]$ cd /u03/grid
[oracle@drnode1 grid]$
[oracle@drnode1 grid]$ unset ORA_CRS_HOME
[oracle@drnode1 grid]$ echo $ORA_CRS_HOME

[oracle@drnode1 grid]$ nohup ./runInstaller -silent -ignorePrereq -responsefile /u02/grid_install.rsp &
[1] 16082
[oracle@drnode1 grid]$ nohup: ignoring input and appending output to 'nohup.out'

 

Let’s view the output file to see what’s happening.

 

[oracle@drnode1 grid]$ cat nohup.out
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB.   Actual 32905 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 10237 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-02-21_10-10-33AM. Please wait ...[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
[WARNING] [INS-41874] Oracle ASM Administrator (OSASM) Group specified is same as the inventory group.
   CAUSE: Operating system group oinstall specified for OSASM Group is same as the inventory group.
   ACTION: It is not recommended to have OSASM group same as inventory group. Select any of the group other than the inventory group to avoid incorrect configuration.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2017-02-21_10-10-33AM.log
The installation of Oracle Grid Infrastructure 12c was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2017-02-21_10-10-33AM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/12.1.0.2/grid/rootupgrade.sh

Execute /u01/app/12.1.0.2/grid/rootupgrade.sh on the following nodes:
[drnode1, drnode2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
        1. /u01/app/12.1.0.2/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file>

        Note:
        1. This script must be run on the same host from where installer was run.
        2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

 

We now need to run the "rootupgrade.sh" script from the new 12c GI home as the ROOT user on all the nodes. After that, the installer asks us to run "configToolAllCommands" on the first node, drnode1, where runInstaller was run.

 

Below is the output of the execution of the rootupgrade.sh script on the first node, drnode1.

 


[root@drnode1]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Check /u01/app/12.1.0.2/grid/install/root_drnode1.mydomain_2017-02-21_11-08-09.log for the output of root script
[root@drnode1]#

=====================================================================================================================
[oracle@drnode1 grid]$ cat /u01/app/12.1.0.2/grid/install/root_drnode1.mydomain_2017-02-21_11-08-09.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/02/21 11:08:21 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/02/21 11:09:12 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/02/21 11:09:19 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/02/21 11:09:38 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/02/21 11:09:38 CLSRSC-363: User ignored prerequisites during installation

2017/02/21 11:09:58 CLSRSC-515: Starting OCR manual backup.

2017/02/21 11:10:03 CLSRSC-516: OCR manual backup successful.

2017/02/21 11:10:10 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2017/02/21 11:10:10 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0.3/grid -oldCRSVersion 11.2.0.3.0 -nodeNumber 1 -firstNode true -startRolling true'


ASM configuration upgraded in local node successfully.

2017/02/21 11:10:27 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2017/02/21 11:10:27 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/02/21 11:12:46 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2017/02/21 11:20:36 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/02/21 11:27:18 CLSRSC-472: Attempting to export the OCR

2017/02/21 11:27:18 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'

2017/02/21 11:27:32 CLSRSC-473: Successfully exported the OCR

2017/02/21 11:27:39 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2017/02/21 11:27:39 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.

2017/02/21 11:27:39 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2017/02/21 11:27:39 CLSRSC-543:
 3. The downgrade command must be run on the node drnode1 with the '-lastnode' option to restore global configuration data.

2017/02/21 11:28:05 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/02/21 11:28:37 CLSRSC-474: Initiating upgrade of resource types

2017/02/21 11:29:48 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.3.0 -d 12.1.0.2.0 -p first'

2017/02/21 11:29:48 CLSRSC-475: Upgrade of resource types successfully initiated.

2017/02/21 11:30:23 CLSRSC-325:	Configure Oracle Grid Infrastructure for a Cluster ... succeeded

=====================================================================================================================

 

Below is the output of the execution of the rootupgrade.sh script on the second (last) node drnode2.

 

[root@drnode2 u01]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Check /u01/app/12.1.0.2/grid/install/root_drnode2.mydomain_2017-02-21_12-22-07.log for the output of root script
[root@drnode2 u01]#

==============================================================================================================

[root@drnode2 u01]# cat /u01/app/12.1.0.2/grid/install/root_drnode2.mydomain_2017-02-21_12-22-07.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/02/21 12:22:13 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/02/21 12:22:13 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.

2017/02/21 12:22:22 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.

2017/02/21 12:22:35 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/02/21 12:22:38 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/02/21 12:22:47 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/02/21 12:22:47 CLSRSC-363: User ignored prerequisites during installation


ASM configuration upgraded in local node successfully.

2017/02/21 12:23:11 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/02/21 12:25:21 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2017/02/21 12:26:47 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/02/21 12:34:02 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2017/02/21 12:34:31 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2017/02/21 12:34:31 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2017/02/21 12:37:03 CLSRSC-479: Successfully set Oracle Clusterware active version

2017/02/21 12:37:14 CLSRSC-476: Finishing upgrade of resource types

2017/02/21 12:37:27 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.3.0 -d 12.1.0.2.0 -p last'

2017/02/21 12:37:27 CLSRSC-477: Successfully completed upgrade of resource types

2017/02/21 12:38:31 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

=======================================================================================================

 

 

Now let's run "configToolAllCommands" on drnode1. This script performs post-installation tasks such as the creation of the Grid Infrastructure Management Repository database: a container database (SID "-MGMTDB") with a pluggable database named "drnode_scan", both visible in the dbca commands logged below.

 

[oracle@drnode1 bin]$ /u01/app/12.1.0.2/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u02/grid_install.rsp
Setting the invPtrLoc to /u01/app/12.1.0.2/grid/oraInst.loc

perform - mode is starting for action: configure

Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: UpdateNodelist data:
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.oui_loc:/u01/app/12.1.0.2/grid/oui
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.jre_loc:/u01/app/12.1.0.2/grid/jdk/jre
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.doNotUpdateNodeList:true
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.rootOwnedHome:true
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: OracleHomeToUpdate:/u01/app/11.2.0.3/grid;isCRS:false;isCFS:false;isLocal:false
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: From map: Hosts:[drnode1, drnode2] => Nodelist:[drnode1, drnode2]
Feb 21, 2017 1:39:07 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: Before calling api: Hosts:[drnode1, drnode2] => Nodelist:[drnode1, drnode2], update localnode? true
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: UpdateNodelist data:
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.oui_loc:/u01/app/12.1.0.2/grid/oui
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.jre_loc:/u01/app/12.1.0.2/grid/jdk/jre
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.doNotUpdateNodeList:true
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: oracle.installer.rootOwnedHome:
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: OracleHomeToUpdate:/u01/app/12.1.0.2/grid;isCRS:true;isCFS:false;isLocal:false
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: From map: Hosts:[drnode1, drnode2] => Nodelist:[drnode1, drnode2]
Feb 21, 2017 1:40:47 PM oracle.install.driver.oui.UpdateNodelistJob call
INFO: Before calling api: Hosts:[drnode1, drnode2] => Nodelist:[drnode1, drnode2], update localnode? true
Feb 21, 2017 1:44:03 PM oracle.install.config.crs.MgmtDBCDBInternalPlugIn invoke
INFO: MgmtDBCDBInternalPlugin: ... adding </oui_internal>
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn invoke
INFO: Executing MGMTDBCDB
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn invoke
INFO: Command /u01/app/12.1.0.2/grid/bin/dbca  -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATA -datafileJarLocation /u01/app/12.1.0.2/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck   -oui_internal
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn.handleProcess() entered.
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn: getting configAssistantParmas.
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn: checking secretArguments.
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: No arguments to pass to stdin
Feb 21, 2017 1:44:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn: starting read loop.
Feb 21, 2017 1:44:45 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Registering database with Oracle Grid Infrastructure
Feb 21, 2017 1:44:45 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Registering database with Oracle Grid Infrastructure
Feb 21, 2017 1:44:45 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: End of argument passing to stdin
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 5% complete
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 5% complete
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Copying database files
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Copying database files
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 7% complete
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 7% complete
Feb 21, 2017 1:44:46 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:46:25 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 9% complete
Feb 21, 2017 1:46:25 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 9% complete
Feb 21, 2017 1:46:25 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:46:27 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 16% complete
Feb 21, 2017 1:46:27 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 16% complete
Feb 21, 2017 1:46:27 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:46:47 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 23% complete
Feb 21, 2017 1:46:47 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 23% complete
Feb 21, 2017 1:46:47 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:47:37 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 30% complete
Feb 21, 2017 1:47:37 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 30% complete
Feb 21, 2017 1:47:37 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:48:57 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 37% complete
Feb 21, 2017 1:48:57 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 37% complete
Feb 21, 2017 1:48:57 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:49:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 41% complete
Feb 21, 2017 1:49:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 41% complete
Feb 21, 2017 1:49:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:49:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Creating and starting Oracle instance
Feb 21, 2017 1:49:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Creating and starting Oracle instance
Feb 21, 2017 1:49:03 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:49:40 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 43% complete
Feb 21, 2017 1:49:40 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 43% complete
Feb 21, 2017 1:49:40 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:50:22 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 48% complete
Feb 21, 2017 1:50:22 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 48% complete
Feb 21, 2017 1:50:22 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:50:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 49% complete
Feb 21, 2017 1:50:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 49% complete
Feb 21, 2017 1:50:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:51:43 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 50% complete
Feb 21, 2017 1:51:43 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 50% complete
Feb 21, 2017 1:51:43 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:53:07 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 55% complete
Feb 21, 2017 1:53:07 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 55% complete
Feb 21, 2017 1:53:07 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:53:08 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 60% complete
Feb 21, 2017 1:53:08 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 60% complete
Feb 21, 2017 1:53:08 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:54:25 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 61% complete
Feb 21, 2017 1:54:25 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 61% complete
Feb 21, 2017 1:54:25 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:54:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 64% complete
Feb 21, 2017 1:54:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 64% complete
Feb 21, 2017 1:54:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:54:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Completing Database Creation
Feb 21, 2017 1:54:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Completing Database Creation
Feb 21, 2017 1:54:26 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:54:27 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 68% complete
Feb 21, 2017 1:54:27 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 68% complete
Feb 21, 2017 1:54:27 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 1:54:34 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 79% complete
Feb 21, 2017 1:54:34 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 79% complete
Feb 21, 2017 1:54:34 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 89% complete
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 89% complete
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 100% complete
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 100% complete
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
Feb 21, 2017 2:00:17 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:00:18 PM oracle.install.config.crs.MgmtDBPDBInternalPlugIn invoke
INFO: MgmtDBPDBInternalPlugin: ... adding </oui_internal>
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn invoke
INFO: Executing MGMTDBPDB
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn invoke
INFO: Command /u01/app/12.1.0.2/grid/bin/dbca  -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName drnode_scan -createPDBFrom RMANBACKUP -PDBBackUpfile /u01/app/12.1.0.2/grid/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile /u01/app/12.1.0.2/grid/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck -oui_internal
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn.handleProcess() entered.
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn: getting configAssistantParmas.
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn: checking secretArguments.
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: No arguments to pass to stdin
Feb 21, 2017 2:00:18 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: ... GenericInternalPlugIn: starting read loop.
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Creating Pluggable Database
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Creating Pluggable Database
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: End of argument passing to stdin
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 4% complete
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 4% complete
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 12% complete
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 12% complete
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 21% complete
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 21% complete
Feb 21, 2017 2:01:21 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:04:28 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 38% complete
Feb 21, 2017 2:04:28 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 38% complete
Feb 21, 2017 2:04:28 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:04:38 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 55% complete
Feb 21, 2017 2:04:38 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 55% complete
Feb 21, 2017 2:04:38 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:05:10 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 85% complete
Feb 21, 2017 2:05:10 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 85% complete
Feb 21, 2017 2:05:10 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:05:10 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Completing Pluggable Database Creation
Feb 21, 2017 2:05:10 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Completing Pluggable Database Creation
Feb 21, 2017 2:05:10 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:07:31 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: 100% complete
Feb 21, 2017 2:07:31 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: 100% complete
Feb 21, 2017 2:07:31 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0
Feb 21, 2017 2:07:31 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Read: Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/drnode_scan/_mgmtdb.log" for further details.
Feb 21, 2017 2:07:31 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
WARNING: Skipping line: Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/drnode_scan/_mgmtdb.log" for further details.
Feb 21, 2017 2:07:31 PM oracle.install.driver.oui.config.GenericInternalPlugIn handleProcess
INFO: Exceeded the number of arguments passed to stdin. CurrentCount:1 Total args:0

perform - mode finished for action: configure

You can see the log file: /u01/app/12.1.0.2/grid/cfgtoollogs/oui/configActions2017-02-21_01-39-03-PM.log

 

 

After executing all the required scripts, let's check the new CRS version on both the nodes.

 

On drnode1:

 

[oracle@drnode1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
[oracle@drnode1 ~]$

 

[oracle@drnode1 ~]$ /u01/app/12.1.0.2/grid/bin/crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@drnode1 ~]$

 

[oracle@drnode1 ~]$ /u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [drnode1] is [12.1.0.2.0]

 

On drnode2:

 

[oracle@drnode2 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
[oracle@drnode2 ~]$

 

[oracle@drnode2 ~]$ /u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [drnode2] is [12.1.0.2.0]

 

[oracle@drnode2 ~]$ /u01/app/12.1.0.2/grid/bin/crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@drnode2 ~]$

 

Verify that the ASM instances are up and that all the diskgroups are MOUNTED on both the nodes (a quick way to check is sketched below).
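
A minimal sketch of such a verification from the new 12c GI home (generic srvctl and SQL checks; the output is not reproduced here):

[oracle@drnode1 ~]$ /u01/app/12.1.0.2/grid/bin/srvctl status asm
[oracle@drnode1 ~]$ /u01/app/12.1.0.2/grid/bin/srvctl status diskgroup -g DATA

SQL> select inst_id, name, state from gv$asm_diskgroup order by inst_id;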

 

Finally, you can detach the old GI home (11.2.0.3) from the Oracle inventory (oraInventory).

 

[oracle@drnode1 ~]$ cd /u01/app/11.2.0.3/grid/oui/bin/
[oracle@drnode1 bin]$ ./runInstaller -detachHome -silent ORACLE_HOME=/u01/app/11.2.0.3/grid
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10236 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.

 

 


January 16, 2017 / Shivananda Rao P

Cache Fusion – Internals of Block Transfer within RAC DB instances

We all know that each instance of a RAC database has its own buffer cache in its SGA, and all these caches put together form the fused global cache used by Cache Fusion. Blocks are transferred between the RAC instances over a high-speed private interconnect. To make sure that no two (or more) instances update the same block at the same time, and to track what each instance is doing with a block, cache coherency and consistency need to be maintained. Oracle RAC maintains this coherency and consistency using two main services: GCS (Global Cache Service) and GES (Global Enqueue Service).

 

GCS, with the help of the LMS processes, coordinates between the RAC instances and records the status of the cached data blocks in the GRD (Global Resource Directory). The GRD is distributed across all the active instances and contains information about each cached block, such as the block number, which instance owns the current version of the block, the mode of the block and the role of the block.

 

GES maintains the coherency of enqueues on the dictionary and library cache. It keeps track of all global enqueues of the resources in the RAC environment.

 

Based on the operation a resource holder requests, a data block can be held in any of the following three modes.
 
1. NULL (N): A null mode indicates that the holder has no access rights on the block.

2. Exclusive (X): An exclusive mode signifies exclusive access to the block. The resource holder intends to perform a write operation on the block, and no other resource can write to it. However, other resources can still perform read operations on the block.

3. Shared (S): A shared mode indicates that the resource holder has a shared lock on the block and is performing a read operation. As the name suggests, since the lock is shared, any other resource can also read the block.

 

In addition to the modes, GCS also assigns roles to the resources.
 
1. Local role: When a data block is first read from disk into the cache, its role is said to be LOCAL. This also means that no modified/dirty copy of the block exists in any other instance's cache.

2. Global role: When a data block is acquired from a remote instance and that block has already been modified by the remote instance, the role of the block is said to be GLOBAL. Likewise, if modified/dirtied copies of the block exist across multiple instances, the role of the block is considered GLOBAL.

 

Another important concept that we need to know is the "Past Image" (PI). As the name says, an image copy of the modified/updated data block is saved by the modifying instance before passing the block on to another requesting instance that wants to perform either a read or a write operation. The PI is retained so that, in case of an instance failure, GCS can start recovery from that PI block, thereby reducing the recovery time. Once the latest version of the data block is written to disk (at a checkpoint), GCS informs all the instances holding a PI to discard those images.

 

All the above details speak about data blocks, but when a block is read into memory it is stored in a buffer. The state of a buffer is described by a three-character notation made up of:
 

1. Lock mode – N (NULL), S (Shared), X (Exclusive)

2. Role – L (Local), G (Global)

3. Past Image (PI) – a number indicating the number of past images held

 

The GV$BH view can be used to check the status of a block on each instance of the database. The relevant status values are:

1. "cr": This represents a consistent-read copy, corresponding to a NULL lock mode on the block.

2. "SCUR": This represents that a SHARED lock is held on the block by that particular instance.

3. "XCUR": This represents that an EXCLUSIVE lock is held on the block by that particular instance.
 
Let's consider a 3-node RAC as an example to see how block transfer works, with user A connected to instance 1, user B connected to instance 2 and user C connected to instance 3.
 
Assuming that the table EMP under the schema BTTEST has been freshly created and no connections from any of the 3 instances have accessed it yet, the following scenarios have been defined. But before moving on to the scenarios, let's capture the block-related information for a row of this EMP table. This can be done with the help of dbms_rowid.
 


SQL> select * from bttest.emp;

CODE       NAME
---------- --------------------------------------------------
100        JAMES
200        SCOTT
300        SMITH
400        JOHN

 


SQL>select owner,object_name,data_object_id from dba_objects where object_name='EMP';

OWNER     OBJECT_NAME  DATA_OBJECT_ID
--------- ------------ ----------------
BTTEST    EMP          91791

 


SQL>select dbms_rowid.rowid_relative_fno(rowid) FILE#,dbms_rowid.rowid_block_number(rowid) BLOCK# from bttest.emp where code=300;

FILE#      BLOCK#
---------- ----------
11         135

 
Considering block 135, which holds the row of the EMP table whose CODE column value is 300, the following scenarios are explained.
 
Scenario 1:

User B on instance 2 performs a SELECT operation on this table, which accesses the contents of block 135.

SQL statement run: select * from bttest.emp where code=300;

1. Since no other connections from other instances have accessed this table previously, the data block 135 needs to be read from the disk and written on to the buffer.
 
2. User B will now hold a Shared (S) lock on this block as it is performing only a READ and not a WRITE operation. Since the block is being read from disk for the first time and no dirty copy of it exists in any buffer cache, the role of the block is LOCAL (L). The third consideration is that no modification has been done to this block by user B, hence there are no PAST IMAGES (0). With this, the state of the block on instance 2 would be SL0, with nothing recorded for instances 1 and 3.
 
Querying the GV$BH view for the details of the block, we see that its status on instance 2 is SCUR, which indicates that the block is held under a SHARED lock by instance 2. There is no information for instances 1 and 3 as they haven't accessed this block yet.
 


SQL> select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        scur       N 91791      2

 
===========================================================================
 
Scenario 2:

Now user C on instance 3 performs an UPDATE operation on the row in the block 135. The SQL statement run is:

update bttest.emp set name='UPDATE3' where code=300;

Since the block is already available in the cache of instance 2, there is no need to read it from disk again, thereby avoiding a physical read.
 
1. Instance 3 would send the request to GCS, which knows which instance currently owns the block (using the information in the GRD).
 
2. GCS forwards the request to instance 2, which is currently holding the SHARED lock.
 
3. Instance 2 would downgrade its lock on the block from SHARED to NULL and, since it has made no modification to this block, the role of the block remains LOCAL and the Past Image (PI) count is 0. Thereby, the state of the block on instance 2 would be NL0.
 
4. Instance 2 would then send the requested block to instance 3 and update the GCS accordingly.
 
5. Instance 3 will now acquire an EXCLUSIVE (X) lock on the block. The role of the block still remains LOCAL, as the instance previously holding it (instance 2) hasn't made any modification to it, and so the PI count stays 0. The state of the block on instance 3 would be XL0.
 
6. Instance 1 has not come into the picture yet with respect to this block.
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        cr         N 91791      2
11         135        xcur       Y 91791      3

 
From the above result, we can see that the buffer status on instance 2 is CR (NULL) and that it has made no modifications to the block, while on instance 3 the buffer status is XCUR (EXCLUSIVE) and instance 3 has dirtied (modified) the block (the DIRTY column shows the value "Y").
 
===========================================================================
 
Scenario 3:
 
User A on instance 1 runs a “select” statement to access the row in block 135.
 
1. Instance 1 requests GCS for the data block. GCS knows that instance 3 owns the block and has an EXCLUSIVE lock on it.
 
2. GCS forwards the request to instance 3.
 
3. Instance 3 would now bring down its lock on the requested block from EXCLUSIVE to SHARED. The lock would have been lowered to NULL had instance 1 requested the block for modification (a WRITE operation); instead, it has requested a READ operation. Since instance 3 modified the block in the previous scenario, the role of the block would be GLOBAL, and GCS is informed that the requesting instance needs to hold this block in the GLOBAL role.
 
4. Since instance 3 has modified the block in scenario 2, it retains a copy (PI) of the modified block and sends the requested block. Instance 3 will now have SG1 mode on the block.
 
5. Instance 1 receives the block and will hold a Shared (S) lock with the GLOBAL role (as informed previously) and 0 Past Images (no past images exist for this block on instance 1). So, instance 1 will now have SG0 mode on the block.
 
6. The mode of the block on instance 2 will be NG0. The role of the block will be changed to GLOBAL as instance 3 has modified the block in the previous scenario.
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        cr         N 91791      2
11         135        scur       N 91791      1
11         135        scur       Y 91791      3

 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135 and status <> 'cr';

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        scur       N 91791      1
11         135        scur       Y 91791      3

 
Querying the GV$BH view, we can see that both instances 1 and 3 hold a SHARED lock on block 135, whereas instance 2 holds it only in CR mode (NULL lock). Since instance 3 dirtied this block previously, the value of the DIRTY column against instance 3 remains Y (YES).
 
===========================================================================
 
Scenario 4:
 
Now, I run an “Update” statement on the same block through User B from instance 2.
 
1. Instance 2 requests GCS for the data block. The block was last accessed by instance 1 and GCS would forward the request to instance 1.
 
2. Instance 1 would send the block to the requesting instance 2 through GCS. But before passing on the block, instance 1 would downgrade its lock on the block from SHARED to NULL, with the role of the block being GLOBAL and no Past Images held. The mode of the block on instance 1 would now be NG0.
 
3. Since instance 2 has requested a WRITE operation, instance 3 will also downgrade its lock on the block from SHARED to NULL. The role of the block remains GLOBAL, and since it holds 1 PAST IMAGE (as per scenario 3), the mode of the block on instance 3 will be NG1.
 
4. Instance 2 will acquire an EXCLUSIVE (X) lock on the block, with the role of the block retained as GLOBAL and the PI count remaining 0. The PI count remains 0 for instance 2 because this instance has never modified this block and so retains no copies of any modifications. The mode of the block on instance 2 would be XG0.
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        xcur       Y 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      2
11         135        pi         Y 91791      3
11         135        cr         N 91791      3

 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135 and status <> 'cr';

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        pi         Y 91791      3
11         135        xcur       Y 91791      2

 
Querying GV$BH, one can see that the status of the buffer on instance 2 is XCUR with the value of the DIRTY column being "Y". It also clearly shows a buffer status of "PI" with the DIRTY column being "Y" against instance 3; this relates to scenario 2, where instance 3 performed an UPDATE operation on block 135. The "CR" values in the STATUS column against instance 2 or instance 3 represent the status of the block in the previous scenarios and not the current one.
 
===========================================================================
 
Scenario 5:
 
User C on instance 3 runs a "SELECT" statement to access the row in block 135.
 
1. Instance 3 requests GCS for the block 135. GCS then forwards the request to instance 2 which is holding the block with an EXCLUSIVE lock.
 
2. Instance 2 downgrades its lock from EXCLUSIVE to SHARED, flags that it modified the block (as stated in scenario 4) and thereby declares that it has 1 PAST IMAGE. However, the role of the block still remains GLOBAL, and the mode of this block on instance 2 would be SG1.
 
3. The block is then received by instance 3, and since it requested a READ operation, it acquires a SHARED lock on the block. The role of the block would be GLOBAL (as changes have been made to this block by remote instances too). The PAST IMAGE count for this block on instance 3 would be 1 (reason: instance 3 has made only one change to this block, as explained in scenario 2). Finally, the mode of the block on instance 3 will be SG1.
 
4. Since instance 1 hasn't come into the picture in this scenario, it retains the mode it had in the previous scenario, i.e. NG0.
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        scur       Y 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        pi         Y 91791      3
11         135        scur       N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3

11 rows selected.

 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135 and status <> 'cr';

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        pi         Y 91791      3
11         135        scur       N 91791      3
11         135        scur       Y 91791      2

 
From the above result, we can see that the status of the block against instance 2 and instance 3 is SCUR (SHARED lock), while instance 1 holds it only in NULL mode (cr).
 
===========================================================================
 
Scenario 6:
 
User A on instance 1 performs an UPDATE operation on the same block 135.
 
SQL statement run: update bttest.emp set name='UPDATE2' where code=300;
 
1. Instance 1 requests GCS for the block. GCS knows that the block was last modified by instance 2 and that instance 2 holds the current copy. Hence, it requests instance 2 to transfer the block to instance 1.
 
2. Since instance 1 wants to perform a WRITE operation on the block, it would acquire an EXCLUSIVE lock. This in turn means that all other instances must downgrade their locks on this block to NULL. The role of the block would be GLOBAL, and the PAST IMAGE count of this block on instance 1 would be 0, as this is the first time this instance has performed an UPDATE on this block, so no PAST IMAGES exist. As a result, the mode of the block on instance 1 would be XG0.
 
3. As explained in the previous step, instance 2 would hold a NULL lock on the block with the GLOBAL role. The PAST IMAGE count would still be 1 because of the update operation it performed in scenario 4. The mode of block 135 with respect to instance 2 would be NG1.
 
4. Similarly the mode of the block on instance 3 would be NG1.
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        pi         Y 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      2
11         135        pi         Y 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        xcur       Y 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1

13 rows selected.

 

SYS@srprim1>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135 and status <> 'cr';

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        pi         Y 91791      2
11         135        pi         Y 91791      3
11         135        xcur       Y 91791      1

 
We now have the "PI" status in a dirty state for instance 2 (from scenario 4) and for instance 3 (from scenario 2), while for instance 1 the block status is XCUR (EXCLUSIVE).
 
===========================================================================
 
Scenario 7:
 
User A on instance 1 performs another UPDATE operation on the same block.
 
SQL statement run: update bttest.emp set name='UPDATE11' where code=300;
 
1. GCS knows that the last update operation on this block was performed by instance 1 itself, and instance 1 has now again requested an EXCLUSIVE lock.
 
2. The mode of the block on instance 1 would be XG1 (the Past Image count is 1 for instance 1, as it previously performed an UPDATE operation in scenario 6).
 
3. The mode of the block with respect to instance 2 and 3 would remain as in the previous stage (NG1).
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        pi         Y 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      2
11         135        xcur       Y 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        pi         Y 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3

14 rows selected.

 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135 and status <> 'cr';

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        pi         Y 91791      2 
11         135        xcur       Y 91791      1
11         135        pi         Y 91791      3

 
From the above result, instance 1 holds an XCUR status on block 135, which indicates an EXCLUSIVE lock, while the other two instances (2 and 3) hold NULL locks and retain their past images (PI).
 
===========================================================================
 
Scenario 8:
 
User B on instance 2 performs a CHECKPOINT. (This signifies that the dirty blocks in the buffer cache need to be written to disk.)
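
For reference, the checkpoint in this scenario can be triggered with a statement along the following lines. This is shown only as an assumption of what was run; a plain ALTER SYSTEM CHECKPOINT requests a checkpoint across all instances, while the LOCAL variant restricts it to the issuing instance.

SQL> -- connected to instance 2
SQL> alter system checkpoint local;

System altered.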
 
1. Instance 2 requests GCS for a checkpoint.
 
2. GCS forwards the request to instance 1, which held the block in EXCLUSIVE mode in scenario 7, to write the block to disk.
 
3. Instance 1 writes the block to disk and informs GCS of the completion of the operation, but still retains its EXCLUSIVE lock on the block.
 
4. GCS then informs all the instances holding PIs to discard or flush those PIs and requests the instances to change the role of the block from GLOBAL to LOCAL.
 
5. Thus, instance 1 will now have the block in mode XL0, while instance 2 and instance 3 hold it with a NULL lock.
 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135;

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        cr         N 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      2
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        cr         N 91791      3
11         135        xcur       N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1
11         135        cr         N 91791      1

14 rows selected.

 

SQL>select file#,block#,status,dirty,objd,inst_id from gv$bh where objd=91791 and block#=135 and status <> 'cr';

FILE#      BLOCK#     STATUS     D OBJD       INST_ID
---------- ---------- ---------- - ---------- ----------
11         135        xcur       N 91791      1

 
Here, one can see that the block is currently held by instance 1 in EXCLUSIVE mode while all other instances hold a NULL lock on this block. All the past images that instances 2 and 3 had have been flushed.
 

 

COPYRIGHT

© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.

 

 

DISCLAIMER

The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organisation. The views expressed by visitors on this blog are theirs solely and may not reflect mine

 

July 25, 2016 / Shivananda Rao P

Overview of Global Data Services in Oracle 12c

This article gives you an overview of a new feature in Oracle 12c: Global Data Services. We are already familiar with database services in Oracle. These services provide workload management by ensuring that clients connect to the optimal instance offering the service. They also serve the purpose of high availability by failing over client connections to the surviving instances that offer the same service. This is referred to as local data services.

 

Oracle 12c came up with a new feature, Global Data Services (GDS), which extends the above with automated and centralized workload management across a set of replicated databases (for example, Data Guard or GoldenGate configurations). In addition, GDS provides replication-lag-based workload routing, role-based global services and region-based global services. Note that global services can be configured only for databases of version 12c; for databases prior to 12c, only local data services can be configured.

 

One of the major questions that comes up is the difference between local and global data services. A global data service is created across a set of multiple databases, whereas a local data service is created only across the instances of a single database. Local data services need more manual intervention; for example, a local data service for a standby setup must be created first on the primary database and then on the standby database, as illustrated below. With GDS, it is managed globally as a single service.
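
To illustrate the manual steps involved with local data services, a role-based service in a Data Guard setup is typically added separately on the primary and on the standby database with srvctl, roughly as below. This is a hedged sketch: the host names, the database names salesdb/salesdr, the instance names and the service name srv_sales are all hypothetical placeholders.

[oracle@node1 ~]$ srvctl add service -db salesdb -service srv_sales -preferred salesdb1,salesdb2 -role PRIMARY
[oracle@drnode1 ~]$ srvctl add service -db salesdr -service srv_sales -preferred salesdr1,salesdr2 -role PRIMARY

With GDS, the equivalent global service is defined once at the GDS pool level, and the GSMs route connections to whichever database in the pool currently satisfies the service requirements.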

 

A global data service configuration consists of the following major components.

 

GDS Pool: A GDS pool is a set of databases in the GDS configuration that contain replicated data, provide unique global data services and can be administered by a different administrator.
A database can belong to only one GDS pool. All databases in a GDS pool need not provide the same global service, but all databases that provide the same global service must belong to the same GDS pool.

 

GDS Catalog: Just as the RMAN catalog database acts as a repository for the backup configuration of registered databases, a GDS catalog is a repository that stores the GDS configuration. This catalog must reside in an Oracle database of version 12c.
This database may reside inside or outside the GDS configuration. Please note that a catalog is associated with only one GDS configuration.

 

GSM (Global Service Manager): As the name suggests, the GSM acts as the manager of global services, handling their failover and load balancing.
It acts as a middle layer between the clients and the databases, just as a remote listener does for RAC databases. In addition, it measures the network latency between the regions of the configuration by collecting performance metrics from the databases in the configuration, monitors the databases and global services in the GDS configuration, and notifies clients when these fail.

 

A client first connects to the GSM and requests a connection to a global service. The GSM then forwards the connection request to the database instance in the GDS configuration that is offering the requested global service.
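
By way of example, a client connect descriptor in such a setup points at the GSM listener endpoints rather than at individual database listeners. The entry below is purely illustrative: the hostnames gsmhost1/gsmhost2, the port 1522 (the commonly used default GSM listener port) and the service name sales_srvc are assumptions.

sales_srvc =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsmhost1)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsmhost2)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = sales_srvc)
    )
  )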

 

GDS Region: A GDS region is a set of databases within the GDS configuration, together with the clients, that share close network proximity (very low network latency). A region can contain multiple GDS pools.

 

You can configure, modify, start or stop global services using the GDSCTL utility. To use it, you need to download and install the Oracle Global Service Manager software from the Oracle site.
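
For illustration, once the GSM software is installed and a GDS pool is in place, managing a global service from GDSCTL looks roughly like the following. This is a hedged sketch: the pool name salespool and the service name sales_srvc are placeholders, and the exact options (such as the -lag threshold used for replication-lag-based routing) should be verified against the GDSCTL reference.

GDSCTL> add service -service sales_srvc -gdspool salespool -preferred_all
GDSCTL> start service -service sales_srvc -gdspool salespool
GDSCTL> modify service -service sales_srvc -gdspool salespool -lag 30
GDSCTL> stop service -service sales_srvc -gdspool salespool
GDSCTL> services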

 

In the coming posts, we shall look at installing the GSM software, configuring global services for the databases, how global services work with Data Guard, and how effective they are when there is a replication lag.

 

 


 

July 9, 2016 / Shivananda Rao P

Upgrade OEM 12c to 13c

In this article, I'm demonstrating an upgrade from Cloud Control 12c to 13c on a Linux machine. Take a look at the upgrade document at https://docs.oracle.com/cd/E63000_01/EMUPG/preface.htm#EMUPG101 before proceeding further and make sure that all the prerequisites are met.

 

A few things to consider before you proceed with the upgrade.

 

1. The OMS 13c repository database needs to be on version 12.1.0.2. So, if the repository database is on a lower release, you need to upgrade it before upgrading Cloud Control.
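
As a quick check, the repository database version can be confirmed with a query such as the following.

SYS@omsdb> select instance_name, version from v$instance;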

 

2. Cloud Control 13c is not supported on OEL/RHEL 5, and neither are the 13c agents. You cannot upgrade Cloud Control 12c to 13c on a Linux machine below version 6.

 

The environment details are as below:

 

The environment used here has Cloud Control 12c installed on OEL 6, with a repository database of version 12.1.0.2 (a non-CDB) and 12c agents deployed on Linux machines of release 6 (OEL 6).

 

Cloud Control 12c OMS hostname: ora1-2
Repository database version: 12.1.0.2

 

Make sure that the "COMPATIBLE" parameter on the OMS repository database is set to 12.1.0.2.0.

 

SYS@omsdb> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      12.1.0.2.0
noncdb_compatible                    boolean     FALSE
SYS@omsdb>

 

If the adaptive optimizer feature (optimizer_adaptive_features) is enabled, then it needs to be disabled on the OMS repository database before upgrading it.

 

SYS@omsdb> show parameter adaptive

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_features          boolean     TRUE
optimizer_adaptive_reporting_only    boolean     FALSE
parallel_adaptive_multi_user         boolean     TRUE
SYS@omsdb>
SYS@omsdb> alter system set optimizer_adaptive_features=false;

System altered.

SYS@omsdb> show parameter adaptive

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_features          boolean     FALSE
optimizer_adaptive_reporting_only    boolean     FALSE
parallel_adaptive_multi_user         boolean     TRUE

 

If there are any invalid objects in the OMS repository database, then those need to be recompiled and made valid.
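
For example, the invalid objects can be listed and recompiled with the standard Oracle-supplied utlrp.sql script, along these lines.

SQL> select owner, object_name, object_type
     from   dba_objects
     where  status = 'INVALID';

SQL> @?/rdbms/admin/utlrp.sql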

 

Take a backup of the current OMS. Refer to the documentation at http://docs.oracle.com/cd/E24628_01/install.121/e24089/ha_backup_recover.htm#EMADV9639 on how to back up the OMS.

 

Copy the EMKEY from the existing OMS to the existing management repository. The EMKEY is the encryption key used by Enterprise Manager to encrypt/decrypt sensitive data such as passwords and preferred credentials.

 

[oracle@ora1-2 ~]$ echo $OMS_HOME
/u02/oms12c/oms

[oracle@ora1-2 ~]$ $OMS_HOME/bin/emctl config emkey -copy_to_repos
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".


[oracle@ora1-2 ~]$ $OMS_HOME/bin/emctl status emkey
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey  is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".
[oracle@ora1-2 ~]$

 

Stop all the OMS components, which include the WebTier, the OMS and the AdminServer.

 

[oracle@ora1-2 u03]$ /u02/oms12c/oms/bin/emctl stop oms -all
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Stopping WebTier...
WebTier Successfully Stopped
Stopping Oracle Management Server...
Oracle Management Server Successfully Stopped
AdminServer Successfully Stopped
Oracle Management Server is Down
[oracle@ora1-2 u03]$

 

If there is an agent configured for the OMS server, then stop that too.

 

[oracle@ora1-2 ~]$ /u02/12cagent/core/12.1.0.4.0/bin/emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

 

Copy all the EM13c software parts into a staging directory. I have them copied into directory “/u03/em13csoftware”.

 

[oracle@ora1-2 u03]$ ls -lrt /u03/em13csoftware | grep -i "em"
-rw-r--r--. 1 oracle oinstall 1315250699 Jun 11 15:54 em13100_linux64-2.zip
-rw-r--r--. 1 oracle oinstall 2145473190 Jun 11 16:05 em13100_linux64-3.zip
-rw-r--r--. 1 oracle oinstall 2141357330 Jun 11 16:15 em13100_linux64-4.zip
-rw-r--r--. 1 oracle oinstall  331590923 Jun 11 16:16 em13100_linux64-5.zip
-rw-r--r--. 1 oracle oinstall  841114290 Jun 11 16:20 em13100_linux64.bin

 

Set the executable permission on the file "em13100_linux64.bin", which is the installer binary.
Execute the em13100_linux64.bin file to launch the 13c GUI installer.

 

[oracle@ora1-2 em13csoftware]$ chmod +x em13100_linux64.bin
[oracle@ora1-2 em13csoftware]$ ls -lrt em13100_linux64.bin
-rwxr-xr-x. 1 oracle oinstall 841114290 Jun 11 16:20 em13100_linux64.bin

 

[oracle@ora1-2 em13csoftware]$ ./em13100_linux64.bin
0%...............................................................100%
Launcher log file is /tmp/OraInstall2016-06-12_11-28-00AM/launcher2016-06-12_11-28-00AM.log.
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 3180.718 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 10236 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 22122 MB    Passed


Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-06-12_11-28-00AM
====Prereq Config Location main===
/tmp/OraInstall2016-06-12_11-28-00AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2016-06-12_11-28-00AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2016-06-12_11-28-00AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_Wr9ToQ/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/u03/em13csoftware
EMFileLoc:/tmp/OraInstall2016-06-12_11-28-00AM/oui/em/
ScratchPathValue :/tmp/OraInstall2016-06-12_11-28-00AM

 

Provide the email ID and password if you wish to receive the support updates, and click NEXT.

 


 

I do not want to search for software updates, so I'm choosing the SKIP option and clicking NEXT.

 


 

Run the prerequisite checks and fix any discrepancies before proceeding further.

 


 

Since we are upgrading to EM 13c, select the "Upgrade an existing Enterprise Manager system" option and, under this, select "One-System Upgrade" along with the 12c EM home.

 


 

Enter the middleware home location. I’m installing 13c under the location “/u02/13c”.

 


 

Oracle automatically captures the connect description details for the repository database. Provide the SYS and SYSMAN passwords.
Also confirm that the Management Repository is backed up by checking the corresponding box. If you wish to stop Enterprise Manager for post-upgrade maintenance, check "Disable DDMP jobs" and click the NEXT button; in that case, the post-upgrade maintenance activity needs to be carried out manually. DDMP is the Deferred Data Migration Process, which migrates data from the format of the previous Enterprise Manager release to the format used in 13c.

 


 

If any prerequisite checks fail at the OMS repository database level, review the recommendations provided. Click YES if you would like the installer to take the necessary corrective actions, or NO to fix them manually.

 


 

Review the information on the plugins that will be upgraded and click Next.

 


 

Select the additional plugins that you want to deploy and click Next.

 


 

Fill in the WebLogic Server details by specifying the password for the WebLogic user name and providing the OMS Instance Base Location.

 


 

If you want to configure and enable BI publisher, you can select it here, else click Next.

 


 

Review the default ports listed and click Next.

 


 

Review the list of information provided and click on Upgrade to begin the upgrade.

 


 

Run the “allroot.sh” script as ROOT user when prompted.

 

[root@ora1-2 ~]# /u02/13c/allroot.sh

Starting to execute allroot.sh .........

Starting to execute /u02/13c/root.sh ......
/etc exist
/u02/13c
Finished execution of  /u02/13c/root.sh ......

 


 

Review and note down the URLs.

 


 

Log in to EM 13c through the browser with SYSMAN as the user and the password that was being used previously.

 


 

Accept the License Agreement to proceed further.

 


 

Review the OEM 13c summary page.

 


 

If you had disabled the DDMP jobs while upgrading EM, you should carry out the post-upgrade tasks manually.

 

Log in to EM 13c through the browser. On the summary page, click SETUP, select "Manage Cloud Control" and then the "Post Upgrade Tasks" option.

 


 

Click the “Start” option to start the post upgrade tasks.

 


 

Review the status of the post upgrade tasks.

 


 

Upgrading a 12c agent to 13c:

 

As said previously, 13c agents can be configured only on Linux target machines of version 6 or higher. So, if you already have a 12c agent installed on a Linux machine below version 6, that agent cannot be upgraded to a 13c agent. But you can still use the old 12c agent with EM 13c.

 

For the purpose of this demo, I have a 12c agent installed on my OMS host, which is on OEL 6. So I'm upgrading this agent to 13c.

 

Agent Host name: ora1-2
Agent Version: 12c
Agent Host Flavor: OEL 6

 

Log in to EM 13c through the browser. On the summary page, click SETUP, select "Manage Cloud Control" and then the "Upgrade Agents" option.

 

Click the Add option under "Agents for upgrade" to view the list of agents that can be upgraded to 13c.

 


 

Select the agent you desire to upgrade and click OK.

 


 

Review the information and click on Submit to upgrade the agent.

 


 

After the agent upgrade, review the status manually.

 

The 12c agent is installed under the following location:

 

[oracle@ora1-2 bin]$ cd /u02/12cagent/
[oracle@ora1-2 12cagent]$ ls -lrt
total 32
drwxr-xr-x.  3 oracle oinstall 4096 May 24  2014 core
drwxr-xr-x. 12 oracle oinstall 4096 Jun  7 13:24 plugins
-rw-rw-r--.  1 oracle oinstall  179 Jun 13 20:30 agentimage.properties
-rw-r--r--.  1 oracle oinstall  693 Jun 13 20:35 agentInstall.rsp
drwxr-xr-x.  3 oracle oinstall 4096 Jun 13 20:35 backup_agtup
drwxr-xr-x.  5 oracle oinstall 4096 Jun 13 20:40 sbin
drwxr-xr-x.  9 oracle oinstall 4096 Jun 13 20:49 agent_inst
drwxr-xr-x. 28 oracle oinstall 4096 Jun 13 20:50 agent_13.1.0.0.0

 

The agent upgrade installer creates a new directory called "agent_13.1.0.0.0" under the previous 12c agent base directory.

 

[oracle@ora1-2 12cagent]$ cd agent_13.1.0.0.0/bin/
[oracle@ora1-2 bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 13.1.0.0.0
OMS Version            : 13.1.0.0.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u02/12cagent/agent_inst
Agent Log Directory    : /u02/12cagent/agent_inst/sysman/log
Agent Binaries         : /u02/12cagent/agent_13.1.0.0.0
Core JAR Location      : /u02/12cagent/agent_13.1.0.0.0/jlib
Agent Process ID       : 28044
Parent Process ID      : 27922
Agent URL              : https://ora1-2.mydomain:3872/emd/main/
Local Agent URL in NAT : https://ora1-2.mydomain:3872/emd/main/
Repository URL         : https://ora1-2.mydomain:4903/empbs/upload
Started at             : 2016-06-13 20:40:14
Started by user        : oracle
Operating System       : Linux version 2.6.32-71.el6.x86_64 (amd64)
Number of Targets      : 36
Last Reload            : 2016-06-13 20:43:33
Last successful upload                       : 2016-06-13 20:50:53
Last attempted upload                        : 2016-06-13 20:50:53
Total Megabytes of XML files uploaded so far : 0.29
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 30.18%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2016-06-13 20:50:38
Last successful heartbeat to OMS             : 2016-06-13 20:50:38
Next scheduled heartbeat to OMS              : 2016-06-13 20:51:39

---------------------------------------------------------------
Agent is Running and Ready
[oracle@ora1-2 bin]$

 

 
