Background:
We have to remove a node (db05) from an Oracle 10gR2 RAC cluster.
OS: Red Hat EL4
Node names:
db01
db02
db05
Storage: ASM (instance name +ASM)
The three main steps that need to be followed are:
A. Remove the instance using DBCA.
B. Remove the node from the cluster.
C. Reconfigure the OS and remaining hardware.
Here is a breakdown of the above steps.
A. Remove the instance using DBCA.
————————————–
1. Verify that you have a good backup of the OCR (Oracle Cluster Registry) using ocrconfig -showbackup or the dd command.
Run the backup manually with dd for OCR and Voting disk:
[oracle@db01 ocr_voting]$ pwd
/databases/oracle/backup/ocr_voting
[oracle@db01 ocr_voting]$ dd if=/dev/raw/raw1 of=before_del_3rdnode.ocr
[oracle@db01 ocr_voting]$ dd if=/dev/raw/raw2 of=before_del_3rdnode.vote
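If you also want to check the automatically managed OCR backups, ocrconfig -showbackup and ocrcheck can be run from the Clusterware home. A minimal sketch, assuming the CRS home /databases/oracle/crs used later in this document (the backup locations listed will be specific to your installation):
[oracle@db01 ocr_voting]$ /databases/oracle/crs/bin/ocrconfig -showbackup
[oracle@db01 ocr_voting]$ /databases/oracle/crs/bin/ocrcheck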
2. Run DBCA from one of the nodes you are going to keep (db01). Leave the database up and also leave the departing instance up and running.
3. Choose “Instance Management”
4. Choose “Delete an instance”
5. On the next screen, select the cluster database (p10c) from which you will delete an instance. Supply the system privilege username and password.
6. On the next screen, a list of cluster database instances will appear. Highlight the instance you would like to delete then click next.



7. If you have services configured, reassign them. Modify each service so that it can run on one of the remaining instances, and set “Not used” for the instance that is to be deleted. Click Finish.
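If you are not sure which services exist and where they are currently configured to run, the standard srvctl service commands can be used to check before (or after) the DBCA change; a small sketch, assuming the cluster database name p10c from this environment:
[oracle@db01 ~]$ srvctl config service -d p10c
[oracle@db01 ~]$ srvctl status service -d p10c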
8. If your database is in archive log mode you may encounter the following errors (10gR2 does not have this issue):
ORA-00350
ORA-00312
This can occur because DBCA cannot drop the current log, since it still needs archiving. The issue is fixed in the 10.1.0.3 patchset; prior to that patchset, click the Ignore button and, once DBCA completes, manually archive the logs for the deleted instance and drop the log group:
SQL> alter system archive log all;
SQL> alter database drop logfile group 2;
9. Verify that the dropped instance’s redo thread has been removed by querying v$log. If for any reason the redo thread is not disabled, disable it:
SQL> alter database disable thread 2;
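For the verification, queries such as the following against the standard V$LOG and V$THREAD views can be used (thread 2 is the thread of the deleted instance in this example):
SQL> select thread#, group#, status from v$log order by thread#, group#;
SQL> select thread#, status, enabled from v$thread;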
10. Verify that the instance was removed from the OCR (Oracle Cluster Registry) with the following commands:
srvctl config database -d <db_name>
[oracle@db01 dbca]$ srvctl config database -d p10c
db01 p10c1 /databases/oracle/db
db02 p10c2 /databases/oracle/db
Also run <CRS_HOME>/bin/crs_stat to confirm that no resources for the deleted instance remain.
11. If this node had an ASM instance and the node will no longer be part of the cluster, remove the ASM instance with:
srvctl stop asm -n <nodename>
[oracle@db01 dbca]$ srvctl stop asm -n db05
srvctl remove asm -n <nodename>
[oracle@db01 dbca]$ srvctl remove asm -n db05
Verify that asm is removed with:
srvctl config asm -n <nodename>
[oracle@db01 dbca]$ srvctl config asm -n db05
(no output is returned; before removal this command would show the ASM instance name and ASM home)
Delete the ASM directories on the node being deleted:
rm -r $ORACLE_BASE/admin/+ASM
[root@db05 admin]# rm -rf +ASM
rm -f $ORACLE_HOME/dbs/*ASM*
[root@db05 dbs]# rm -rf *ASM*
Remove the ASMLib packages on db05:
[root@db05 db]# /etc/init.d/oracleasm stop
Unmounting ASMlib driver filesystem: [ OK ]
Unloading module “oracleasm”: [ OK ]
[root@db05 db]# rpm -qa | grep oracleasm
oracleasmlib-2.0.2-1
oracleasm-2.6.9-55.ELsmp-2.0.3-1
oracleasm-support-2.0.3-1
[root@db05 db]# rpm -ev oracleasm-support-2.0.3-1 oracleasm-2.6.9-55.ELsmp-2.0.3-1 oracleasmlib-2.0.2-1
warning: /etc/sysconfig/oracleasm saved as /etc/sysconfig/oracleasm.rpmsave
[root@db05 db]# rm -f /etc/sysconfig/oracleasm.rpmsave
[root@db05 db]# rm -f /etc/rc.d/init.d/oracleasm
[root@db05 db]# rm -f /etc/rc0.d/*oracleasm*
[root@db05 db]# rm -f /etc/rc1.d/*oracleasm*
[root@db05 db]# rm -f /etc/rc2.d/*oracleasm*
[root@db05 db]# rm -f /etc/rc3.d/*oracleasm*
[root@db05 db]# rm -f /etc/rc4.d/*oracleasm*
[root@db05 db]# rm -f /etc/rc5.d/*oracleasm*
[root@db05 db]# rm -f /etc/rc6.d/*oracleasm*
B. Remove the Node from the Cluster
—————————————-
Once the instance has been deleted, the process of removing the node from the cluster is a manual process. This is accomplished by running scripts on the deleted node to remove the CRS install, as well as scripts on the remaining nodes to update the node list. The following steps assume that the node to be removed is still functioning.
1. Remove the listener and nodeapps on the node being deleted:
1). First stop the nodeapps on the node you are removing. Assuming you have already removed the ASM instance, run the following from one of the remaining nodes:
# srvctl stop nodeapps -n <nodename>
[oracle@db01 dbs]$ srvctl stop nodeapps -n db05
[oracle@db01 dbs]$ crs_stat -t
Name Type Target State Host
————————————————————
ora.p10c.db application ONLINE ONLINE db02
ora….c1.inst application ONLINE ONLINE db01
ora….c2.inst application ONLINE ONLINE db02
ora….SM1.asm application ONLINE ONLINE db01
ora….01.lsnr application ONLINE ONLINE db01
ora….b01.gsd application ONLINE ONLINE db01
ora….b01.ons application ONLINE ONLINE db01
ora….b01.vip application ONLINE ONLINE db01
ora….SM2.asm application ONLINE ONLINE db02
ora….02.lsnr application ONLINE ONLINE db02
ora….b02.gsd application ONLINE ONLINE db02
ora….b02.ons application ONLINE ONLINE db02
ora….b02.vip application ONLINE ONLINE db02
ora….05.lsnr application OFFLINE OFFLINE
ora….b05.gsd application OFFLINE OFFLINE
ora….b05.ons application OFFLINE OFFLINE
ora….b05.vip application OFFLINE OFFLINE
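Before continuing with netca, you can also confirm directly that the nodeapps on db05 are stopped; a quick check:
[oracle@db01 dbs]$ srvctl status nodeapps -n db05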
2). Run netca. Choose “Cluster Configuration”.
3). Select only the node you are removing and click Next.

4). Choose “Listener Configuration” and click next.
5). To delete the listener: Choose “Delete” and delete any listeners configured on the node you are removing.
6). Run <CRS_HOME>/bin/crs_stat. Make sure that all database resources are running on nodes that are going to be kept. For example:
NAME=ora.<db_name>.db
TYPE=application
TARGET=ONLINE
STATE=ONLINE on <node2>
[oracle@db05 db]$ crs_stat -t
Name Type Target State Host
————————————————————
ora.p10c.db application ONLINE ONLINE db02
ora….c1.inst application ONLINE ONLINE db01
ora….c2.inst application ONLINE ONLINE db02
ora….SM1.asm application ONLINE ONLINE db01
ora….01.lsnr application ONLINE ONLINE db01
ora….b01.gsd application ONLINE ONLINE db01
ora….b01.ons application ONLINE ONLINE db01
ora….b01.vip application ONLINE ONLINE db01
ora….SM2.asm application ONLINE ONLINE db02
ora….02.lsnr application ONLINE ONLINE db02
ora….b02.gsd application ONLINE ONLINE db02
ora….b02.ons application ONLINE ONLINE db02
ora….b02.vip application ONLINE ONLINE db02
ora….b05.gsd application OFFLINE OFFLINE
ora….b05.ons application OFFLINE OFFLINE
ora….b05.vip application OFFLINE OFFLINE
Ensure that no database resource is running on the node to be removed (this step was not needed in our case, since crs_stat already shows everything on the remaining nodes). Use <CRS_HOME>/bin/crs_relocate to move a resource if necessary.
Example: crs_relocate ora.<db_name>.db
7). As the root user, remove the nodeapps on the node you are removing.
# srvctl remove nodeapps -n <nodename>
[root@db05 db]# srvctl remove nodeapps -n db05
Please confirm that you intend to remove the node-level applications on node db05 (y/[n]) y
[root@db05 db]# crs_stat -t
Name Type Target State Host
————————————————————
ora.p10c.db application ONLINE ONLINE db02
ora….c1.inst application ONLINE ONLINE db01
ora….c2.inst application ONLINE ONLINE db02
ora….SM1.asm application ONLINE ONLINE db01
ora….01.lsnr application ONLINE ONLINE db01
ora….b01.gsd application ONLINE ONLINE db01
ora….b01.ons application ONLINE ONLINE db01
ora….b01.vip application ONLINE ONLINE db01
ora….SM2.asm application ONLINE ONLINE db02
ora….02.lsnr application ONLINE ONLINE db02
ora….b02.gsd application ONLINE ONLINE db02
ora….b02.ons application ONLINE ONLINE db02
ora….b02.vip application ONLINE ONLINE db02
2. Remove the Oracle Database Software from the Node to be Deleted
1). On db05, make sure ORACLE_HOME is set correctly:
[oracle@db05 db]$ echo $ORACLE_HOME
/databases/oracle/db
2). Update the node list for the Oracle Database software (remove db05):
[oracle@db05 bin]$ export DISPLAY=10.50.133.143:0
[oracle@db05 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
Starting Oracle Universal Installer…
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /databases/oracle/oraInventory
‘UpdateNodeList’ was successful.
Note: Although the OUI does not launch an installer GUI, the DISPLAY environment variable still needs to be set!
3). De-install the Oracle Database Software
Next, run the OUI from the node to be deleted (db05) to de-install the Oracle Database software. Make certain that you choose the home to be removed and not just the products under that home.
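As a sketch (assuming the same oui/bin layout shown later for the CRS home), the installer can be launched from the database home of the node being removed; the exact wording of the de-install option on the OUI welcome screen may differ slightly between versions:
[oracle@db05 bin]$ cd $ORACLE_HOME/oui/bin
[oracle@db05 bin]$ ./runInstaller
Then choose the de-install/remove products option and select the /databases/oracle/db home.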

4). Update the node list for the remaining nodes in the cluster (on any of the remaining nodes):
[oracle@db01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={db01,db02}"
Starting Oracle Universal Installer…
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /databases/oracle/oraInventory
‘UpdateNodeList’ was successful.
3. Remove the Node to be Deleted from Oracle Clusterware
1). Remove Node-Specific Interface Configuration
[oracle@db05 db]$ export ORA_CRS_HOME=/databases/oracle/crs
[oracle@db05 db]$ grep '^remoteport' $ORA_CRS_HOME/opmn/conf/ons.config
remoteport=6200
[oracle@db05 bin]$ ./racgons remove_config db05:6200
racgons: Existing key value on db05 = 4948.
WARNING: db05:6200 does not exist.
[oracle@db05 bin]$ ./racgons remove_config db05:4948
racgons: Existing key value on db05 = 4948.
racgons: db05:4948 removed from OCR.
Note: the remoteport in ons.config (6200) did not match the port registered in the OCR (4948), so the command was re-run with the registered value.
[oracle@db05 bin]$ ./oifcfg delif -node db05
PROC-4: The cluster registry key to be operated on does not exist.
PRIF-11: cluster registry error
Note: this error is expected at this point and can safely be ignored.
4. Disable Oracle Clusterware Applications
Running this script will stop the CRS stack and delete the ocr.loc file on the node to be removed. The nosharedvar option assumes the ocr.loc file is not on a shared file system.
While logged into db05 as the root user account, run the following:
[root@db05 install]# ./rootdelete.sh local nosharedvar nosharedhome
CRS-0210: Could not find resource ‘ora.db05.LISTENER_DB05.lsnr’.
CRS-0210: Could not find resource ‘ora.db05.ons’.
CRS-0210: Could not find resource ‘ora.db05.vip’.
CRS-0210: Could not find resource ‘ora.db05.gsd’.
Shutting down Oracle Cluster Ready Services (CRS):
Feb 17 15:24:19.153 | INF | daemon shutting down
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down…
Checking to see if Oracle CRS stack is down…
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in ‘/etc/oracle/scls_scr’
5. Delete Node from Cluster and Update OCR
Upon successful completion of the rootdelete.sh script, run the rootdeletenode.sh script to delete the node (db05) from the Oracle cluster and to update the Oracle Cluster Registry (OCR). This script should be run from an existing, available node in the cluster (db01) as the root user account.
Before executing rootdeletenode.sh, we need to know the node number associated with the node name to be deleted from the cluster. To determine the node number, run the following command as the oracle user account from db01:
[oracle@db01 bin]$ pwd
/databases/oracle/crs/bin
[oracle@db01 bin]$ olsnodes -n
db01 1
db02 2
db05 3
Note: note the node number in the output above; it is passed to rootdeletenode.sh together with the node name.
[root@db01 install]# pwd
/databases/oracle/crs/install
[root@db01 install]# ./rootdeletenode.sh db05,3
CRS-0210: Could not find resource ‘ora.db05.LISTENER_DB05.lsnr’.
CRS-0210: Could not find resource ‘ora.db05.ons’.
CRS-0210: Could not find resource ‘ora.db05.vip’.
CRS-0210: Could not find resource ‘ora.db05.gsd’.
CRS-0210: Could not find resource ora.db05.vip.
CRS nodeapps are deleted successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 14 values from OCR.
Key SYSTEM.css.interfaces.nodedb05 marked for deletion is not there. Ignoring.
Successfully deleted 5 keys from OCR.
Node deletion operation successful.
‘db05,3’ deleted successfully
[root@db01 install]# ../bin/olsnodes -n
db01 1
db02 2
6. Update Node List for Oracle Clusterware Software – (Remove db05)
From db05 as the oracle user:
[oracle@db05 bin]$ pwd
/databases/oracle/crs/oui/bin
[oracle@db05 bin]$ export ORA_CRS_HOME=/databases/oracle/crs
[oracle@db05 bin]$ export DISPLAY=10.50.133.143:0
[oracle@db05 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME CLUSTER_NODES="" -local CRS=true
Starting Oracle Universal Installer…
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /databases/oracle/oraInventory
‘UpdateNodeList’ was successful.
7. De-install Oracle Clusterware Software
[oracle@db05 bin]$ pwd
/databases/oracle/crs/oui/bin
[oracle@db05 bin]$ ./runInstaller
After the CRS software is de-installed, its home directory is removed as well.
8. Update Node List for Remaining Nodes in the Cluster
[oracle@db01 bin]$ export ORA_CRS_HOME=/databases/oracle/crs
[oracle@db01 bin]$ export DISPLAY=10.50.133.143:0
[oracle@db01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES={db01,db02}" CRS=true
Starting Oracle Universal Installer…
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /databases/oracle/oraInventory
‘UpdateNodeList’ was successful.
C. Reconfigure the OS and remaining hardware.
————————————————-
1. Check the tnsnames.ora files on the remaining nodes and remove any entries that still reference the deleted node.
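A quick way to find leftover entries is to grep for the deleted node name (and its VIP alias, if different) in the network configuration on each remaining node; a sketch, assuming the default tnsnames.ora location under the database home:
[oracle@db01 ~]$ grep -i db05 $ORACLE_HOME/network/admin/tnsnames.ora
[oracle@db02 ~]$ grep -i db05 $ORACLE_HOME/network/admin/tnsnames.ora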
2. Delete the ORACLE_HOME and CRS_HOME directories on the deleted node.
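On the deleted node this amounts to removing the two homes used throughout this document, if the OUI de-installs above did not already remove them; a sketch:
[root@db05 ~]# rm -rf /databases/oracle/db    # ORACLE_HOME
[root@db05 ~]# rm -rf /databases/oracle/crs   # CRS_HOME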
3. Next, as root, from the deleted node, verify that all init scripts and soft links are removed:
For Linux:
rm -f /etc/init.d/init.cssd
rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle
rm -f /etc/oratab
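As a final sanity check on the deleted node, confirm that no Clusterware daemons are still running; a sketch:
[root@db05 ~]# ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep
No output means the CRS stack is completely gone from this node.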