Hi everyone, hope you’re doing well!
I was working on a RAC migration project for a client, and I did the cluster install for the first environment. Both Grid Infrastructure and the databases are version 19c on Linux. As a DBA working on Linux, I always prefer to install the software using the silent option with a response file. No GUI!
The client DBA team then did the installation for the rest of the environments. They were basically copying the response file that I built to the other environments and replacing some variables such as the SCAN name, cluster nodes, cluster name, etc.
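To make the failure mode concrete, here is a minimal sketch of that cloning step. The parameter names follow the 19c gridsetup.rsp template; the file names and values are hypothetical, and the stand-in response file is tiny on purpose:

```shell
# Build a tiny stand-in response file for environment 1
# (hypothetical values; real gridsetup.rsp files have many more parameters).
cat > grid_env1.rsp <<'EOF'
oracle.install.crs.config.clusterName=cluster01
oracle.install.crs.config.gpnp.scanName=cluster01-scan
EOF

# Clone it for environment 2 and replace the environment-specific values.
# Forgetting any one of these substitutions (here, clusterName) is exactly
# how two clusters end up with the same name. (sed -i as used here is GNU sed.)
cp grid_env1.rsp grid_env2.rsp
sed -i \
  -e 's/^oracle.install.crs.config.clusterName=.*/oracle.install.crs.config.clusterName=cluster02/' \
  -e 's/^oracle.install.crs.config.gpnp.scanName=.*/oracle.install.crs.config.gpnp.scanName=cluster02-scan/' \
  grid_env2.rsp

grep clusterName grid_env2.rsp
```

A simple `diff grid_env1.rsp grid_env2.rsp` before running the installer would have caught the missed substitution.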
Well, while doing the installation in one of the environments, the DBA forgot to change the cluster name parameter. So, what happened? Two clusters configured with the same name.
The official documentation is very clear about this. You can find the information under “Cluster Name and SCAN Requirements”: if you have two or more clusters with the same name on your network, you must fix it.
If your cluster is below version 19c, you will need to deconfigure and reconfigure your cluster as described in Support Note: How to Change the Cluster Name in RAC (Requires -deconfig and (re)-config of Grid Infrastructure) (Doc ID 1967916.1).
Luckily, our client was running 19c, where the procedure is easier and faster. You can follow the new procedure as described in this blog post or, if you prefer, read it directly from the Support Note: Steps to rename the cluster name (Doc ID 2725377.1).
I will explain the steps here; first, however, it’s important to understand the prerequisites:
- Supported only for “Standalone Cluster”
- Not supported for SIHA
- Not supported if RHP Server or RHP Client is configured
- Grid Infrastructure stack should be up on all nodes when the “Cluster Rename Procedure” is performed
- Requires unique new cluster name
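As a quick sketch, the last few requirements can be checked up front with standard GI tools (these commands require a live Grid Infrastructure environment, so treat this as a checklist rather than something to paste blindly):

```shell
# GI stack must be up on every node (run from the Grid home environment):
crsctl check cluster -all

# Neither RHP Server nor RHP Client may be configured:
srvctl status rhpserver
srvctl status rhpclient
```

If any of these checks fails, resolve it before attempting the rename.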
Update: thank you, Maicon Carneiro, for catching this!
I also recommend checking whether the rhpclient resource is enabled and active.
If so, the DBA will need to unregister all workingcopies and the client metadata related to this cluster on the RHP Server before removing the rhpclient resource from the cluster that is being renamed.
After the renaming is complete, the RHP Client can be configured again with the new name, and the workingcopies can be registered manually as well. This is not a trivial task, but it can be done.
Since in my environment the DB server is not acting as an RHP Client, I did not perform any deletion of workingcopies, etc.
You can check whether the RHP Client is configured by running this:
srvctl status rhpclient
To list and delete workingcopies, and to delete the rhpclient resource, you need to use the rhpctl command. You can check the reference for rhpctl here.
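For reference, if rhpclient were configured, the cleanup would look roughly like this sketch (it must be run against a live RHP Server and client cluster; the workingcopy and client names below are hypothetical placeholders):

```shell
# On the RHP Server: list working copies, then remove the ones tied to
# the client cluster that will be renamed (names are hypothetical).
rhpctl query workingcopy
rhpctl delete workingcopy -workingcopy WC_DB19_CLIENT01

# Still on the RHP Server: remove the client metadata.
rhpctl delete client -client client01

# On the cluster being renamed: stop and remove the rhpclient resource.
srvctl stop rhpclient
srvctl remove rhpclient
```

After the rename, the client and its working copies would have to be registered again under the new cluster name.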
OK, we meet all the requirements listed above.
Let’s start our procedure with some checks.
First, let’s check that the cluster is running on all nodes. For both the grid and root users, the environment variables are set to point to the GRID_HOME:
[root@node01 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@node01 ~]#

[root@node02 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@node02 ~]#
On any node, as root, check the current cluster name:
[root@node01 ~]# cemutlo -n
wrongcluster
The client is not using RHP here. If you are, keep in mind that you will need to disable and remove the resource from the cluster in order to use the rename feature without a deconfig. Let’s check whether RHP is running. As the grid user on any node:
[grid@node01 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running
OK, we can see that RHP is enabled but not running. Let’s disable it:
[grid@node01 ~]$ srvctl disable rhpserver
[grid@node01 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is disabled
Rapid Home Provisioning Server is not running
Now that we disabled RHP, let’s remove it from the cluster:
[root@node01 ~]# srvctl remove rhpserver
Perfect! We are good to proceed! You’ll see that the cluster rename operation is pretty straightforward.
On the first node, as root user, perform the cluster rename operation:
[root@node01 ~]# crsctl rename cluster rightcluster
CRS-42004: successfully set the cluster name; restart Oracle High Availability Services on all nodes for new cluster name to take effect
Perfect! The cluster name is changed. Let’s restart the cluster services. As root on all nodes:
[root@node01 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node01'
CRS-2673: Attempting to stop 'ora.crf' on 'node01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node01'
CRS-2677: Stop of 'ora.crf' on 'node01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node01' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'node01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node01' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@node02 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node02'
CRS-2673: Attempting to stop 'ora.crf' on 'node02'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node02' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node02' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node02' succeeded
CRS-2677: Stop of 'ora.crf' on 'node02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node02'
CRS-2677: Stop of 'ora.gipcd' on 'node02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node02' has completed
Now that the cluster services are stopped on all nodes, let’s start them:
[root@node01 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'node01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'node01'
CRS-2676: Start of 'ora.mdnsd' on 'node01' succeeded
CRS-2676: Start of 'ora.evmd' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node01'
CRS-2676: Start of 'ora.gpnpd' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'node01'
CRS-2676: Start of 'ora.gipcd' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'node01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node01'
CRS-2676: Start of 'ora.cssdmonitor' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node01'
CRS-2676: Start of 'ora.diskmon' on 'node01' succeeded
CRS-2676: Start of 'ora.crf' on 'node01' succeeded
CRS-2676: Start of 'ora.cssd' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node01'
CRS-2672: Attempting to start 'ora.ctssd' on 'node01'
CRS-2676: Start of 'ora.ctssd' on 'node01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node01'
CRS-2676: Start of 'ora.asm' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node01'
CRS-2676: Start of 'ora.storage' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node01'
CRS-2676: Start of 'ora.crsd' on 'node01' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'node01'
CRS-6017: Processing resource auto-start for servers: node01
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node01'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node01'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'node01'
CRS-2672: Attempting to start 'ora.node01.vip' on 'node01'
CRS-2672: Attempting to start 'ora.ons' on 'node01'
CRS-2672: Attempting to start 'ora.node02.vip' on 'node01'
CRS-2672: Attempting to start 'ora.qosmserver' on 'node01'
CRS-2672: Attempting to start 'ora.cvu' on 'node01'
CRS-2676: Start of 'ora.cvu' on 'node01' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node01' succeeded
CRS-2676: Start of 'ora.node01.vip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'node01'
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'node01'
CRS-2676: Start of 'ora.node02.vip' on 'node01' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node01'
CRS-2676: Start of 'ora.scan2.vip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node01'
CRS-2676: Start of 'ora.scan1.vip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'node01'
CRS-2676: Start of 'ora.qosmserver' on 'node01' succeeded
CRS-2676: Start of 'ora.MGMTLSNR' on 'node01' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'node01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'node01' succeeded
CRS-2676: Start of 'ora.ons' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'node01'
CRS-2672: Attempting to start 'ora.RACTST.db' on 'node01'
CRS-2676: Start of 'ora.mgmtdb' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.chad' on 'node01'
CRS-2676: Start of 'ora.chad' on 'node01' succeeded
CRS-2676: Start of 'ora.RACTST.db' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.RACTST.preprod_batch_srvc.svc' on 'node01'
CRS-2672: Attempting to start 'ora.RACTST.DBRAC_svc.svc' on 'node01'
CRS-2676: Start of 'ora.RACTST.preprod_batch_srvc.svc' on 'node01' succeeded
CRS-2676: Start of 'ora.RACTST.DBRAC_svc.svc' on 'node01' succeeded
CRS-6016: Resource auto-start has completed for server node01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.

[root@node02 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'node02'
CRS-2672: Attempting to start 'ora.evmd' on 'node02'
CRS-2676: Start of 'ora.mdnsd' on 'node02' succeeded
CRS-2676: Start of 'ora.evmd' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node02'
CRS-2676: Start of 'ora.gpnpd' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'node02'
CRS-2676: Start of 'ora.gipcd' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'node02'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node02'
CRS-2676: Start of 'ora.cssdmonitor' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node02'
CRS-2672: Attempting to start 'ora.diskmon' on 'node02'
CRS-2676: Start of 'ora.diskmon' on 'node02' succeeded
CRS-2676: Start of 'ora.crf' on 'node02' succeeded
CRS-2676: Start of 'ora.cssd' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node02'
CRS-2672: Attempting to start 'ora.ctssd' on 'node02'
CRS-2676: Start of 'ora.ctssd' on 'node02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node02'
CRS-2676: Start of 'ora.asm' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node02'
CRS-2676: Start of 'ora.storage' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node02'
CRS-2676: Start of 'ora.crsd' on 'node02' succeeded
CRS-6017: Processing resource auto-start for servers: node02
CRS-2673: Attempting to stop 'ora.node02.vip' on 'node01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node01'
CRS-2672: Attempting to start 'ora.ons' on 'node02'
CRS-2672: Attempting to start 'ora.chad' on 'node02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node01'
CRS-2677: Stop of 'ora.node02.vip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.node02.vip' on 'node02'
CRS-2677: Stop of 'ora.scan1.vip' on 'node01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'node02'
CRS-2676: Start of 'ora.chad' on 'node02' succeeded
CRS-2676: Start of 'ora.node02.vip' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'node02'
CRS-2676: Start of 'ora.scan1.vip' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'node02'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'node02' succeeded
CRS-33672: Attempting to start resource group 'ora.asmgroup' on server 'node02'
CRS-2672: Attempting to start 'ora.asmnet1.asmnetwork' on 'node02'
CRS-2676: Start of 'ora.asmnet1.asmnetwork' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'node02'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'node02' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node02'
CRS-2676: Start of 'ora.ons' on 'node02' succeeded
CRS-2676: Start of 'ora.asm' on 'node02' succeeded
CRS-33676: Start of resource group 'ora.asmgroup' on server 'node02' succeeded.
CRS-2672: Attempting to start 'ora.RECOV_VP.dg' on 'node02'
CRS-2672: Attempting to start 'ora.DATAV_VP.dg' on 'node02'
CRS-2672: Attempting to start 'ora.REDO2_VP.dg' on 'node02'
CRS-2672: Attempting to start 'ora.REDO1_VP.dg' on 'node02'
CRS-2676: Start of 'ora.RECOV_VP.dg' on 'node02' succeeded
CRS-2676: Start of 'ora.DATAV_VP.dg' on 'node02' succeeded
CRS-2676: Start of 'ora.REDO2_VP.dg' on 'node02' succeeded
CRS-2676: Start of 'ora.REDO1_VP.dg' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.RACTST.db' on 'node02'
CRS-2676: Start of 'ora.RACTST.db' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.RACTST.DBRAC_svc.svc' on 'node02'
CRS-2676: Start of 'ora.RACTST.DBRAC_svc.svc' on 'node02' succeeded
CRS-6016: Resource auto-start has completed for server node02
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Now let’s reconfigure RHP, in case the client would like to start using it. As root on the first node:
[root@node01 ~]# srvctl add rhpserver -storage /opt/oracle/rhp_images -diskgroup GIMR_VP -pl_port 8896 -enableHTTPS YES
Now let’s check the RHP configuration:
[root@node01 ~]# srvctl config rhpserver
Storage base path: /opt/oracle/rhp_images
Disk Groups: GIMR_VP
Port number: 8896
Temporary Location:
Transfer port range:
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is individually enabled on nodes:
Rapid Home Provisioning Server is individually disabled on nodes:
Email address:
Mail server address:
Mail server port:
Transport Level Security disabled
HTTP Secure is enabled
Endpoint : gns
Let’s verify that RHP is configured and enabled, but not running (the same status we had before the cluster rename operation):
[grid@node01 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@node01 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running
As the last step, let’s check that the cluster name is now the right one:
[root@node01 ~]# cemutlo -n
rightcluster
You can also check the cluster name using the following command:
[grid@node01 ~]$ olsnodes -c
rightcluster
Perfect! Our cluster rename operation worked like a charm!
I hope it helps.
Peace,
Vinicius