Hi everyone,
Hope you’re doing well.
A few days ago, a client called about a problem applying RU 19.28 to the Grid Infrastructure of a 3-node cluster on RHEL 8 64-bit: opatchauto was failing during execution.
This is the command they were running to apply the patch:
/oracle/app/193/grid/OPatch/opatchauto apply /sapcd/stage_SBP_Ansible/lq2_cluster/37957391
They hit an error right after opatchauto started:
OPatchauto session is initiated at Wed Nov 12 23:15:46 2025
System initialization log file is /oracle/app/193/grid/cfgtoollogs/opatchautodb/systemconfig2025-11-12_11-15-46PM.log.

OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.
OPatchauto session completed at Wed Nov 12 23:15:59 2025
Time taken to complete the session 0 minute, 13 seconds

Topology creation failed.
Let’s take a look at the log (/oracle/app/193/grid/cfgtoollogs/opatchautodb/systemconfig2025-11-12_11-15-46PM.log). You will need to scroll down: the log has ~540 lines and the error is near the end:
2025-11-12 23:15:57,395 WARNING [1] oracle.dbsysmodel.driver.sdk.util.OsysUtility - Failed: oracle.cluster.common.NodeRole. CLSD/ADR initialization failed with return value -1 Verification cannot proceed The result of cluvfy command does not contain OVERALL_STATUS String.
2025-11-12 23:15:57,395 SEVERE [1] com.oracle.glcm.patch.auto.db.integration.model.productsupport.topology.TopologyCreator - Stacktrace :: oracle.dbsysmodel.driver.sdk.productdriver.ProductDriverException: Unable to determine if "/oracle/PWQ1/193" is a shared oracle home.
Luckily we found a My Oracle Support note that was really helpful:
In the note, the cause is listed as a directory permission issue, and the solution describes the steps to enable debug mode for cluvfy.
As grid user:
rm -rf /tmp/cvutrace
mkdir /tmp/cvutrace
touch /tmp/cvutrace/cvutrace.log.0
touch /tmp/cvutrace/cvutrace.log
chmod -R 776 /tmp/cvutrace
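The same setup can be scripted with a quick writability check before re-running opatchauto. This is just a sketch of the steps above; /tmp/cvutrace is the location from the note, and the echo messages are my own addition:

```shell
# Sketch of the cluvfy trace-directory setup from the note; run as the grid user.
TRACE_DIR=/tmp/cvutrace
rm -rf "$TRACE_DIR"
mkdir -p "$TRACE_DIR"
touch "$TRACE_DIR/cvutrace.log.0" "$TRACE_DIR/cvutrace.log"
chmod -R 776 "$TRACE_DIR"
# Fail fast if the directory is not writable; otherwise tracing would silently produce nothing
[ -w "$TRACE_DIR" ] && echo "trace dir ready" || echo "trace dir NOT writable"
```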
After this, as root user:
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=1
export EXECTASK_TRACE=true
Then, trigger the opatchauto apply again:
/oracle/app/193/grid/OPatch/opatchauto apply /sapcd/stage_SBP_Ansible/lq2_cluster/37957391
We face the same error again, but this time we have the trace file (cvutrace.log.0) inside the /tmp/cvutrace directory:
[root@NODE02 ~]# cd /tmp/cvutrace/
[root@NODE02 cvutrace]# ls -l
total 2156
-rwxrwxrw- 1 grid oinstall       0 Nov 12 23:17 cvutrace.log
-rwxrwxrw- 1 grid oinstall 2204976 Nov 12 23:18 cvutrace.log.0
[root@NODE02 cvutrace]#
Let’s check the content of this trace file. You can search for the string we found in the first log:
CLSD/ADR initialization failed with return value -1

[Thread-3] [ 2025-11-12 23:18:34.681 EST ] [StreamReader.run:62] In StreamReader.run
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>Oracle Clusterware infrastructure error in CRSCTL (OS PID 2942063): CLSD/ADR initialization failed with return value -1
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>1: clskec:has:CLSU:910 4 args[clsdAdr_CLSK_err][mod=clsdadr.c][loc=(:CLSD00050:)][msg=2025-11-12 23:18:34.708 (:CLSD00050:) dbgc_init_all failed with return code 49802. Detected in function clsdAdrInit at line number 1852. ]
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>2: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00281:)][msg=clsdAdrInit: Additional diagnostic data returned by the ADR component for dbgc_init_all failure:
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT> DIA-49802: missing read, write, or execute permission on specified ADR home directory [/oracle/app/grid/diag/crs/node02]
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>DIA-49801: actual permissions [rwxr-xr-x], expected minimum permissions [rwxrwx---] for effective user [oralq2]
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>DIA-48188: user missing read, write, or exec permission on specified directory
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>Linux-x86_64 Error: 13: Permission denied
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>Additional information: 2
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>Additional information: 504
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>Additional information: 16877
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>([all diagnostic data retrieved from ADR])]
[Thread-3] [ 2025-11-12 23:18:34.708 EST ] [StreamReader.run:66] OUTPUT>
[Thread-3] [ 2025-11-12 23:18:34.729 EST ] [StreamReader.run:66] OUTPUT>Oracle Clusterware active version on the cluster is [19.0.0.0.0]
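With a trace of ~2 MB, grep gets you to that section much faster than scrolling. The demo below uses a small stand-in file so it is self-contained; on the real node you would point log at /tmp/cvutrace/cvutrace.log.0 instead:

```shell
# Demo on a stand-in trace file; on the real system use log=/tmp/cvutrace/cvutrace.log.0
log=$(mktemp)
printf '%s\n' \
  '[Thread-3] [StreamReader.run:62] In StreamReader.run' \
  'OUTPUT>CLSD/ADR initialization failed with return value -1' \
  'OUTPUT>DIA-49802: missing read, write, or execute permission' > "$log"
# -n prints line numbers, -A 1 shows one line of context after each match
grep -n -A 1 'CLSD/ADR initialization failed' "$log"
rm -f "$log"
```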
Here is the important part:
DIA-49802: missing read, write, or execute permission on specified ADR home directory [/oracle/app/grid/diag/crs/node02]
DIA-49801: actual permissions [rwxr-xr-x], expected minimum permissions [rwxrwx---] for effective user [oralq2]
That’s it!
The permission for the ADR home directory is wrong!
The actual permission was [rwxr-xr-x] → 755
The minimum expected permission was [rwxrwx---] → 770
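If the symbolic-to-octal mapping isn’t second nature, GNU stat (available on RHEL) prints both forms; a throwaway directory makes a safe demo. As an aside, the "Additional information: 16877" in the trace is that same broken mode in decimal: 16877 = octal 040755, i.e. a directory with 755.

```shell
# Demo of the two modes from the DIA messages, on a scratch directory
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%a %A' "$d"    # 755 drwxr-xr-x -- the actual (broken) permissions
chmod 770 "$d"
stat -c '%a %A' "$d"    # 770 drwxrwx--- -- the minimum ADR expects
rmdir "$d"
```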
As root, let’s fix this:
cd /oracle/app/grid/diag/crs
chmod -R 770 node02/
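To confirm the recursive chmod really took everywhere, find can count anything that still deviates from 770. The sketch below demonstrates on a scratch tree so it is runnable anywhere; on the real node you would run the find (as root) against /oracle/app/grid/diag/crs/node02:

```shell
# Self-contained demo: fix permissions recursively, then verify nothing deviates.
root=$(mktemp -d)
mkdir -p "$root/node02/incident"
touch "$root/node02/incident/trace.trc"
chmod -R 755 "$root/node02"          # simulate the broken state from the trace
chmod -R 770 "$root/node02"          # the fix from the post
# Count entries still not 770; prints 0 when the fix applied everywhere
find "$root/node02" ! -perm 770 | wc -l
rm -rf "$root"
```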
After this, we triggered the opatchauto apply again and the patch was applied with no issues!
Hope it helps!
Peace.
Vinicius