The OCR file is automatically backed up every 4 hours by Oracle Clusterware and can also be backed up manually on demand.
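A manual backup can be taken at any time. A minimal sketch, run as root on any cluster node (the file is written to the Grid Infrastructure backup location reported by `ocrconfig -showbackup`):

```shell
# Take an on-demand OCR backup (run as root)
ocrconfig -manualbackup

# List only the manual backups
ocrconfig -showbackup manual
```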
There are several OCR recovery scenarios and methods, but first let's verify the location of the OCR files.
Connect as the grid user and run the command below:
grid@UAT:~$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1620
         Available space (kbytes) :     407948
         ID                       :   44765691
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
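Note the last line of the output: because ocrcheck was run by a non-privileged user, the logical corruption check was skipped. Running the same command as root performs the full check:

```shell
# Run as root so the logical corruption check is not bypassed
ocrcheck
```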
To view the OCR file name and path, run the following command:
oracle@UAT:~$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :      +DATA
oracle@UAT:~$
To view automatic/manual backup details, run the following command:
oracle@UAT:~$ ocrconfig -showbackup
UAT  2022/10/14 12:20:09  /u01/app/12.1.0/grid/cdata/wgdb-cluster/backup00.ocr  0
UAT  2022/10/14 08:20:03  /u01/app/12.1.0/grid/cdata/wgdb-cluster/backup01.ocr  0
UAT  2022/10/14 04:19:56  /u01/app/12.1.0/grid/cdata/wgdb-cluster/backup02.ocr  0
UAT  2022/10/13 08:19:30  /u01/app/12.1.0/grid/cdata/wgdb-cluster/day.ocr  0
UAT  2022/10/05 20:16:38  /u01/app/12.1.0/grid/cdata/wgdb-cluster/week.ocr  0
PROT-25: Manual backups for the Oracle Cluster Registry are not available
oracle@UAT:~$
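As the listing shows, the automatic backups live under the Grid home on a local filesystem. The backup location can be moved, ideally to shared storage visible to all nodes, with `ocrconfig -backuploc`. A sketch, assuming a hypothetical shared path /u02/ocr_backups:

```shell
# Redirect automatic OCR backups to shared storage (run as root);
# /u02/ocr_backups is a hypothetical path on a shared filesystem.
ocrconfig -backuploc /u02/ocr_backups
```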
Below are some of the OCR file restore procedures:
Restore from autogenerated backup:
As the root user, stop Clusterware on all nodes:
crsctl stop crs [-f]
Restore the most recent valid backup copy identified by ocrconfig -showbackup:
ocrconfig -restore /u01/app/12.1.0/grid/cdata/wgdb-cluster/backup02.ocr
After the restore completes, restart CRS and verify the OCR across all nodes:
crsctl start crs
cluvfy comp ocr -n all -verbose
Recover the OCR when its ASM diskgroup is corrupted or cannot be mounted:
Stop the cluster on all nodes:
crsctl stop crs [-f]
Start the clusterware in exclusive mode:
crsctl start crs -excl -nocrs
Connect to the local ASM instance on the node and recreate the ASM diskgroup:
SQL> drop diskgroup OCR force including contents;
SQL> create diskgroup OCR external redundancy disk 'diskname'
     attribute 'COMPATIBLE.asm'='12.1.0';
After the diskgroup is recreated, restore the most recent valid OCR backup and recreate the voting disk:
ocrconfig -restore /u01/app/12.1.0/grid/cdata/wgdb-cluster/backup02.ocr
crsctl replace votedisk +OCR
Finally, shut down the clusterware stack (which is still running in exclusive mode) and restart it normally on all nodes:
crsctl stop crs
crsctl start crs
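The diskgroup-recovery steps above can be sketched as a single runbook script. This is a sketch, run as root on one node; the backup path and diskgroup name follow this article's example and must be adjusted for your cluster:

```shell
#!/bin/sh
# Sketch of the OCR/ASM-diskgroup recovery runbook described above.
# Paths and names are taken from this article's example environment.
set -e

crsctl stop crs -f                 # stop the stack on every node first
crsctl start crs -excl -nocrs      # start this node in exclusive mode, without CRS

# Recreate the OCR diskgroup from the local ASM instance (see the SQL above),
# then restore the most recent valid OCR backup and relocate the voting disk:
ocrconfig -restore /u01/app/12.1.0/grid/cdata/wgdb-cluster/backup02.ocr
crsctl replace votedisk +OCR

crsctl stop crs                    # leave exclusive mode
crsctl start crs                   # normal start (repeat on the remaining nodes)

# Verify the restored registry
ocrcheck
cluvfy comp ocr -n all -verbose
```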