Oracle Standby/DR Creation


Nowadays HA (High Availability) and DR (Disaster Recovery) solutions are mandatory for mission-critical databases. In the Oracle world this is achieved with a standby database managed by Oracle Data Guard.

We can create an Oracle standby/Data Guard configuration using the steps below:

Prepare Primary Site:


==>Enable Force Logging.
==>Create spfile from the existing pfile and modify Data Guard related init parameters.
==>Enable Archivelog mode.
==>Create the SRLs (Standby Redo Logs).
==>Backup the database for standby.
==>Create pfile for the standby database.
==>Update the tnsnames.ora.

1. Enable Force Logging.

SYS@testdb> select NAME, FORCE_LOGGING from v$database;
NAME      FOR
--------- ---
testdb     YES

If Force Logging is not enabled, use the following command to enable it.
SYS@testdb> alter database force logging;

2. Modify DataGuard related init Parameters

Login to node oraceprodhost01
[testdb oracle@oraceprodhost01 ~]$ cd $ORACLE_HOME/dbs
[testdb oracle@oraceprodhost01 ~]$ cp -pr inittestdb1.ora inittestdb1.ora_<date>

Login to the node hosting instance testdb2
[testdb2 oracle@oraceprodhost01 ~]$ cd $ORACLE_HOME/dbs
[testdb2 oracle@oraceprodhost01 ~]$ cp -pr inittestdb2.ora inittestdb2.ora_<date>

Create the spfile in SQL*Plus:
SYS@testdb2> create spfile='+DATA/testdb/PARAMETERFILE/spfiletestdb.ora' from pfile='/tmp/inittestdb.ora';

INFO: Parameter entries for the Data Guard configuration on the primary site

DB_NAME=testdb
DB_UNIQUE_NAME=testdb
LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb,testdbdr)'
LOG_ARCHIVE_DEST_1='LOCATION=+RECO VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=testdb'
LOG_ARCHIVE_DEST_2='SERVICE=testdbdr SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdbdr'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_MAX_PROCESSES=8
FAL_SERVER=testdbdr
DB_FILE_NAME_CONVERT='+DATADR','+DATA'
LOG_FILE_NAME_CONVERT= '+RECODR','+RECO','+DATADR','+DATA'
STANDBY_FILE_MANAGEMENT=AUTO

The db_unique_name parameter was already set to the appropriate value during the initial creation of the RAC database.
The log_archive_dest_state_n and remote_login_passwordfile parameters default to ENABLE and EXCLUSIVE respectively, so only the parameters mentioned above need to be changed here.

INFO: The entries below can be executed from either primary node.

SYS@testdb2> alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb,testdbdr)' scope=both sid='*';
SYS@testdb2> alter system set LOG_ARCHIVE_DEST_1='LOCATION=+RECO VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=testdb' scope=both sid='*';
SYS@testdb2> alter system set LOG_ARCHIVE_DEST_2='SERVICE=testdbdr SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdbdr' scope=both sid='*';
SYS@testdb2> alter system set log_archive_max_processes=8 scope=both sid='*';
SYS@testdb2> alter system set fal_server=testdbdr scope=both sid='*';
SYS@testdb2> alter system set LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' scope=spfile sid='*';
SYS@testdb2> alter system set db_file_name_convert='+DATADR','+DATA' scope=spfile sid='*';
SYS@testdb2> alter system set log_file_name_convert='+RECODR','+RECO','+DATADR','+DATA' scope=spfile sid='*';
SYS@testdb2> alter system set standby_file_management=AUTO scope=both sid='*';
SYS@testdb2> alter system set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE scope=spfile sid='*';

SYS@testdb2> set linesize 500 pages 0
SYS@testdb2> col value for a90
SYS@testdb2> col name for a50
SYS@testdb2> select name, value from v$parameter where name in ('db_name','db_unique_name','log_archive_config', 'log_archive_dest_1','log_archive_dest_2','log_archive_dest_state_1','log_archive_dest_state_2','remote_login_passwordfile','log_archive_format','log_archive_max_processes','fal_server','db_file_name_convert','log_file_name_convert', 'standby_file_management');

3. Enable Archivelog Mode.

SYS@testdb2> archive log list;

If the database is not in archivelog mode, convert it to archivelog mode.

[testdb2 oracle@oraceprodhost01 ~]$ srvctl stop database -d testdb 
SYS@testdb2> startup mount
SYS@testdb2> alter database archivelog;
SYS@testdb2> alter database open;
[testdb2 oracle@oraceprodhost01 ~]$ srvctl stop database -d testdb
[testdb2 oracle@oraceprodhost01 ~]$ srvctl start database -d testdb 

4. Create the SRLs (Standby Redo Logs)

There should be a minimum of (threads) * (groups per thread + 1) SRLs created on the standby database. There are 2 threads with 8 groups per thread in this configuration on the primary side, so at least 2 * (8 + 1) = 18 SRLs need to be created. Let's get the total groups and the max size of the logfiles from v$log.

SYS@testdb2> select max (bytes), count (1) from v$log;
MAX(BYTES)   COUNT(1)
---------- ----------
4294967296          8
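The SRL arithmetic and the repetitive ADD STANDBY LOGFILE statements below lend themselves to a small script. This is only a sketch: the thread count, groups per thread, max bytes, and the starting group number 17 are taken from this configuration and must be adjusted for yours.

```shell
#!/bin/sh
# Values from this post's configuration -- adjust for your database.
THREADS=2
GROUPS_PER_THREAD=8
MAX_BYTES=4294967296     # max(bytes) from v$log

# Minimum SRLs = threads * (groups per thread + 1)
MIN_SRLS=$(( THREADS * (GROUPS_PER_THREAD + 1) ))
SIZE_MB=$(( MAX_BYTES / 1024 / 1024 ))

echo "Minimum standby redo logs required: ${MIN_SRLS}"
echo "Each SRL sized: ${SIZE_MB}M"

# Generate the ADD STANDBY LOGFILE statements, numbering groups
# after the last online redo log group (17 is assumed here).
GROUP=17
for T in $(seq 1 "${THREADS}"); do
  for i in $(seq 1 $(( GROUPS_PER_THREAD + 1 ))); do
    echo "alter database add standby logfile thread ${T} group ${GROUP} ('+DATA','+RECO') size ${SIZE_MB}M;"
    GROUP=$(( GROUP + 1 ))
  done
done
```

Pipe the output into SQL*Plus after reviewing it, or just use it to cross-check the hand-written statements below.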

Add the standby redo logs, with one member each in +DATA and +RECO:

SYS@testdb2> alter system set standby_file_management=manual scope=both sid='*';  
SYS@testdb2> alter database add standby logfile thread 1 group 17 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 18 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 19 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 20 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 21 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 22 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 23 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 24 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 1 group 25 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 26 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 27 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 28 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 29 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 30 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 31 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 32 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 33 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter database add standby logfile thread 2 group 34 ('+DATA','+RECO') size 4096M; --Completed (02Mar)
SYS@testdb2> alter system set standby_file_management=auto scope=both sid='*';

INFO: To Check Standby logfiles:
SYS@testdb2> select group#,thread#,bytes,status from v$standby_log;

5. Backup the Database for Standby 

Login to RMAN on either primary node.
NOTE: Required when the backup is not on a shared directory.
[testdb2 oracle@oraceprodhost01 ~]$ rman target /
run
{
sql "alter system switch logfile";
backup database;
backup current controlfile for standby;  # needed for the standby
backup current controlfile for standby format '/backup/testdb/02-03-2018/testdb_oraclehostprod_Standby_cntrl_%Y%M%D_%s_%p_%t.bk';  # the variant actually used
sql "alter system archive log current";
}
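Before shipping the backup to the DR site, it is worth verifying that a standby controlfile piece actually exists in the backup directory. The sketch below uses a temporary directory and a dummy file name as stand-ins for the real NFS path used in this post:

```shell
#!/bin/sh
# Sketch only: BACKUP_DIR stands in for /backup/testdb/<date>.
# A dummy piece is created so the check is self-contained; RMAN writes the real one.
BACKUP_DIR=$(mktemp -d)
touch "${BACKUP_DIR}/testdb_Standby_cntrl_20180302_100_1_970000000.bk"

# Confirm at least one standby controlfile piece is present.
CNT=$(ls "${BACKUP_DIR}" | grep -c 'Standby_cntrl')
if [ "${CNT}" -ge 1 ]; then
  echo "standby controlfile backup found (${CNT} piece(s))"
else
  echo "ERROR: no standby controlfile backup piece in ${BACKUP_DIR}" >&2
fi
```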

6. Create pfile for standby database.

The database is not running on the spfile, so reference the spfile by its explicit path. Login to either primary node.
SYS@testdb2> create pfile='/tmp/pfile_for_standby.ora' from spfile='+DATA/testdb/spfiletestdb.ora';

7. Update TNSNAMES.ora (on BOTH primary nodes)

[testdb oracle@oraceprodhost01 ~]$ vi /u01/app/12.1.0.2/grid/network/admin/tnsnames.ora
[testdb2 oracle@oraceprodhost01 ~]$ vi /u01/app/12.1.0.2/grid/network/admin/tnsnames.ora
testdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclehostprod-SCAN)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdb)
    )
  )

testdb1=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=oraclehostprod-SCAN)(PORT=1522))
            (CONNECT_DATA=
                (SERVICE_NAME=testdb)
                (INSTANCE_NAME=testdb1)
            )
        )
testdb2=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=oraclehostprod-SCAN)(PORT=1522))
            (CONNECT_DATA=
                (SERVICE_NAME=testdb)
                (INSTANCE_NAME=testdb2)
            )
        )
Note: Copy this tnsnames.ora to $ORACLE_HOME/network/admin on every instance so that all instances use the same file.
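Keeping tnsnames.ora identical everywhere can be scripted with a simple scp loop. This is a sketch in dry-run mode; the host list is an assumption and must be replaced with your actual node names before unsetting DRY_RUN:

```shell
#!/bin/sh
# Distribute one tnsnames.ora to every node. DRY_RUN=1 only prints the commands.
TNS=/u01/app/12.1.0.2/grid/network/admin/tnsnames.ora
HOSTS="oraceprodhost01 oraclehoststdA oraclehoststdB"   # assumed host names
DRY_RUN=1

for h in ${HOSTS}; do
  CMD="scp ${TNS} ${h}:${TNS}"
  if [ -n "${DRY_RUN}" ]; then
    echo "[dry-run] ${CMD}"
  else
    ${CMD}
  fi
done
```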

Prepare Standby Site for DataGuard Configuration


==>Copy the RMAN backup and pfile_for_standby.ora.
==>Copy the password file.
==>Create required directories for Standby Instance.
==>Modify pfile_for_standby.ora file.
==>Copy the  tnsnames.ora  file from primary.
==>Create the ASM directories under the Diskgroups.

1. Copy the RMAN backup and pfile_for_standby.ora

Copy the backups from primary node 1 (NDC2) to the standby node (NDC3) under the same location (NFS).
Also copy the pfile that was created for the standby on primary node 1 to dr-node1.
[testdb2 oracle@oraceprodhost01 ~] scp /tmp/pfile_for_standby.ora oraclehoststdA:/backup/testdb/pfile_for_standby.ora

2. Copy the password file

Copy the password files under $ORACLE_HOME/dbs from the primary nodes to the standby nodes, renaming them orapwtestdbdr1 on oraclehoststdA and orapwtestdbdr2 on oraclehoststdB:

[testdb oracle@oraceprodhost01 ~] scp $ORACLE_HOME/dbs/orapwtestdb1 oraclehoststdA:$ORACLE_HOME/dbs/orapwtestdbdr1
[testdb2 oracle@oraceprodhost01 ~] scp $ORACLE_HOME/dbs/orapwtestdb2 oraclehoststdB:$ORACLE_HOME/dbs/orapwtestdbdr2

3. Create required directories for Standby Instance

On oraclehoststdA:
[ oracle@oraclehoststdA ~] mkdir -p /u01/app/oracle/admin/testdbdr/adump
[ oracle@oraclehoststdA ~] mkdir -p /u01/app/oracle/diag/rdbms/testdbdr/testdbdr1
[ oracle@oraclehoststdA ~] cd /u01/app/oracle/diag/rdbms/testdbdr/testdbdr1
[ oracle@oraclehoststdA ~] mkdir trace cdump -> Create 2 folders named trace & cdump
 
On oraclehoststdB:
[ oracle@oraclehoststdB ~] mkdir -p /u01/app/oracle/admin/testdbdr/adump
[ oracle@oraclehoststdB ~] mkdir -p /u01/app/oracle/diag/rdbms/testdbdr/testdbdr2
[ oracle@oraclehoststdB ~] cd /u01/app/oracle/diag/rdbms/testdbdr/testdbdr2
[ oracle@oraclehoststdB ~] mkdir trace cdump -> Create 2 folders named trace & cdump


The location of these directories may differ depending on the database parameter settings.

4. Modify pfile_for_standby.ora file 

The parameters listed below need to be modified; the rest remain the same on both the primary and the standby.
control_files
db_file_name_convert
db_unique_name
fal_server
log_archive_dest_1
log_archive_dest_2
log_file_name_convert
remote_listener
local_listener

Comment out below parameters in parameter file:
cluster_database
dg_broker_config_file1
dg_broker_config_file2
dg_broker_start
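These pfile edits can be applied mechanically with sed. The sketch below works on a throwaway copy containing made-up entries; the standby-side values shown (db_unique_name='testdbdr', fal_server='testdb') are assumptions you must review by hand against your environment:

```shell
#!/bin/sh
# Sketch: apply the standby-side pfile edits described above to a sample pfile.
PFILE=$(mktemp)   # stands in for /backup/testdb/pfile_for_standby.ora
cat > "${PFILE}" <<'EOF'
*.cluster_database=true
*.db_unique_name='testdb'
*.fal_server='testdbdr'
*.dg_broker_start=TRUE
EOF

# Comment out parameters that must not be set while restoring the standby.
sed -i -e 's/^\*\.cluster_database/#&/' \
       -e 's/^\*\.dg_broker/#&/' "${PFILE}"

# Flip parameters whose value differs on the standby side.
sed -i -e "s/^\*\.db_unique_name=.*/*.db_unique_name='testdbdr'/" \
       -e "s/^\*\.fal_server=.*/*.fal_server='testdb'/" "${PFILE}"

RESULT=$(cat "${PFILE}")
echo "${RESULT}"
rm -f "${PFILE}"
```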

5. Copy the tnsnames.ora file from primary

[testdb oracle@oraceprodhost01 ~] scp /u01/app/12.1.0.2/grid/network/admin/tnsnames.ora oraclehoststdA:/u01/app/12.1.0.2/grid/network/admin/
[testdb2 oracle@oraceprodhost01 ~] scp /u01/app/12.1.0.2/grid/network/admin/tnsnames.ora oraclehoststdB:/u01/app/12.1.0.2/grid/network/admin/

6. Create the ASM directories under the DiskGroups (oraclehoststdA or oraclehoststdB)

[+ASM2 grid@oraclehoststdB ~] asmcmd
ASMCMD> cd +DATADR/
ASMCMD> mkdir testdbDR
ASMCMD> cd +DATADR/testdbDR
ASMCMD> mkdir PARAMETERFILE DATAFILE CONTROLFILE TEMPFILE ONLINELOG STANDBYLOG
 
ASMCMD> cd +RECODR
ASMCMD> mkdir testdbDR
ASMCMD> cd +RECODR/testdbDR
ASMCMD> mkdir ONLINELOG STANDBYLOG CONTROLFILE

Create the Physical Standby Database


==>Start the Instance in NOMOUNT using the pfile_for_standby.ora.
==>Restore the Standby Database using RMAN DUPLICATE command.
==>Create the Online Redo logs and Standby redo logs.
==>Modify the TNSNAMES.ORA of Standby to add the Standby Database’s TNS alias.
==>Modify the TNSNAMES.ORA of Primary to add the Standby Database’s TNS alias.
==>Start the Managed Recovery Process
==>Create the spfile and start the database on both the nodes using spfile.
==>Register the New Standby database to the OCR.

1. Start the Instance in NOMOUNT using the pfile_for_standby.ora

Add the below entry in /etc/oratab files and start the testdbdr1 instance using pfile_for_standby.ora file.
[oracle@oraclehoststdA ~] vi /etc/oratab
testdbdr:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
testdbdr1:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
[oracle@oraclehoststdB ~] vi /etc/oratab
testdbdr:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
testdbdr2:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
[oracle@oraclehoststdA ~]$ . oraenv
ORACLE_SID = [-] ? testdbdr
[testdbdr1 oracle@oraclehoststdA ~]$ sqlplus / as sysdba
SYS@testdbdr1> startup nomount pfile=/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/pfile_for_standby.ora

Mandatory: Add a static listener entry for the DR database on the node where you will initiate the restore. In our case we did it from node oraclehoststdA.
vi /u01/app/12.1.0.2/grid/network/admin/listener.ora
==> Add Below entry to listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = testdbdr)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = testdbdr)
    )
  )
 
Bounce the listener:
lsnrctl stop LISTENER
lsnrctl start LISTENER
lsnrctl status LISTENER ==> The service status for the DR database should not be BLOCKED.
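The BLOCKED check can be scripted by grepping the listener status output. In the sketch below STATUS holds sample output for illustration; in practice you would capture the real output with STATUS=$(lsnrctl status LISTENER):

```shell
#!/bin/sh
# Sketch: fail fast if the DR service is still BLOCKED in the listener output.
# Sample output is hard-coded here so the sketch is self-contained.
STATUS='Service "testdbdr" has 1 instance(s).
  Instance "testdbdr", status UNKNOWN, has 1 handler(s) for this service...'

if echo "${STATUS}" | grep -q "BLOCKED"; then
  echo "DR service is BLOCKED - RMAN duplicate will not be able to connect"
else
  echo "listener registration looks OK for duplicate"
fi
```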

2. Restore the RMAN backup using DUPLICATE DATABASE option (on either oraclehoststdA or oraclehoststdB)

Connect to the Target database (testdb) and auxiliary instance (testdbdr) from dr-node1(oraclehoststdA) host to start the DUPLICATE.

[testdbdr1 oracle@oraclehoststdA ~]$ rman target sys/*******@testdb auxiliary /
run
{
allocate channel d1 device type disk;
allocate channel d2 device type disk;
allocate channel d3 device type disk;
allocate channel d4 device type disk;
allocate auxiliary channel ch1 type disk;
allocate auxiliary channel ch2 type disk;
allocate auxiliary channel ch3 type disk;
allocate auxiliary channel ch4 type disk;
DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
release channel ch1;
release channel ch2;
release channel ch3;
release channel ch4;
release channel d1;
release channel d2;
release channel d3;
release channel d4;
}

3.  Modify the TNSNAMES.ORA of Standby to add the Standby Database’s TNS alias

The existing tnsnames.ora does not contain the TNS alias of the standby database.
It only contains TNS aliases for the primary database, because this tnsnames.ora was copied over from the primary.
Add the TNS aliases below for the standby database/instances to the tnsnames.ora file on both standby hosts.
[testdbdr1 oracle@oraclehoststdA ~]$ vi /u01/app/12.1.0.2/grid/network/admin/tnsnames.ora
[testdbdr2 oracle@oraclehoststdB ~]$ vi /u01/app/12.1.0.2/grid/network/admin/tnsnames.ora
testdbdr =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclehoststd-SCAN)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdbdr)
    )
  )

testdbdr1=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=oraclehoststd-SCAN)(PORT=1522))
            (CONNECT_DATA=
                (SERVICE_NAME=testdbdr)
                (INSTANCE_NAME=testdbdr1)
            )
        )
testdbdr2=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=oraclehoststd-SCAN)(PORT=1522))
            (CONNECT_DATA=
                (SERVICE_NAME=testdbdr)
                (INSTANCE_NAME=testdbdr2)
            )
        )

4. Modify the TNSNAMES.ORA of Primary to add the Standby Database’s TNS alias

Add the TNS aliases below for the standby database/instances to the tnsnames.ora file on both primary hosts.
testdbdr =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclehoststd-SCAN)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdbdr)
    )
  )
testdbdr1=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=oraclehoststd-SCAN)(PORT=1522))
            (CONNECT_DATA=
                (SERVICE_NAME=testdbdr)
                (INSTANCE_NAME=testdbdr1)
            )
        )
testdbdr2=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=oraclehoststd-SCAN)(PORT=1522))
            (CONNECT_DATA=
                (SERVICE_NAME=testdbdr)
                (INSTANCE_NAME=testdbdr2)
            )
        )

5. Start the Managed Recovery Process

Start the managed recovery process on dr-node1 (oraclehoststdA) and verify that log transport and log apply are happening. The alert log is a quick and easy way to see whether log transport, gap resolution, and log apply are working as expected. Start a tail -f on the alert logs on both standby nodes before starting the MRP.
SYS@testdbdr1> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
 
Perform a log switch on the PRIMARY database to initiate log transport.
 
SYS@testdb2> alter system switch logfile;
The same information can be verified by running the SQLs below on the standby database.
SYS@testdbdr1> select * from v$archive_gap;
SYS@testdbdr1> select process, client_process, sequence#, status from v$managed_standby;
SYS@testdbdr1> select sequence#, first_time, next_time, applied from v$archived_log;
SYS@testdbdr1> select archived_thread#, archived_seq#, applied_thread#, applied_seq# from v$archive_dest_status;
SYS@testdbdr1> select thread#, max (sequence#) from v$log_history group by thread#;
SYS@testdbdr1> select thread#, max (sequence#) from v$archived_log where APPLIED='YES' group by thread#;
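To interpret the last two queries, compare the highest generated sequence with the highest applied sequence per thread; the difference is the apply gap. The sketch below uses made-up sequence numbers purely to show the arithmetic:

```shell
#!/bin/sh
# Sketch with hypothetical values: ARCHIVED would come from v$log_history,
# APPLIED from v$archived_log where APPLIED='YES'.
THREAD1_ARCHIVED=1042; THREAD1_APPLIED=1040
THREAD2_ARCHIVED=987;  THREAD2_APPLIED=987

GAP1=$(( THREAD1_ARCHIVED - THREAD1_APPLIED ))
GAP2=$(( THREAD2_ARCHIVED - THREAD2_APPLIED ))
echo "thread 1 apply gap: ${GAP1} log(s)"
echo "thread 2 apply gap: ${GAP2} log(s)"
```

A small, steady gap during heavy redo generation is normal; a growing gap means transport or apply needs investigation.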

6. Create the spfile (on oraclehoststdA) and start the database on both nodes using the spfile

SYS@testdbdr1> create spfile='+DATADR/testdbDR/spfiletestdbdr.ora' from pfile='/backup/testdb/pfile_for_standby.ora';
File created.
After creating the spfile, create the init.ora files below under $ORACLE_HOME/dbs on both DR nodes, containing only the SPFILE entry, so that the instances start with the newly created spfile.
[testdbdr1 oracle@oraclehoststdA ~]$ cd /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/
[testdbdr1 oracle@oraclehoststdA ~]$ vi inittestdbdr1.ora
SPFILE='+DATADR/testdbDR/spfiletestdbdr.ora'  -> Add this entry in inittestdbdr1.ora
[testdbdr2 oracle@oraclehoststdB ~]$ cd /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/
[testdbdr2 oracle@oraclehoststdB ~]$ vi inittestdbdr2.ora
SPFILE='+DATADR/testdbDR/spfiletestdbdr.ora'  -> Add this entry in inittestdbdr2.ora
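Creating the two pointer init files can be scripted as below. DBS_DIR is a temporary stand-in for $ORACLE_HOME/dbs so the sketch is self-contained; the SPFILE path is the one used in this post:

```shell
#!/bin/sh
# Sketch: generate init<SID>.ora pointer files for both DR instances.
DBS_DIR=$(mktemp -d)   # stand-in for $ORACLE_HOME/dbs
SPFILE_ENTRY="SPFILE='+DATADR/testdbDR/spfiletestdbdr.ora'"

for SID in testdbdr1 testdbdr2; do
  echo "${SPFILE_ENTRY}" > "${DBS_DIR}/init${SID}.ora"
  echo "wrote ${DBS_DIR}/init${SID}.ora"
done
```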

7. Register the New Standby database (either oraclehoststdA or oraclehoststdB) to the OCR(Oracle Cluster Registry)

[testdbdr1 oracle@oraclehoststdA ~]$ srvctl add database -d testdbdr -n testdb -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -p +DATADR/testdbDR/spfiletestdbdr.ora -r physical_standby -a DATADR,RECODR
[testdbdr1 oracle@oraclehoststdA ~]$ srvctl add instance -d testdbdr -i testdbdr1  -n oraclehoststdA
[testdbdr1 oracle@oraclehoststdA ~]$ srvctl add instance -d testdbdr -i testdbdr2 -n oraclehoststdB
[testdbdr1 oracle@oraclehoststdA ~]$ srvctl modify database -d testdbdr -s mount

Stop the Managed Recovery Process:
SYS@testdbdr1>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Shutdown DR Database :
SYS@testdbdr1> shutdown immediate
Start DR Database with Cluster command:
[testdbdr1 oracle@oraclehoststdA ~]$ srvctl start database -d testdbdr
 
Verify the configuration of standby database (either oraclehoststdA or oraclehoststdB) testdbdr.
 
[testdbdr1 oracle@oraclehoststdA ~]$ srvctl config database -d testdbdr
Start the Managed Recovery Process:
SYS@testdbdr1> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Create the Standby Redo Logs (SRLs) on Standby:
The DUPLICATE DATABASE command replicated the same number of online redo logs and standby redo logs from the primary database to the standby database, so they did not need to be created here.

8. Change the Protection Mode


==========================================================

Please check our other blogs for Upgrade, Dataguard, Standby & DR.


