Oracle Database Migration using Oracle GoldenGate
For an Oracle DBA, data migration is a common day-to-day activity. You may need to migrate data between different versions of Oracle or between two different platforms, such as from Windows Server to Linux. GoldenGate is used mostly for zero-downtime database migration. In this blog, we will share the steps to migrate data from one database to another using Oracle GoldenGate.
High Level Steps:
1. Run the GoldenGate Profiler
2. Identify the schemas that will be migrated
3. Enable supplemental and force logging
4. Install Oracle GoldenGate 11g/12c for the database being migrated
Setup Steps (Source):
1. Create the GoldenGate user in the Oracle database
2. Create the capture extract and pumps
3. Get the current_scn
4. Begin the export of the database
Setup Steps (Target):
1. Install the Oracle GoldenGate software
2. Create the Oracle GoldenGate user
3. Create the replicat
4. Import the data using Data Pump
5. Start the replicat
Pre-Steps:
GoldenGate Profiler:
The Oracle GoldenGate Profiler is a tool used to check an Oracle database for anything that may cause issues with migration or replication. The Profiler is a SQL script that is run from SQL*Plus and produces a text-based output that can be reviewed before the migration begins.
Download the Profiler from My Oracle Support (MOS): MOS Note 1298562.1
Once the Profiler is downloaded, it is run using SQL*Plus:
1. Copy the Profiler script to a temp directory
2. Start SQL*Plus and run the Profiler script
$ sqlplus / as sysdba
SQL> spool /tmp/full-DB_CheckOracle_<date>.out
SQL> @full-DB_CheckOracle_11022014.sql
Review the output from the Profiler script to determine whether anything needs to be changed before the migration begins.
Migrating Schemas:
Before any migration work can be done, you must know which schemas need to be migrated. Understanding the size and number of objects per schema will aid the planning for the migration; the queries in the Appendix can be used to gather this information.
Supplemental and Force Logging:
For GoldenGate to capture changes, the database must record additional information for every transaction, which means supplemental logging needs to be enabled. The following SQL commands identify whether supplemental logging is already enabled and, if not, enable it for the whole database.
1. Start SQL*Plus and check supplemental logging
$ sqlplus / as sysdba
2. Check v$database to ensure that supplemental and force logging are enabled
SQL> select supplemental_log_data_min, force_logging from v$database;
If the results come back as “NO”, then supplemental and force logging need to be enabled for the database. To do this, ALTER DATABASE commands have to be issued.
3. Start SQL*Plus
$ sqlplus / as sysdba
4. Alter the database to enable supplemental and force logging
SQL> alter database add supplemental log data;
SQL> alter database force logging;
5. Check v$database to ensure that supplemental and force logging are enabled
SQL> select supplemental_log_data_min, force_logging from v$database;
Upgrade GoldenGate on Source / Install Oracle GoldenGate on Target:
Depending on the version of the database being migrated, the latest version of Oracle GoldenGate needs to be downloaded from edelivery.oracle.com or from My Oracle Support (patch sets). The installation process takes about 10 to 15 minutes to accomplish. Steps to install Oracle GoldenGate will not be discussed in this document. We plan to use version 12.1.2.1.5 for this migration.
Ensure that sequence support has been added to both the source and the target.
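If sequences need to be replicated, a typical way to add sequence support is sketched below; the sequence.sql script ships with GoldenGate, but the schema name is an assumption based on the GGATE user created later in this document, so verify the exact steps against the documentation for the GoldenGate version in use.
$ cd $OGG_HOME
$ sqlplus / as sysdba
SQL> @sequence.sql
-- supply the GoldenGate schema (e.g. GGATE) when prompted
Then add the same schema to the GLOBALS file so GoldenGate can locate the sequence support objects:
$ vi $OGG_HOME/GLOBALS
GGSCHEMA ggate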
Setup Steps (Source):
Create GoldenGate User on Target, verify user on Source:
Oracle GoldenGate needs a database user that owns the objects created for use with Oracle GoldenGate. This user is typically named GGATE or OGG_USER.
1. Start SQL*Plus
$ sqlplus / as sysdba
2. Create the Oracle GoldenGate user and assign permissions
SQL> create user GGATE
  identified by ggate
  default tablespace USERS
  temporary tablespace TEMP
  quota unlimited on USERS
  account unlock
/
grant create session, alter session, resource, connect, select any dictionary, flashback any table, select any table to ggate
/
grant select on dba_clusters to ggate
/
grant execute on dbms_flashback to ggate
/
grant select any transaction to ggate
/
grant lock any table to ggate
/
grant alter system to ggate
/
grant insert any table, update any table, delete any table to ggate
/
grant create table to ggate
/
grant dba to ggate
/
alter user ggate default role dba, connect, resource
/
Create Extract and Pumps:
To start the migration, changes need to be captured into trail files. To do this, Oracle GoldenGate extract and pump processes need to be created.
1. Start the GoldenGate Software Command Interface (GGSCI)
$ cd $OGG_HOME
$ ./ggsci
2. Create the extract(s) that will be needed to capture data from the source database. Obey files will be used to create these objects and will be distributed in a second document. The DBLOGREADER extract API will be used to read the source data. Due to the different ENDIAN formats, endianness conversion will happen during the EXTRACT process.
3. Create the pump(s) that will be needed to send the trail files to the target servers. Trail network compression will be used.
4. Ensure that the target destination directory exists prior to starting the pump. An example extract and pump configuration is sketched after this list.
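For illustration only (the actual obey files are distributed in a separate document), a minimal extract and pump configuration might look like the following. The process names (EXTM01, PMPM01), trail prefixes, schema name, and target host are placeholders, not values from this migration.
$ vi dirprm/extm01.prm
EXTRACT EXTM01
USERID ggate, PASSWORD ggate
TRANLOGOPTIONS DBLOGREADER
EXTTRAIL ./dirdat/em
TABLE SCHEMA1.*;
$ vi dirprm/pmpm01.prm
EXTRACT PMPM01
USERID ggate, PASSWORD ggate
RMTHOST <target_host>, MGRPORT 7809, COMPRESS
RMTTRAIL ./dirdat/pm
TABLE SCHEMA1.*;
GGSCI> dblogin userid ggate, password ggate
GGSCI> add extract EXTM01, tranlog, begin now
GGSCI> add exttrail ./dirdat/em, extract EXTM01
GGSCI> add extract PMPM01, exttrailsource ./dirdat/em
GGSCI> add rmttrail ./dirdat/pm, extract PMPM01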
Capture Current SCN:
To ensure that all transactions are captured up to a certain point and to help with instantiation, the CURRENT_SCN needs to be captured from v$database.
1. Start SQL*Plus
$ sqlplus / as sysdba
2. Capture the current SCN from the database
SQL> select current_scn from v$database;
Consistent Export:
To move the data over to the target side, a consistent export needs to be taken using Oracle Data Pump. The key thing in this process is to ensure that FLASHBACK_SCN is used with the export.
Note: FLASHBACK_SCN is set to the CURRENT_SCN captured from v$database.
1. Create a parameter file that will be used with the Oracle Data Pump export
$ vi gg_migration_10g.par
DUMPFILE=EXPORTDIR:"expdp_gg_full_<db_name>_%U.dmp"
LOGFILE=LOGDIR:"expdp_gg_full_<db_name>.log"
PARALLEL=16
FILESIZE=20G
JOB_NAME='EXPDP_GG_FULL_<db_name>'
COMPRESSION=ALL
CONTENT=ALL
REUSE_DUMPFILES=Y
FULL=Y
FLASHBACK_SCN=<current_scn>
2. Ensure that the directory referenced in the parameter file exists. If not, create it and grant read/write permissions to SYS, SYSTEM, and the GoldenGate user. Create a directory to write the dump files to ASM and the log to a local directory.
$ sqlplus / as sysdba
SQL> select * from dba_directories;
If the desired directories are not present, create them with the following:
SQL> create or replace directory EXPORTDIR as '+DATA_DG1';
SQL> create or replace directory LOGDIR as '/orabkup/ggate/exports';
3. Run the export until it is completed
$ expdp <user>@<tns names> parfile=<parfile>
Setup Steps (Target):
Install Oracle GoldenGate:
The correct version of Oracle GoldenGate needs to be downloaded from edelivery.oracle.com or from My Oracle Support (patch sets). The installation process takes about 10 to 15 minutes to accomplish. Steps to install Oracle GoldenGate will not be discussed in this document.
Create GoldenGate User:
Oracle GoldenGate needs a database user that owns the objects created for use with Oracle GoldenGate. This user is typically named GGATE or OGG_USER.
1. Start SQL*Plus
$ sqlplus / as sysdba
2. Create the Oracle GoldenGate user and assign permissions
SQL> create user GGATE
  identified by ggate
  default tablespace USERS
  temporary tablespace TEMP
  quota unlimited on USERS
  account unlock
/
grant create session, alter session, resource, connect, select any dictionary, flashback any table, select any table to ggate
/
grant select on dba_clusters to ggate
/
grant execute on dbms_flashback to ggate
/
grant select any transaction to ggate
/
grant lock any table to ggate
/
grant alter system to ggate
/
grant insert any table, update any table, delete any table to ggate
/
grant create table to ggate
/
grant dba to ggate
/
alter user ggate default role dba, connect, resource
/
Pre-create other required objects (tablespaces / roles):
1. Tablespaces
2. Roles
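As an illustration only, the required tablespaces and roles could be pre-created with DDL along these lines; the tablespace name, datafile location, size, and role name below are placeholders, not values from this migration.
$ sqlplus / as sysdba
SQL> create tablespace APP_DATA datafile '+DATA_DG1' size 10g autoextend on;
SQL> create role APP_ROLE;
SQL> grant create session to APP_ROLE;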
Create Replicats:
To apply the captured data on the target, Oracle GoldenGate replicat processes need to be created to read the trail files delivered by the pumps.
1. Start the GoldenGate Software Command Interface (GGSCI)
$ cd $OGG_HOME
$ ./ggsci
2. Create the replicat(s) that will be needed to apply data to the target database. We will use a pre-defined obey file to create the replicats.
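For illustration only (the actual obey file is pre-defined), a minimal replicat configuration might look like the following. The process name REPM01, trail prefix, checkpoint table name, and schema mapping are placeholders.
$ vi dirprm/repm01.prm
REPLICAT REPM01
USERID ggate, PASSWORD ggate
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/repm01.dsc, append
MAP SCHEMA1.*, TARGET SCHEMA1.*;
GGSCI> dblogin userid ggate, password ggate
GGSCI> add checkpointtable ggate.checkpoint
GGSCI> add replicat REPM01, exttrail ./dirdat/pm, checkpointtable ggate.checkpoint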
Copy Data from SOURCE to TARGET:
1. Utilize either multiple DBMS_FILE_TRANSFER processes or multiple SCP processes, depending on which performs better.
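As a rough sketch, a dump file could be pushed from the source to the target with DBMS_FILE_TRANSFER (appropriate when the dump files live in ASM), or copied with scp when they sit on a regular filesystem. The directory objects, database link name, file name, host, and paths below are placeholders.
SQL> begin
  dbms_file_transfer.put_file(
    source_directory_object      => 'EXPORTDIR',
    source_file_name             => '<dump_file_name>',
    destination_directory_object => 'IMPORTDIR',
    destination_file_name        => '<dump_file_name>',
    destination_database         => '<db_link_to_target>');
end;
/
$ scp /orabkup/ggate/exports/<dump_file_name> oracle@<target_host>:/orabkup/ggate/imports/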
Import Data:
Importing the data from the consistent export instantiates the target database at the captured SCN, so the replicats know the point from which the trail data is to be applied.
1. Create a parameter file that will be used with the Oracle Data Pump import
DUMPFILE=IMPORTDIR:"expdp_gg_full_<db_name>_%U.dmp"
LOGFILE=LOGDIR:"expdp_gg_full_<db_name>.log"
JOB_NAME="IMPDP_GG_FULL_<db_name>"
SCHEMAS=SCHEMA1,SCHEMA2,SCHEMA3...
TABLE_EXISTS_ACTION=REPLACE
EXCLUDE=TABLE/INDEX
EXCLUDE=TABLE/INDEX/STATISTICS
EXCLUDE=STATISTICS
Perform additional analysis around excluding additional indexes and building them after the import completes; one way to capture the index DDL for a deferred build is sketched after these steps.
2. Ensure that the directory referenced in the parameter file exists. If not, create it and grant read/write permissions to SYS, SYSTEM, and the GoldenGate user.
$ sqlplus / as sysdba
SQL> select * from dba_directories;
If the desired directories are not present, create them with the following:
SQL> create or replace directory IMPORTDIR as '+DATA_DG1';
SQL> create or replace directory LOGDIR as '/orabkup/ggate/exports';
3. Run the import until it is completed
$ impdp <user>@<tns names> parfile=<parfile>
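Since the indexes are excluded from the import, one possible approach (not necessarily the one used here) is to extract the index DDL from the source with DBMS_METADATA and run it on the target after the import completes. The schema name SCHEMA1 and the spool file are placeholders.
$ sqlplus / as sysdba
SQL> set long 1000000 pagesize 0 linesize 200 trimspool on
SQL> spool /tmp/schema1_indexes.sql
SQL> select dbms_metadata.get_ddl('INDEX', index_name, owner) from dba_indexes where owner = 'SCHEMA1';
SQL> spool off
Review the spooled script, then run it on the target database once the import has finished.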
Start Replicats:
To begin syncing the data between the databases, the replicats need to be started. To start the replicat and make sure that it starts at the correct point, the AFTERCSN option of START REPLICAT will be used.
1. Start the GoldenGate Software Command Interface (GGSCI)
$ cd $OGG_HOME
$ ./ggsci
2. Start the replicat at the CURRENT_SCN
GGSCI> start replicat REPM## aftercsn <current_scn>
3. Verify that the replicat has started. If the replicat abends, then review the ggserr.log and the associated report file to figure out why the abend happened.
GGSCI> info replicat REPM##
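If a replicat does abend, the report file and the GoldenGate error log can also be reviewed directly from GGSCI; REPM## is the same placeholder group name used above.
GGSCI> view report REPM##
GGSCI> view ggsevt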
Appendix
SQL Used:
Migration Schema Info:
Tables:
select a.username, count(c.segment_name) as "#TABLES", round(sum(c.bytes)/(1024*1024*1024),2) as "SIZE_GB"
from dba_users a, dba_segments c
where a.username = c.owner
and a.username in ('ARS','CS_CLINICAL','DIB3_1','DIB3_AUX','EAS','EPHARM','EPS','FDB','FDB2','FDB_1','FDB_2','KPDBA1','MDS','REGISTRATION_LOG','SBMO','SEC_ADMIN','SOX','TSMSYS','USPS_1','USPS_2','VPDADM','WL_LINK_QUEUE')
and c.segment_type in ('TABLE', 'TABLE PARTITION')
group by rollup (a.username)
order by a.username;
Indexes:
select a.username, count(c.segment_name) as "#INDEXES", round(sum(c.bytes)/(1024*1024*1024),2) as "SIZE_GB"
from dba_users a, dba_segments c
where a.username = c.owner
and a.username in ('ARS','CS_CLINICAL','DIB3_1','DIB3_AUX','EAS','EPHARM','EPS','FDB','FDB2','FDB_1','FDB_2','KPDBA1','MDS','REGISTRATION_LOG','SBMO','SEC_ADMIN','SOX','TSMSYS','USPS_1','USPS_2','VPDADM','WL_LINK_QUEUE')
and c.segment_type in ('INDEX', 'INDEX PARTITION')
group by rollup (a.username)
order by a.username;
You can learn more about Oracle database administration here:
Pro Oracle GoldenGate for the DBA
Oracle Security Tools Installation and Configuration
Please check our other blogs for more on migration and GoldenGate.