Oracle 11g: Copy tables to another database, using Data Pump and bash

Times are tough nowadays. You can’t rely on a single instance of your database, and “redundancy” is probably the word of the century, but the truth is: you have to keep a copy of all your data somewhere, and you need to keep it up to date. Of course, there are dedicated solutions like Oracle Streams, Data Guard and so on, but sometimes you need a much simpler yet powerful and reliable way to get a copy of your data. For example, you may need a separate analytic database for reports, since you don’t want to let muggles into your main database and you are not happy with the load they put on your precious instance. If so, this story is for you.

Well, you happen to have tables in one Oracle database which you want to duplicate to another. The latter will be your workplace, and the source database will be the remote one.

Also, the source database keeps its tables in several different schemas, while the target one has a single schema, different from all of them, and you want to store all the copied tables in that one schema. Moreover, the source DB objects are placed in various tablespaces, but locally you want to use your own.

The outline of the process will be:

  • Connect to the remote DB
  • Export the needed table using Data Pump
  • Check whether a table with the same name and a temporary import table already exist locally
  • If the local table does not exist, create its structure using the definitions from the exported data, and copy that structure to the temporary table using Data Pump
  • Truncate the temporary table
  • Import data from the export file into the temporary table using Data Pump
  • Merge the temporary table into the persistent one using the ‘id’ field or, if ‘id’ does not exist, a given field
  • And, of course, log everything

The final result will be a script which takes the schema and table name as a parameter and does all the other steps on its own. We’ll run it via the cron daemon, and it will create our tables and keep them up to date.

So, let’s start!

First, you’ll need to add a record for that remote database to your TNS file.

Add something like this:
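A typical tnsnames.ora entry might look like the following sketch; the alias REMOTEDB, the host, port and service name are placeholders for your remote database’s actual values:

```
REMOTEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = remote-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = remotedb)
    )
  )
```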

Then create a database link inside your DB, using the name you set in the TNS file. I used a public one, but it’s up to you.
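A sketch of the link creation, assuming the TNS alias above; remote_user and remote_password are placeholders for an account on the remote database:

```sql
-- A public link is visible to all local users; drop PUBLIC for a private one
CREATE PUBLIC DATABASE LINK remotedb
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'REMOTEDB';
```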

Secondly, create or choose a directory to store your temporary import/export files in (say, mkdir /u01/app/oracle/oradata/export/) and register it in your database (run the SQL command CREATE DIRECTORY export_dir AS '/u01/app/oracle/oradata/export';).

Here export_dir is an alias you’ll need later. This second step gives the database access to files and directories on the filesystem.
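If the export and import will run as an ordinary schema rather than a DBA, that schema also needs privileges on the directory object; the reports schema below is a placeholder:

```sql
-- Run as a privileged user: allow the working schema to read and write dump files
GRANT READ, WRITE ON DIRECTORY export_dir TO reports;
```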

Now step three — the script itself.

The first parameter to our script will be the schema and table name (i.e. # script schema.table). So:
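A minimal sketch of the argument handling; the variable names and the optional second argument (the merge key, see merge_tables below) are my assumptions:

```bash
#!/bin/bash
# Usage: ./copy_table.sh schema.table_name [key_column]
FULL_NAME="$1"   # e.g. hr.employees
KEY="${2:-id}"   # column used by the merge step, 'id' by default

if [ -z "$FULL_NAME" ]; then
    echo "Usage: $0 schema.table_name [key_column]" >&2
    exit 1
fi
```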

Since we’ll run it using the cron daemon, we’ll need to set the environment variables ourselves. Do not forget:
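Cron starts jobs with an almost empty environment, so the Oracle variables have to be set explicitly; the paths and SID below are placeholders for your installation:

```bash
# Without these, expdp/impdp and sqlplus won't be found or won't connect
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
```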

Then some useful settings:
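Something along these lines; every value is a placeholder to adjust:

```bash
LOCAL_SCHEMA=reports            # the single local schema that owns all copied tables
PASSWORD=secret                 # its password
LOCAL_TABLESPACE=reports_data   # local tablespace to remap objects into
DB_LINK=remotedb                # the database link created earlier
EXPORT_DIR=export_dir           # the directory object registered in the database
DUMP_PATH=/u01/app/oracle/oradata/export   # the same directory on the filesystem
TMP_PREFIX=tmp_                 # prefix for the temporary import table
```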

You can find the final version of the script at the end of the page, but here I’ll try to explain its parts.

Essential functions

Function export_table.

You’ll have to pass two parameters to it. The first one is the table name with the schema (i.e. schema.table_name) as it is called in the remote database you want to export from. The second one is a keyword, either «data_only» or «metadata_only»: data_only forces the script to export only the rows, while metadata_only exports only the DDL needed to create the table from scratch. This function removes the old dump file __table_name.dmp__ from the export directory, then queries the remote DB, exports the data with Data Pump to a brand new __table_name.dmp__, writes a log to the same directory and quits. You can run it stand-alone to make dumps, by the way.
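A sketch of how export_table might look, using expdp’s NETWORK_LINK parameter to pull the table over the database link; the variable names come from the settings sketch above:

```bash
export_table () {
    local full_name="$1"   # schema.table_name on the remote side
    local content="$2"     # data_only or metadata_only
    local table; table=$(extract_table "$full_name")

    # Remove the previous dump: expdp refuses to overwrite an existing file
    rm -f "$DUMP_PATH/${table}.dmp"

    expdp "$LOCAL_SCHEMA/$PASSWORD" \
        NETWORK_LINK="$DB_LINK" \
        TABLES="$full_name" \
        CONTENT="$content" \
        DIRECTORY="$EXPORT_DIR" \
        DUMPFILE="${table}.dmp" \
        LOGFILE="${table}_exp.log"
}
```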

Function import_table.

Again, it takes schema.table_name, plus a prefix for the table being created. It splits the schema from the table_name, then imports the data from the previously created dump file, remapping the tablespaces to the appropriate new ones and the schema name to the new one (you set these up in the settings earlier).
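A sketch of import_table; REMAP_SCHEMA, REMAP_TABLE and REMAP_TABLESPACE are standard impdp options, and the source tablespace users is a placeholder (repeat the remap for each tablespace the source objects live in):

```bash
import_table () {
    local full_name="$1" prefix="$2"
    local schema table remap_table=""
    schema=$(extract_schema "$full_name")
    table=$(extract_table "$full_name")
    # Only remap the table name when a prefix was given
    [ -n "$prefix" ] && remap_table="REMAP_TABLE=${table}:${prefix}${table}"

    impdp "$LOCAL_SCHEMA/$PASSWORD" \
        DIRECTORY="$EXPORT_DIR" \
        DUMPFILE="${table}.dmp" \
        LOGFILE="${table}_imp.log" \
        REMAP_SCHEMA="${schema}:${LOCAL_SCHEMA}" \
        REMAP_TABLESPACE="users:${LOCAL_TABLESPACE}" \
        TABLE_EXISTS_ACTION=TRUNCATE \
        $remap_table
}
```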

Function merge_tables.

Merges tmp_table_name into table_name, which lets you keep the old data and add the new. For particular tables you can choose the key field to match on yourself; all others will use the ‘id’ field.
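One way to realize the merge described above is a plain insert of the rows that are not yet present, matching on the key column; this is a simplified sketch, not necessarily the author’s exact statement:

```bash
merge_tables () {
    local table="$1"
    local key="${2:-id}"   # 'id' by default, or a caller-supplied column

    sql_query "INSERT INTO ${LOCAL_SCHEMA}.${table}
               SELECT * FROM ${LOCAL_SCHEMA}.${TMP_PREFIX}${table} s
               WHERE NOT EXISTS (
                   SELECT 1 FROM ${LOCAL_SCHEMA}.${table} t
                   WHERE t.${key} = s.${key}
               );"
}
```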

Auxiliary functions

Functions extract_*

They simply parse the string «schema.table_name» and return either «schema» or «table_name».
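With bash parameter expansion these are one-liners; a possible sketch:

```bash
extract_schema () { echo "${1%%.*}"; }   # everything before the first dot
extract_table  () { echo "${1#*.}"; }    # everything after it
```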

Function sql_query.

Allows you to run queries through sqlplus and get their return codes.
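A common way to build such a helper is a silent sqlplus session fed from a here-document; WHENEVER SQLERROR turns SQL failures into a non-zero shell exit code. A sketch:

```bash
sql_query () {
    sqlplus -s "$LOCAL_SCHEMA/$PASSWORD" <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
SET HEADING OFF FEEDBACK OFF
$1
EXIT
EOF
}
```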

Function clear_tmp_table.

Truncates the temporary tmp_table after the new data has been inserted, just to save datafile space. The temporary table is truncated before the import as well, with a little help from impdp in the import_table function.
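Which, with the helpers above, might be as small as this sketch:

```bash
clear_tmp_table () {
    # Release the space held by the temporary table until the next run
    sql_query "TRUNCATE TABLE ${LOCAL_SCHEMA}.${TMP_PREFIX}$1;"
}
```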

The logic:
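Putting the pieces together, the main flow might look like the sketch below; table_exists is a hypothetical helper of my own, built on sql_query, since the post does not name the check:

```bash
# Hypothetical existence check: count the table in the local schema
table_exists () {
    local cnt
    cnt=$(sql_query "SELECT COUNT(*) FROM user_tables WHERE table_name = UPPER('$1');")
    [ "$(echo "$cnt" | tr -dc '0-9')" != "0" ]
}

TABLE=$(extract_table "$FULL_NAME")

# First run for this table: create it and its temporary twin from the DDL
if ! table_exists "$TABLE"; then
    export_table "$FULL_NAME" metadata_only
    import_table "$FULL_NAME" ""             # the persistent table
    import_table "$FULL_NAME" "$TMP_PREFIX"  # its temporary copy
fi

# Every run: pull fresh data, load the temporary table, merge, clean up
export_table "$FULL_NAME" data_only
import_table "$FULL_NAME" "$TMP_PREFIX"      # TABLE_EXISTS_ACTION=TRUNCATE empties it first
merge_tables "$TABLE" "$KEY"
clear_tmp_table "$TABLE"
```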

And now, when the script is ready, let’s simply add it to cron and run it for every table we need.
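For example, a crontab like this (paths, schedule and table names are placeholders) refreshes two tables every night:

```bash
# crontab -e
15 2 * * * /u01/scripts/copy_table.sh hr.employees >> /var/log/copy_table.log 2>&1
30 2 * * * /u01/scripts/copy_table.sh sales.orders order_id >> /var/log/copy_table.log 2>&1
```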

That’s it!

Some things, such as e-mail warnings, space monitoring and so on, I’ll leave unmentioned, since that’s another story.

Here’s the final version of the script:
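The complete listing, assembled from the sketches above, might look like this; treat it as a reconstruction with placeholder names, paths and passwords rather than a verbatim copy:

```bash
#!/bin/bash
# copy_table.sh: copy one remote table into the local analytic database
# Usage: ./copy_table.sh schema.table_name [key_column]

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH

LOCAL_SCHEMA=reports            # placeholder settings, as above
PASSWORD=secret
LOCAL_TABLESPACE=reports_data
DB_LINK=remotedb
EXPORT_DIR=export_dir
DUMP_PATH=/u01/app/oracle/oradata/export
TMP_PREFIX=tmp_

extract_schema () { echo "${1%%.*}"; }
extract_table  () { echo "${1#*.}"; }

sql_query () {
    sqlplus -s "$LOCAL_SCHEMA/$PASSWORD" <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
SET HEADING OFF FEEDBACK OFF
$1
EXIT
EOF
}

table_exists () {
    local cnt
    cnt=$(sql_query "SELECT COUNT(*) FROM user_tables WHERE table_name = UPPER('$1');")
    [ "$(echo "$cnt" | tr -dc '0-9')" != "0" ]
}

export_table () {   # $1 = schema.table, $2 = data_only | metadata_only
    local table; table=$(extract_table "$1")
    rm -f "$DUMP_PATH/${table}.dmp"
    expdp "$LOCAL_SCHEMA/$PASSWORD" NETWORK_LINK="$DB_LINK" TABLES="$1" \
          CONTENT="$2" DIRECTORY="$EXPORT_DIR" \
          DUMPFILE="${table}.dmp" LOGFILE="${table}_exp.log"
}

import_table () {   # $1 = schema.table, $2 = prefix for the target table
    local schema table remap=""
    schema=$(extract_schema "$1"); table=$(extract_table "$1")
    [ -n "$2" ] && remap="REMAP_TABLE=${table}:${2}${table}"
    impdp "$LOCAL_SCHEMA/$PASSWORD" DIRECTORY="$EXPORT_DIR" \
          DUMPFILE="${table}.dmp" LOGFILE="${table}_imp.log" \
          REMAP_SCHEMA="${schema}:${LOCAL_SCHEMA}" \
          REMAP_TABLESPACE="users:${LOCAL_TABLESPACE}" \
          TABLE_EXISTS_ACTION=TRUNCATE $remap
}

merge_tables () {   # $1 = table, $2 = key column
    local key="${2:-id}"
    sql_query "INSERT INTO ${LOCAL_SCHEMA}.${1}
               SELECT * FROM ${LOCAL_SCHEMA}.${TMP_PREFIX}${1} s
               WHERE NOT EXISTS (SELECT 1 FROM ${LOCAL_SCHEMA}.${1} t
                                 WHERE t.${key} = s.${key});"
}

clear_tmp_table () { sql_query "TRUNCATE TABLE ${LOCAL_SCHEMA}.${TMP_PREFIX}$1;"; }

FULL_NAME="$1"; KEY="${2:-id}"
[ -z "$FULL_NAME" ] && { echo "Usage: $0 schema.table_name [key_column]" >&2; exit 1; }
TABLE=$(extract_table "$FULL_NAME")

if ! table_exists "$TABLE"; then
    export_table "$FULL_NAME" metadata_only
    import_table "$FULL_NAME" ""
    import_table "$FULL_NAME" "$TMP_PREFIX"
fi

export_table "$FULL_NAME" data_only
import_table "$FULL_NAME" "$TMP_PREFIX"
merge_tables "$TABLE" "$KEY"
clear_tmp_table "$TABLE"
```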

 
