
Wiki Page: Database in the Cloud Operations: Moving data between different Oracle RDS instances

Written by Wissem EL-Khlifi

Introduction

Before diving into the world of database operations in the cloud, it is important to understand the concepts and the differences between a database offered as a service in the cloud industry (in particular by Amazon) and a database you fully manage yourself. The Oracle database as a service (Oracle RDS) option from Amazon offers Oracle databases as managed services. These managed services free the Oracle DBA and DevOps engineers from time-consuming database administration tasks, including provisioning, backups, software patching, monitoring, and hardware scaling. The goal is to reduce maintenance, acquisition, and operations costs and to focus on customer service and service quality. Let's see what features Amazon Oracle RDS offers its customers.

The Oracle RDS Features:

By default, Oracle RDS enables the following features:

- Pre-configured Database Parameters: When you choose the Oracle database software edition and the Oracle RDS instance class at the RDS creation step, the Oracle RDS instance comes up with pre-configured database parameters optimized for that deployment.
- Monitoring and Metrics: You can use CloudWatch to set up monitoring alerts, including CPU/memory/storage capacity utilization, I/O activity, and Oracle instance connections.
- Automatic Software Patching: Amazon RDS updates your deployment automatically with the latest patches, ensuring Oracle bug fixes and the inclusion of new features.
- Automated Backups: By default, the Oracle RDS service enables automatic backups and point-in-time recovery for your Oracle DB instance.
- Multi-AZ Deployments: Automatic deployment of the database in a different Availability Zone for Disaster Recovery (DR).

At the time of writing this article, Oracle RDS supports the following:

- Oracle 11g: versions 11.2.0.4 and later.
- Oracle 12c: versions 12.1.0.2 and later.

As a database administrator, you may ask how your role is affected in an RDS environment.

The DevOps approach

DevOps is an approach that brings development and operations into a single profile. The goal is to assure the quality of the delivered stack by avoiding unnecessary effort on operational tasks that can be automated and fully managed by Oracle RDS. Read more about IaaC here: http://www.toadworld.com/platforms/oracle/w/wiki/11715.iaac-for-devops-infrastructure-automation-using-aws-cloudformation

Since the service is maintained by Amazon (AWS), DevOps engineers are not given all of the configuration settings and operations they are used to having in a self-managed Oracle database. For example, they have no system access to the Oracle RDS database server. As an Oracle DBA, operating in an Oracle RDS environment leaves you with very few options and tools. It is true that an Oracle RDS instance is, in the end, a normal Oracle database, but some operations have to be done differently. For example, you won't use the DBCA tool to create an Oracle RDS instance; instead, as a DBA or DevOps engineer, you use the AWS console or the Amazon command line tool (AWS CLI). In the following section of the article, we are going to show you how to move data between two RDS instances given that we don't have access to the database servers.

Oracle RDS Operations: Moving data between different Oracle RDS instances

Because an Oracle RDS environment provides no system access to the database server, we have two options for using the Oracle Data Pump tool to move data and schemas between two different Oracle RDS instances:

- Use a SQL client tool such as Toad or SQL Developer. You can refer to the following article if you wish to use Toad for Oracle as a tool for Oracle Data Pump: http://documents.software.dell.com/toad-for-oracle/12.1/guide-to-using-toad-for-oracle/chapter-2-toad-without-code/import-and-export-data/export-with-data-pump
- Use PL/SQL and the DBMS_DATAPUMP package to move all, or part of, a database between databases, including both data and metadata.

In this section, we will use the DBMS_DATAPUMP package.

Prerequisites:

- We suppose we have two Oracle RDS instances created and have the master user account and passwords. We suppose the master account is called appadmin.
- We will move a schema called WIND between the two Oracle RDS instances, and we suppose that this activity is repetitive.
- SOURCEDB is the name of the Oracle RDS instance from which we export the data schema; TARGETDB is the name of the Oracle RDS instance into which we import it.
- Network connectivity exists between SOURCEDB and TARGETDB, in the same VPC (Amazon Virtual Private Cloud) and within the same VPC security group.
- Make sure you have the DATA_PUMP_DIR Oracle directory object on both Oracle RDS instances. Use the DBA_DIRECTORIES view to check, as shown in the query below.
- The steps have been tested on 12c Oracle RDS instances.
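Before going further, a quick sanity check along these lines (a minimal sketch) confirms the directory object on each instance:

-- Run as appadmin on both SOURCEDB and TARGETDB.
-- DATA_PUMP_DIR is created by default on Amazon RDS for Oracle.
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'DATA_PUMP_DIR';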
Create and execute the export procedure on SOURCEDB:

The first step involves creating an export procedure on the SOURCEDB RDS database. You can use the Toad for Oracle tool to create the procedure DB_PROC_EXPDP_SCHEMA.

CREATE OR REPLACE PROCEDURE appadmin.DB_PROC_EXPDP_SCHEMA IS
  V_H1          NUMBER;          -- Data Pump job handle
  V_DIR_NAME    VARCHAR2(30);
  V_SCHEMA_NAME VARCHAR2(30) := 'WIND';
BEGIN
  V_DIR_NAME := 'DATA_PUMP_DIR';
  V_H1 := DBMS_DATAPUMP.OPEN(OPERATION => 'EXPORT',
                             JOB_MODE  => 'SCHEMA',
                             VERSION   => 'COMPATIBLE');
  -- Dump file for the job
  DBMS_DATAPUMP.ADD_FILE(HANDLE    => V_H1,
                         FILENAME  => 'exp_' || V_SCHEMA_NAME || '.dmp',
                         REUSEFILE => 1,   -- REUSE_DUMPFILES=Y
                         DIRECTORY => V_DIR_NAME,
                         FILETYPE  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  -- Log file for the job
  DBMS_DATAPUMP.ADD_FILE(HANDLE    => V_H1,
                         FILENAME  => 'exp_' || V_SCHEMA_NAME || '.log',
                         DIRECTORY => V_DIR_NAME,
                         FILETYPE  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Filter for the schema
  DBMS_DATAPUMP.METADATA_FILTER(HANDLE => V_H1,
                                NAME   => 'SCHEMA_LIST',
                                VALUE  => '''' || V_SCHEMA_NAME || '''');
  DBMS_DATAPUMP.START_JOB(HANDLE => V_H1);
  DBMS_DATAPUMP.DETACH(HANDLE => V_H1);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error DB_PROC_EXPDP_SCHEMA:::' || SQLERRM);
END;
/
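Once the procedure is compiled, a run can look like the following (a sketch: the Data Pump job name is system-generated, and the OWNER_NAME filter assumes the job runs as APPADMIN):

SQL> exec appadmin.DB_PROC_EXPDP_SCHEMA;

-- The job keeps running in the background after DETACH;
-- poll this dictionary view until the job completes.
SQL> SELECT job_name, operation, job_mode, state
       FROM dba_datapump_jobs
      WHERE owner_name = 'APPADMIN';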
Let's explain the list of procedures and functions invoked from this PL/SQL procedure:

DBMS_DATAPUMP.OPEN: This function declares a new job using the Data Pump API. We specified the type of operation used in this procedure (export). The JOB_MODE parameter, set to 'SCHEMA', indicates a job over a set of selected schemas; it defaults to the schema of the current user, but here we select the WIND schema. All objects in the selected schemas are processed. In SCHEMA mode, you cannot specify Oracle-internal schemas (for example, SYS, XDB, ORDSYS, MDSYS, CTXSYS, ORDPLUGINS, or LBACSYS). The VERSION value 'COMPATIBLE' means the version of the metadata corresponds to the database compatibility level and the compatibility release level for features (as given in the V$COMPATIBILITY view). Database compatibility must be set to 9.2 or higher.

DBMS_DATAPUMP.ADD_FILE: This procedure adds files to the dump file set for an export or import operation, or specifies the log file or the output file for a SQL_FILE operation. In the DB_PROC_EXPDP_SCHEMA procedure, we specified exp_WIND.dmp as the export file name. The REUSEFILE parameter works as follows: if 0, a preexisting file causes an error; if 1, a preexisting file is overwritten; if NULL, the default action for the file type is applied (that is, dump files are not overwritten). This parameter should only be non-NULL for dump files and is restricted to export jobs. In this case, we set it to 1. The FILETYPE parameter specifies the type of the file to be added and must be one of the following:
- KU$_FILE_TYPE_DUMP_FILE (dump file for a job)
- KU$_FILE_TYPE_LOG_FILE (log file for a job)
- KU$_FILE_TYPE_SQL_FILE (output for a SQL_FILE job)

DBMS_DATAPUMP.METADATA_FILTER: This procedure provides filters that allow you to restrict the items included in a job. In this case, we restricted the export to the WIND schema only.

After creating and executing the DB_PROC_EXPDP_SCHEMA procedure on the SOURCEDB RDS database, we need a way to access and read the export log file to check for any errors in the export execution.

Check the export on SOURCEDB:

We can create an Oracle external table to read the export log file; you can then query the external table for any errors in the export. Let's create the external table:

CREATE TABLE appadmin.exp_wind_ext
(
  message VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY DATA_PUMP_DIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '[#EOC#]'
    MISSING FIELD VALUES ARE NULL
    (
      message CHAR(4000)
    )
  )
  LOCATION ('exp_WIND.log')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;

Create a database link between the two RDS instances:

Network connectivity between the two Oracle RDS instances is required to move the export file from SOURCEDB to TARGETDB. From SOURCEDB, create the following database link:

create database link DBLINK_TO_TARGETDB
connect to appadmin identified by xxx
using '//xxxx.us-east-1.rds.amazonaws.com:1521/TARGETDB';

Transfer the Data Pump export from SOURCEDB to TARGETDB:

Run the following PL/SQL block to transfer the dump file between the two RDS instances:

SQL> set serveroutput on
SQL>
SQL> BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'exp_WIND.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'exp_WIND.dmp',
    destination_database         => 'DBLINK_TO_TARGETDB');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error::: ' || SQLERRM);
END;
/

PL/SQL procedure successfully completed

SQL>

Run the import Data Pump on TARGETDB:

Before running the import Data Pump on the TARGETDB RDS instance, make sure there are no sessions connected as the WIND user. If there are, kill them using the following command, supplying each session's SID and serial number, as shown in the sketch below:

exec rdsadmin.rdsadmin_util.kill(sid, serial#);
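As a concrete sketch of that cleanup (the SID and SERIAL# values below are placeholder examples taken from the V$SESSION query; on Amazon RDS the rdsadmin procedure is provided in place of ALTER SYSTEM KILL SESSION):

-- On TARGETDB: list sessions connected as WIND ...
SQL> SELECT sid, serial#, status
       FROM v$session
      WHERE username = 'WIND';

-- ... and kill each one, substituting the values returned above.
SQL> exec rdsadmin.rdsadmin_util.kill(sid => 123, serial => 456);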
You can use the Toad for Oracle tool to create the following import procedure, DB_PROC_IMPDP_SCHEMA:

CREATE OR REPLACE PROCEDURE APPADMIN.DB_PROC_IMPDP_SCHEMA IS
  V_H1            NUMBER;  -- Data Pump job handle
  V_JOB_NOT_EXIST EXCEPTION;  -- ORA-31626: job does not exist
  PRAGMA EXCEPTION_INIT(V_JOB_NOT_EXIST, -31626);
  V_SCHEMA_NAME   VARCHAR2(30) := 'WIND';
  V_DUMPFILENAME  VARCHAR2(30);
BEGIN
  V_DUMPFILENAME := 'exp_' || V_SCHEMA_NAME || '.dmp';
  V_H1 := DBMS_DATAPUMP.OPEN(OPERATION => 'IMPORT',
                             JOB_MODE  => 'SCHEMA',
                             JOB_NAME  => NULL);
  -- Replace pre-existing tables with the ones in the dump file
  DBMS_DATAPUMP.SET_PARAMETER(V_H1, 'TABLE_EXISTS_ACTION', 'REPLACE');
  -- Dump file to import and log file to write
  DBMS_DATAPUMP.ADD_FILE(V_H1, V_DUMPFILENAME, 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(V_H1, 'imp_' || V_SCHEMA_NAME || '.log', 'DATA_PUMP_DIR',
                         NULL, DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  DBMS_DATAPUMP.START_JOB(V_H1);
  COMMIT;
END;
/

Compared to the export procedure, the import procedure calls one new procedure, DBMS_DATAPUMP.SET_PARAMETER. With the TABLE_EXISTS_ACTION parameter set to 'REPLACE', tables that already exist in the WIND schema are dropped and re-created from the dump file.

Check the import job on TARGETDB:

We can create an Oracle external table to read the import log file. Let's create the external table:

CREATE TABLE APPADMIN.imp_wind_ext
(
  message VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY DATA_PUMP_DIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '[#EOC#]'
    MISSING FIELD VALUES ARE NULL
    (
      message CHAR(4000)
    )
  )
  LOCATION ('imp_WIND.log')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
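With the external table in place, a check like the following flags import problems (a sketch; filtering on ORA- catches most Data Pump error lines):

-- On TARGETDB: scan the import log for errors.
SQL> SELECT message
       FROM appadmin.imp_wind_ext
      WHERE message LIKE '%ORA-%';

No rows returned suggests a clean import; otherwise, each row shows an error line from imp_WIND.log.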
Conclusion:

In this article, we have introduced the Oracle RDS features and operations. We have seen that in an RDS environment, we as DBAs or DevOps engineers have to choose different methods to perform database operations such as Oracle database creation and replication; the DBCA and Data Pump executables are no longer available from the command line. As DevOps engineers, we have limited options for using Data Pump because there is no access to the database servers. We have seen the steps for moving database schemas between different RDS instances.
