Wiki Page: Migrating DB2 Database Table Data to Oracle Database Table

Written by Deepak Vohra

IBM's DB2 Express-C 10.5 is a lightweight relational database server. In this tutorial we shall migrate a DB2 database table to Oracle Database using Sqoop. The migration is performed in two phases: first the DB2 table data is imported into HDFS, and subsequently the HDFS data is exported to Oracle Database.

Setting the Environment

The following software is required:

- Oracle Database 11g or 12c
- DB2 Express-C 10.5 (http://www-01.ibm.com/software/data/db2/express-c/download.html)
- Sqoop 1.4.5
- Hadoop 2.5.0
- Java 7

Oracle Database is installed on Oracle Linux 6.6, which runs on Oracle VirtualBox, and the DB2 database is installed on Windows 7. The DB2 database is connected to from the VirtualBox guest, but a different configuration may also be used, such as installing both Oracle Database and the DB2 database on the VirtualBox Linux instance.

Create a directory called /sqoop in which to install the software, and set its permissions to global (777).

mkdir /sqoop
chmod -R 777 /sqoop
cd /sqoop

Download and extract Java 7.

tar zxvf jdk-7u55-linux-i586.gz

Download and extract the Hadoop 2.5.0 tar.gz file.

wget http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.5.0-cdh5.2.0.tar.gz
tar -xvf hadoop-2.5.0-cdh5.2.0.tar.gz

Create symlinks for the Hadoop conf and bin directories.

ln -s /sqoop/hadoop-2.5.0-cdh5.2.0/bin-mapreduce1 /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/bin
ln -s /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/conf

Download and extract the Sqoop tar.gz file.

wget http://archive-primary.cloudera.com/cdh5/cdh/5/sqoop-1.4.5-cdh5.2.0.tar.gz
tar -xvf sqoop-1.4.5-cdh5.2.0.tar.gz

Copy the DB2 JDBC jar files db2jcc.jar, db2jcc_license_cu.jar and db2jcc4.jar to the Sqoop lib directory.

cp /db2jcc.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib
cp /db2jcc_license_cu.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib
cp /db2jcc4.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib

Also copy the Oracle Database JDBC jar file to the Sqoop lib directory.

cp ojdbc6.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib

Set the environment variables for Oracle Database, Sqoop, Hadoop and Java 7 in the bash shell file.

vi ~/.bashrc

export HADOOP_PREFIX=/sqoop/hadoop-2.5.0-cdh5.2.0
export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop
export SQOOP_HOME=/sqoop/sqoop-1.4.5-cdh5.2.0
export JAVA_HOME=/sqoop/jdk1.7.0_55
export HADOOP_MAPRED_HOME=/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1
export HADOOP_HOME=/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1
export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$SQOOP_HOME/lib/*
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME/bin:$SQOOP_HOME/bin
export CLASSPATH=$HADOOP_CLASSPATH
export HADOOP_NAMENODE_USER=sqoop
export HADOOP_DATANODE_USER=sqoop

Set the fs.defaultFS and hadoop.tmp.dir configuration properties in the /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop/core-site.xml configuration file.

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.2.15:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop-0.20/cache</value>
  </property>
</configuration>

Create the directory specified in the hadoop.tmp.dir property and set its permissions to global (777).

mkdir -p /var/lib/hadoop-0.20/cache
chmod -R 777 /var/lib/hadoop-0.20/cache

Set the dfs.permissions.superusergroup, dfs.namenode.name.dir, dfs.replication and dfs.permissions properties in the configuration file /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop/hdfs-site.xml.

<configuration>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
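With the environment variables and configuration files in place, a quick sanity check helps catch path or configuration mistakes before HDFS is started. The following is a minimal sketch, assuming the ~/.bashrc settings shown above:

# Reload the environment variables set in ~/.bashrc
source ~/.bashrc
# Verify that the Hadoop, Sqoop and Java binaries are on the PATH
hadoop version
sqoop version
java -version
# Verify that Hadoop resolves the fs.defaultFS value set in core-site.xml
hdfs getconf -confKey fs.defaultFS

The last command should print hdfs://10.0.2.15:8020; any other value indicates that the wrong core-site.xml is being read.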
Create the NameNode storage directory and set its permissions to global (777).

mkdir -p /data/1/dfs/nn
chmod -R 777 /data/1/dfs/nn

Format the NameNode and start the NameNode and DataNode.

hadoop namenode -format
hadoop namenode
hadoop datanode

Next, the Sqoop lib jars must be copied into HDFS. Create the Sqoop lib directory structure /sqoop/sqoop-1.4.5-cdh5.2.0/lib in HDFS, set its permissions to global (777), and put the Sqoop lib jars into it.

hadoop dfs -mkdir hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib
hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib
hdfs dfs -put /sqoop/sqoop-1.4.5-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib

Creating Database Tables

As we shall be migrating a DB2 database table to Oracle Database, we need both a source and a target database table. The source is the DEPT table in the DB2 SAMPLE database; it is pre-created in the SAMPLE database and does not need to be created, and its data may be listed with a SELECT statement. In Oracle Database, create a table also called DEPT, with the same column definitions as the DB2 DEPT table. First connect to the OE schema, then run the following SQL statement.

CREATE TABLE DEPT (
  DEPTNO   VARCHAR2(4000),
  DEPTNAME VARCHAR2(4000),
  MGRNO    VARCHAR2(4000),
  ADMRDEPT VARCHAR2(4000),
  LOCATION VARCHAR2(4000)
);

The DEPT table gets created in Oracle Database.

Importing DB2 Table Data to HDFS

Next, run the sqoop import command to import the DEPT table data from the DB2 database into HDFS. Specify the following arguments to the sqoop import command:

--connect: connection URL for the DB2 database ("jdbc:db2://10.0.2.2:50000/sample")
--username: username for the DB2 database (different for different users)
--password: password for the DB2 database (different for different users)
--table: the DB2 table to be migrated to Oracle Database ("DEPT")
--columns: the DEPT table columns ("DEPTNO,DEPTNAME,MGRNO,ADMRDEPT,LOCATION")
--split-by: the primary key column ("DEPTNO")
--target-dir: the HDFS directory to which the DB2 table data is imported ("/tmp/db2_sqoopimport")
--verbose: produce verbose output from the sqoop import command

Run the following sqoop import command; the username and password will be different for different users.

sqoop import --connect "jdbc:db2://10.0.2.2:50000/sample" --password "pwd" --username "DEEPAK VOHRA" --table "DEPT" --columns "DEPTNO,DEPTNAME,MGRNO,ADMRDEPT,LOCATION" --split-by "DEPTNO" --target-dir "/tmp/db2_sqoopimport" --verbose

Sqoop 1.4.5-cdh5.2.0 gets started, and a MapReduce job runs to import the 14 rows of the DEPT table into the HDFS directory /tmp/db2_sqoopimport.
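Passing the password in plain text on the command line, as above, leaves it visible in the shell history and the process list. As a sketch of a safer alternative with the same connection settings, the -P option makes Sqoop prompt for the password on the console (Sqoop also supports reading it from a protected file with --password-file):

# Prompt for the DB2 password on the console instead of passing it inline
sqoop import --connect "jdbc:db2://10.0.2.2:50000/sample" \
  --username "DEEPAK VOHRA" -P \
  --table "DEPT" --columns "DEPTNO,DEPTNAME,MGRNO,ADMRDEPT,LOCATION" \
  --split-by "DEPTNO" --target-dir "/tmp/db2_sqoopimport" --verbose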
A detailed output from the sqoop import command is listed:

[root@localhost sqoop]# sqoop import --connect "jdbc:db2://10.0.2.2:50000/sample" --password "pwd" --username "DEEPAK VOHRA" --table "DEPT" --columns "DEPTNO,DEPTNAME,MGRNO,ADMRDEPT,LOCATION" --split-by "DEPTNO" --target-dir "/tmp/db2_sqoopimport" --verbose
15/03/21 14:32:44 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:db2:
15/03/21 14:32:45 INFO manager.SqlManager: Using default fetchSize of 1000
15/03/21 14:32:45 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.Db2Manager@1b98b20
15/03/21 14:32:45 INFO tool.CodeGenTool: Beginning code generation
15/03/21 14:32:45 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM DEPT AS t WHERE 1=0
15/03/21 14:32:48 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
15/03/21 14:33:11 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
15/03/21 14:33:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM DEPT AS t WHERE 1=0
15/03/21 14:33:14 DEBUG orm.ClassWriter: selected columns:
15/03/21 14:33:14 DEBUG orm.ClassWriter:   DEPTNO
15/03/21 14:33:14 DEBUG orm.ClassWriter:   DEPTNAME
15/03/21 14:33:14 DEBUG orm.ClassWriter:   MGRNO
15/03/21 14:33:14 DEBUG orm.ClassWriter:   ADMRDEPT
15/03/21 14:33:14 DEBUG orm.ClassWriter:   LOCATION
15/03/21 14:33:15 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-root/compile/b704cda14ae8b9834d3ae05bfcc152d8/DEPT.java
15/03/21 14:33:15 DEBUG orm.ClassWriter: Table name: DEPT
15/03/21 14:33:15 DEBUG orm.ClassWriter: Columns: DEPTNO:1, DEPTNAME:12, MGRNO:1, ADMRDEPT:1, LOCATION:1,
15/03/21 14:33:15 DEBUG orm.ClassWriter: sourceFilename is DEPT.java
15/03/21 14:33:15 DEBUG orm.CompilationManager: Found existing /tmp/sqoop-root/compile/b704cda14ae8b9834d3ae05bfcc152d8/
15/03/21 14:33:15 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1
15/03/21 14:36:32 INFO mapred.JobClient: Running job: job_local1441254714_0001
15/03/21 14:36:32 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/03/21 14:36:33 INFO mapred.JobClient: map 0% reduce 0%
15/03/21 14:36:34 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/03/21 14:36:36 INFO mapred.LocalJobRunner: Waiting for map tasks
15/03/21 14:36:36 INFO mapred.LocalJobRunner: Starting task: attempt_local1441254714_0001_m_000000_0
15/03/21 14:36:37 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/03/21 14:36:39 INFO util.ProcessTree: setsid exited with exit code 0
15/03/21 14:36:39 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6c2b98
15/03/21 14:36:39 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/03/21 14:36:43 INFO db.DBInputFormat: Using read commited transaction isolation
15/03/21 14:36:44 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
15/03/21 14:36:47 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1
15/03/21 14:36:47 DEBUG db.DataDrivenDBRecordReader: Using query: SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION FROM DEPT WHERE ( 1=1 ) AND ( 1=1 )
15/03/21 14:36:47 DEBUG db.DBRecordReader: Using fetchSize for next query: 1000
15/03/21 14:36:47 INFO db.DBRecordReader: Executing query: SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION FROM DEPT WHERE ( 1=1 ) AND ( 1=1 )
15/03/21 14:36:48 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
15/03/21 14:36:48 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
15/03/21 14:36:48 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
15/03/21 14:36:48 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
15/03/21 14:36:48 INFO mapred.LocalJobRunner:
15/03/21 14:36:54 INFO mapred.LocalJobRunner:
15/03/21 14:36:54 INFO mapred.JobClient: map 100% reduce 0%
15/03/21 14:36:57 INFO mapred.Task: Task:attempt_local1441254714_0001_m_000000_0 is done. And is in the process of commiting
15/03/21 14:36:57 INFO mapred.LocalJobRunner:
15/03/21 14:36:57 INFO mapred.Task: Task attempt_local1441254714_0001_m_000000_0 is allowed to commit now
15/03/21 14:37:00 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1441254714_0001_m_000000_0' to /tmp/db2_sqoopimport
15/03/21 14:37:00 INFO mapred.LocalJobRunner:
15/03/21 14:37:00 INFO mapred.Task: Task 'attempt_local1441254714_0001_m_000000_0' done.
15/03/21 14:37:00 INFO mapred.LocalJobRunner: Finishing task: attempt_local1441254714_0001_m_000000_0
15/03/21 14:37:00 INFO mapred.LocalJobRunner: Map task executor complete.
15/03/21 14:37:00 INFO mapred.JobClient: Job complete: job_local1441254714_0001
15/03/21 14:37:01 INFO mapred.JobClient: Counters: 18
15/03/21 14:37:01 INFO mapred.JobClient:   File System Counters
15/03/21 14:37:01 INFO mapred.JobClient:     FILE: Number of bytes read=17969120
15/03/21 14:37:01 INFO mapred.JobClient:     FILE: Number of bytes written=18261503
15/03/21 14:37:01 INFO mapred.JobClient:     FILE: Number of read operations=0
15/03/21 14:37:01 INFO mapred.JobClient:     FILE: Number of large read operations=0
15/03/21 14:37:01 INFO mapred.JobClient:     FILE: Number of write operations=0
15/03/21 14:37:01 INFO mapred.JobClient:     HDFS: Number of bytes read=0
15/03/21 14:37:01 INFO mapred.JobClient:     HDFS: Number of bytes written=519
15/03/21 14:37:01 INFO mapred.JobClient:     HDFS: Number of read operations=1
15/03/21 14:37:01 INFO mapred.JobClient:     HDFS: Number of large read operations=0
15/03/21 14:37:01 INFO mapred.JobClient:     HDFS: Number of write operations=2
15/03/21 14:37:01 INFO mapred.JobClient:   Map-Reduce Framework
15/03/21 14:37:01 INFO mapred.JobClient:     Map input records=14
15/03/21 14:37:01 INFO mapred.JobClient:     Map output records=14
15/03/21 14:37:01 INFO mapred.JobClient:     Input split bytes=87
15/03/21 14:37:01 INFO mapred.JobClient:     Spilled Records=0
15/03/21 14:37:01 INFO mapred.JobClient:     CPU time spent (ms)=0
15/03/21 14:37:01 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
15/03/21 14:37:01 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
15/03/21 14:37:01 INFO mapred.JobClient:     Total committed heap usage (bytes)=86065152
15/03/21 14:37:01 INFO mapreduce.ImportJobBase: Transferred 519 bytes in 149.3706 seconds (3.4746 bytes/sec)
15/03/21 14:37:01 INFO mapreduce.ImportJobBase: Retrieved 14 records.

Exporting HDFS Data to Oracle Database Table

In this section we export the HDFS data to Oracle Database. Run the sqoop export command with the following arguments:

--connect: Oracle Database connection URL ("jdbc:oracle:thin:@localhost:1521:ORCL")
--hadoop-home: the Hadoop home directory ("/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1")
--username: Oracle Database username ("OE")
--password: Oracle Database password ("OE")
--export-dir: the HDFS directory to be exported ("/tmp/db2_sqoopimport")
--table: the Oracle Database table to export to ("DEPT")
--verbose: produce verbose output

Run the following sqoop export command; the username and password will be different for different users.

sqoop export --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/tmp/db2_sqoopimport" --table "DEPT" --verbose

A MapReduce job runs to export the 14 records to Oracle Database.
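Note that a Sqoop export is not atomic: each map task commits its inserts separately, so a job that fails partway through can leave a partial copy of the data in the target table. The following is a minimal sketch of guarding against this with Sqoop's --staging-table option, assuming a hypothetical DEPT_STAGE table has been created in the OE schema with the same definition as DEPT:

# Rows are first inserted into the DEPT_STAGE staging table; only if all
# map tasks succeed are they moved into DEPT in a single transaction.
# --clear-staging-table empties DEPT_STAGE before the export starts.
sqoop export --connect "jdbc:oracle:thin:@localhost:1521:ORCL" \
  --username "OE" --password "OE" \
  --export-dir "/tmp/db2_sqoopimport" --table "DEPT" \
  --staging-table "DEPT_STAGE" --clear-staging-table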
A more detailed output from the sqoop export command is as follows.

[root@localhost sqoop]# sqoop export --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/tmp/db2_sqoopimport" --table "DEPT" --verbose
15/03/21 14:54:59 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
15/03/21 14:54:59 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM DEPT t WHERE 1=0
15/03/21 14:55:11 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/03/21 14:55:11 DEBUG orm.ClassWriter: selected columns:
15/03/21 14:55:11 DEBUG orm.ClassWriter:   DEPTNO
15/03/21 14:55:11 DEBUG orm.ClassWriter:   DEPTNAME
15/03/21 14:55:11 DEBUG orm.ClassWriter:   MGRNO
15/03/21 14:55:11 DEBUG orm.ClassWriter:   ADMRDEPT
15/03/21 14:55:11 DEBUG orm.ClassWriter:   LOCATION
15/03/21 14:55:12 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-root/compile/e11986e9d5d44e9533ad1b141917d272/DEPT.java
15/03/21 14:55:12 DEBUG orm.ClassWriter: Table name: DEPT
15/03/21 14:55:12 DEBUG orm.ClassWriter: Columns: DEPTNO:12, DEPTNAME:12, MGRNO:12, ADMRDEPT:12, LOCATION:12,
15/03/21 14:56:51 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/03/21 14:57:17 INFO input.FileInputFormat: Total input paths to process : 1
15/03/21 14:57:17 DEBUG mapreduce.ExportInputFormat: Target numMapTasks=4
15/03/21 14:57:17 DEBUG mapreduce.ExportInputFormat: Total input bytes=519
15/03/21 14:57:17 DEBUG mapreduce.ExportInputFormat: maxSplitSize=129
15/03/21 14:57:17 INFO input.FileInputFormat: Total input paths to process : 1
15/03/21 14:57:18 DEBUG mapreduce.ExportInputFormat: Generated splits:
15/03/21 14:57:18 DEBUG mapreduce.ExportInputFormat:   Paths:/tmp/db2_sqoopimport/part-m-00000:0+129 Locations:localhost:;
15/03/21 14:57:18 DEBUG mapreduce.ExportInputFormat:   Paths:/tmp/db2_sqoopimport/part-m-00000:129+129 Locations:localhost:;
15/03/21 14:57:18 DEBUG mapreduce.ExportInputFormat:   Paths:/tmp/db2_sqoopimport/part-m-00000:258+129 Locations:localhost:;
15/03/21 14:57:18 DEBUG mapreduce.ExportInputFormat:   Paths:/tmp/db2_sqoopimport/part-m-00000:387+66,/tmp/db2_sqoopimport/part-m-00000:453+66 Locations:localhost:;
15/03/21 14:58:16 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/03/21 14:58:16 INFO mapred.JobClient: Running job: job_local1842591076_0001
15/03/21 14:58:16 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter
15/03/21 14:58:19 INFO mapred.LocalJobRunner: Waiting for map tasks
15/03/21 14:58:19 INFO mapred.LocalJobRunner: Starting task: attempt_local1842591076_0001_m_000000_0
15/03/21 14:58:19 INFO mapred.JobClient: map 0% reduce 0%
15/03/21 14:58:21 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/03/21 14:58:23 INFO util.ProcessTree: setsid exited with exit code 0
15/03/21 14:58:23 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1ffb47
15/03/21 14:58:24 INFO mapred.MapTask: Processing split: Paths:/tmp/db2_sqoopimport/part-m-00000:387+66,/tmp/db2_sqoopimport/part-m-00000:453+66
15/03/21 14:58:42 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/03/21 14:58:43 INFO mapred.JobClient: map 75% reduce 0%
15/03/21 14:58:46 INFO mapred.LocalJobRunner:
15/03/21 14:58:46 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
15/03/21 14:58:46 INFO mapred.Task: Task:attempt_local1842591076_0001_m_000003_0 is done. And is in the process of commiting
15/03/21 14:58:46 INFO mapred.LocalJobRunner:
15/03/21 14:58:46 INFO mapred.Task: Task 'attempt_local1842591076_0001_m_000003_0' done.
15/03/21 14:58:46 INFO mapred.LocalJobRunner: Finishing task: attempt_local1842591076_0001_m_000003_0
15/03/21 14:58:46 INFO mapred.LocalJobRunner: Map task executor complete.
15/03/21 14:58:47 INFO mapred.JobClient: map 100% reduce 0%
15/03/21 14:58:47 INFO mapred.JobClient: Job complete: job_local1842591076_0001
15/03/21 14:58:47 INFO mapred.JobClient: Counters: 18
15/03/21 14:58:47 INFO mapred.JobClient:   File System Counters
15/03/21 14:58:47 INFO mapred.JobClient:     FILE: Number of bytes read=82841288
15/03/21 14:58:47 INFO mapred.JobClient:     FILE: Number of bytes written=84089268
15/03/21 14:58:47 INFO mapred.JobClient:     FILE: Number of read operations=0
15/03/21 14:58:47 INFO mapred.JobClient:     FILE: Number of large read operations=0
15/03/21 14:58:47 INFO mapred.JobClient:     FILE: Number of write operations=0
15/03/21 14:58:47 INFO mapred.JobClient:     HDFS: Number of bytes read=3444
15/03/21 14:58:47 INFO mapred.JobClient:     HDFS: Number of bytes written=0
15/03/21 14:58:47 INFO mapred.JobClient:     HDFS: Number of read operations=78
15/03/21 14:58:47 INFO mapred.JobClient:     HDFS: Number of large read operations=0
15/03/21 14:58:47 INFO mapred.JobClient:     HDFS: Number of write operations=0
15/03/21 14:58:47 INFO mapred.JobClient:   Map-Reduce Framework
15/03/21 14:58:47 INFO mapred.JobClient:     Map input records=14
15/03/21 14:58:47 INFO mapred.JobClient:     Map output records=14
15/03/21 14:58:47 INFO mapred.JobClient:     Input split bytes=611
15/03/21 14:58:47 INFO mapred.JobClient:     Spilled Records=0
15/03/21 14:58:47 INFO mapred.JobClient:     CPU time spent (ms)=0
15/03/21 14:58:47 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
15/03/21 14:58:47 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
15/03/21 14:58:48 INFO mapred.JobClient:     Total committed heap usage (bytes)=454574080
15/03/21 14:58:48 INFO mapreduce.ExportJobBase: Transferred 3.3633 KB in 120.5955 seconds (28.5583 bytes/sec)
15/03/21 14:58:48 INFO mapreduce.ExportJobBase: Exported 14 records.
15/03/21 14:58:48 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817
[root@localhost sqoop]#
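Before listing the full table, a quick row count confirms at a glance that all of the records arrived. The following is a minimal sketch using the SQL*Plus command line, assuming the OE credentials used in the export:

# Count the rows exported into the Oracle DEPT table
sqlplus -S OE/OE@ORCL <<EOF
SELECT COUNT(*) FROM DEPT;
EOF

The count should match the 14 records reported by the sqoop export command.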
Querying the Migrated Table Data

Having migrated the DB2 table data to the Oracle Database table, the table may be queried with a SELECT statement. Run a SELECT statement in SQL*Plus; the 14 records migrated from the DB2 database get listed.

SQL> SELECT * FROM DEPT;

DEPTNO  DEPTNAME                      MGRNO   ADMRDEPT  LOCATION
------  ----------------------------  ------  --------  --------
D11     MANUFACTURING SYSTEMS         000060  D01
D21     ADMINISTRATION SYSTEMS        000070  D01
E01     SUPPORT SERVICES              000050  A00
E11     OPERATIONS                    000090  E01
E21     SOFTWARE SUPPORT              000100  E01
F22     BRANCH OFFICE F2                      E01
G22     BRANCH OFFICE G2                      E01
H22     BRANCH OFFICE H2                      E01
I22     BRANCH OFFICE I2                      E01
J22     BRANCH OFFICE J2                      E01
A00     SPIFFY COMPUTER SERVICE DIV.  000010  A00
B01     PLANNING                      000020  A00
C01     INFORMATION CENTER            000030  A00
D01     DEVELOPMENT CENTER                    A00

14 rows selected.

SQL>

In this tutorial we migrated a DB2 database table to Oracle Database.
