
Wiki Page: Loading Sequence File Data into Oracle Database

Sequence files are used in Hadoop as input/output formats, and the temporary outputs generated by map tasks are also stored as sequence files. Sequence files store data as binary key/value pairs. Oracle Loader for Hadoop (OLH) does not support sequence files directly as an input format, so if the requirement is to load data from a sequence file into Oracle Database, a Hive external table defined on the sequence file may be used as the input to OLH. In this tutorial we shall first create a sequence file using Oracle XQuery for Hadoop, which was introduced in an earlier tutorial. Subsequently we shall define a Hive table stored as a sequence file in HDFS, and then run Oracle Loader for Hadoop to load the Hive table data into Oracle Database.

Setting the Environment

Install Oracle VirtualBox 4.3 and install Oracle Linux 6.5 as a guest OS. The following software is required for this tutorial.

- Oracle Database 11g
- Hadoop 2.0.0 CDH 4.6
- Hive 0.10.0 CDH 4.6
- Oracle Loader for Hadoop 3.0.0
- Oracle XQuery for Hadoop 3.0.0
- Java 7

Oracle XQuery for Hadoop 3.0.0 is a transformation engine that is used here to create a sequence file from a text file in HDFS using a query script. Oracle XQuery for Hadoop is based on XPath, XQuery, and the XQuery Update Facility, and is downloaded from http://www.oracle.com/technetwork/database/database-technologies/bdc/big-data-connectors/downloads/index.html as the oxh-3.0.0-cdh4.6.0.zip file.

Create a directory /sequence in which to install the software and set the directory permissions.

mkdir /sequence
chmod -R 777 /sequence
cd /sequence

Download and install Java 7.

wget http://download.oracle.com/otn-pub/java/jdk/7u55-b13/jdk-7u55-linux-i586.tar.gz
tar zxvf jdk-7u55-linux-i586.tar.gz

Download and install Hadoop 2.0.0 CDH 4.6.
wget http://archive.cloudera.com/cdh4/cdh/4/hadoop-2.0.0-cdh4.6.0.tar.gz
tar -xvf hadoop-2.0.0-cdh4.6.0.tar.gz

Create symlinks for the Hadoop bin and conf directories.

ln -s /sequence/hadoop-2.0.0-cdh4.6.0/bin /sequence/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2/bin
ln -s /sequence/hadoop-2.0.0-cdh4.6.0/etc/hadoop /sequence/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2/conf

Extract the Oracle Loader for Hadoop 3.0.0 and Oracle XQuery for Hadoop 3.0.0 zip files to the /sequence directory.

unzip oraloader-3.0.0-h2.x86_64.zip
unzip oxh-3.0.0-cdh4.6.0.zip

Download and install Hive 0.10.0 CDH 4.6.

wget http://archive.cloudera.com/cdh4/cdh/4/hive-0.10.0-cdh4.6.0.tar.gz
tar -xvf hive-0.10.0-cdh4.6.0.tar.gz

Set the fs.defaultFS and hadoop.tmp.dir properties in the /sequence/hadoop-2.0.0-cdh4.6.0/etc/hadoop/core-site.xml configuration file. The core-site.xml is listed below.

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.2.15:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///var/lib/hadoop-0.20/cache</value>
  </property>
</configuration>

Create the directory to be used as the Hadoop tmp directory and set its permissions.

mkdir -p /var/lib/hadoop-0.20/cache
chmod -R 777 /var/lib/hadoop-0.20/cache

Set the dfs.permissions.superusergroup, dfs.namenode.name.dir, dfs.replication, and dfs.permissions properties in /sequence/hadoop-2.0.0-cdh4.6.0/etc/hadoop/hdfs-site.xml, which is listed below.

<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Create the NameNode storage directory and set its permissions to global (777).

mkdir -p /data/1/dfs/nn
chmod -R 777 /data/1/dfs/nn

Create the hive-site.xml configuration file from the default template file.

cp /sequence/hive-0.10.0-cdh4.6.0/conf/hive-default.xml.template /sequence/hive-0.10.0-cdh4.6.0/conf/hive-site.xml

Set the hive.metastore.warehouse.dir and hive.metastore.uris properties in the /sequence/hive-0.10.0-cdh4.6.0/conf/hive-site.xml configuration file.
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://10.0.2.15:8020/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:10000</value>
  </property>
</configuration>

Set the environment variables for Oracle Database, Oracle Loader for Hadoop, Oracle XQuery for Hadoop, Hadoop, Hive, and Java in the bash shell.

vi ~/.bashrc

export HADOOP_PREFIX=/sequence/hadoop-2.0.0-cdh4.6.0
export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop
export HIVE_HOME=/sequence/hive-0.10.0-cdh4.6.0
export HIVE_CONF=$HIVE_HOME/conf
export OLH_HOME=/sequence/oraloader-3.0.0-h2
export OXH_HOME=/sequence/oxh-3.0.0-cdh4.6.0
export JAVA_HOME=/sequence/jdk1.7.0_55
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export HADOOP_MAPRED_HOME=/sequence/hadoop-2.0.0-cdh4.6.0/bin
export HADOOP_HOME=/sequence/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2
export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$HIVE_HOME/lib/*:$OLH_HOME/jlib/*:$HIVE_CONF:$OXH_HOME/lib/*
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME:$ORACLE_HOME/bin:$HIVE_HOME/bin
export CLASSPATH=$HADOOP_CLASSPATH

Format the NameNode and start the NameNode and DataNode, which constitute the HDFS.

hadoop namenode -format
hadoop namenode
hadoop datanode

Create the HDFS directory specified in the hive.metastore.warehouse.dir property and set its permissions.

hadoop dfs -mkdir hdfs://10.0.2.15:8020/user/hive/warehouse
hadoop dfs -chmod -R g+w hdfs://10.0.2.15:8020/user/hive/warehouse

We also need to copy the OLH, OXH, and Hive installation directories to the runtime classpath of Oracle Loader for Hadoop in HDFS. Create a directory /sequence in HDFS, set its permissions, and copy the oraloader-3.0.0-h2, hive-0.10.0-cdh4.6.0, and oxh-3.0.0-cdh4.6.0 directories from the local file system into HDFS.
hdfs dfs -mkdir hdfs://localhost:8020/sequence
hadoop dfs -chmod -R g+w hdfs://localhost:8020/sequence
hdfs dfs -put /sequence/oraloader-3.0.0-h2 hdfs://localhost:8020/sequence
hdfs dfs -put /sequence/hive-0.10.0-cdh4.6.0 hdfs://localhost:8020/sequence
hdfs dfs -put /sequence/oxh-3.0.0-cdh4.6.0 hdfs://10.0.2.15:8020/sequence

We also need to create the Oracle Database table OE.WLSLOG into which the sequence file data is to be loaded.

CREATE TABLE OE.wlslog (time_stamp VARCHAR2(255), category VARCHAR2(255), type VARCHAR2(255), servername VARCHAR2(255), code VARCHAR2(255), msg VARCHAR2(255));

Creating a Sequence File

In this section we shall create a sequence file from a text file using Oracle XQuery for Hadoop. The OXH engine requires a query script, which specifies the transformations to be applied to the input data and the output to generate. Create a ','-delimited text file wlslog.txt with the following log data.

Apr-8-2014-7:06:16-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to STANDBY
Apr-8-2014-7:06:17-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to STARTING
Apr-8-2014-7:06:18-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to ADMIN
Apr-8-2014-7:06:19-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to RESUMING
Apr-8-2014-7:06:20-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000331,Started WebLogic AdminServer
Apr-8-2014-7:06:21-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to RUNNING
Apr-8-2014-7:06:22-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000360,Server started in RUNNING mode

Create a directory /wlslog in HDFS, set its permissions, and put the wlslog.txt file in the HDFS directory.
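Each record in the log data above is a ','-delimited line whose six fields map, in order, onto the six columns of the OE.WLSLOG table. A minimal Python sketch of that mapping (an illustration only; the actual splitting is done later by Hive and OLH, not by this code):

```python
# Column order matches the CREATE TABLE OE.wlslog statement above.
COLUMNS = ["TIME_STAMP", "CATEGORY", "TYPE", "SERVERNAME", "CODE", "MSG"]

def parse_record(line):
    """Split one ','-delimited log line into a column->value dict.
    maxsplit=5 keeps any commas inside the message text in MSG."""
    return dict(zip(COLUMNS, line.rstrip("\n").split(",", 5)))

row = parse_record("Apr-8-2014-7:06:16-PM-PDT,Notice,WebLogicServer,"
                   "AdminServer,BEA-000365,Server state changed to STANDBY")
assert row["CODE"] == "BEA-000365"
assert row["MSG"] == "Server state changed to STANDBY"
```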
hdfs dfs -mkdir hdfs://localhost:8020/wlslog
hadoop dfs -chmod -R g+w hdfs://localhost:8020/wlslog
hdfs dfs -put wlslog.txt hdfs://localhost:8020/wlslog

Create a query script wlslog.xq to specify the transformations for converting the text file to a sequence file. First, import the sequence and text modules. Access the text file in HDFS using the text:collection function in the for clause of a FLWOR expression. In the let clause create a variable for each line of text data in the text file. In the return clause use the seq:put function to put each line of text file data into the sequence file. The query script wlslog.xq is listed below.

import module "oxh:seq";
import module "oxh:text";

for $line in text:collection("/wlslog/wlslog.txt")
let $msg := xs:string($line)
return seq:put($msg)

Run the Oracle XQuery for Hadoop engine on the query script with the following command, in which the output path is specified as the /output_seq directory in HDFS.

hadoop jar $OXH_HOME/lib/oxh.jar wlslog.xq -output /output_seq

Oracle XQuery for Hadoop 3.0.0 gets started and the query script gets processed. A MapReduce job runs to transform the text file wlslog.txt into a sequence file in the /output_seq directory in HDFS. The more detailed output from the Hadoop command to run Oracle XQuery for Hadoop is listed:

[root@localhost sequence]# hadoop jar $OXH_HOME/lib/oxh.jar wlslog.xq -output /output_seq
14/07/13 14:07:47 INFO hadoop.xquery: OXH: Oracle XQuery for Hadoop 3.0.0 (build 3.0.0-cdh4.6.0-mr1 @mr2). Copyright (c) 2014, Oracle. All rights reserved.
14/07/13 14:07:49 INFO hadoop.xquery: Executing query "wlslog.xq".
Output path: "hdfs://10.0.2.15:8020/output_seq"
14/07/13 14:07:57 INFO hadoop.xquery: Submitting map-reduce job "oxh:wlslog.xq#0" id="db0605ca-10b7-45c2-b220-ac09bab340ff.0", inputs=[hdfs://10.0.2.15:8020/wlslog/wlslog.txt], output=hdfs://10.0.2.15:8020/output_seq
14/07/13 14:07:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1414664977_0001
14/07/13 14:08:09 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/07/13 14:08:09 INFO hadoop.xquery: Waiting for map-reduce job oxh:wlslog.xq#0
14/07/13 14:08:09 INFO mapreduce.Job: Running job: job_local1414664977_0001
14/07/13 14:08:09 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/07/13 14:08:10 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/07/13 14:08:10 INFO mapreduce.Job: Job job_local1414664977_0001 running in uber mode : false
14/07/13 14:08:10 INFO mapreduce.Job:  map 0% reduce 0%
14/07/13 14:08:10 INFO mapred.LocalJobRunner: Waiting for map tasks
14/07/13 14:08:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1414664977_0001_m_000000_0
14/07/13 14:08:12 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
14/07/13 14:08:12 INFO mapred.MapTask: Processing split: hdfs://10.0.2.15:8020/wlslog/wlslog.txt:0+781
14/07/13 14:08:17 INFO mapred.LocalJobRunner:
14/07/13 14:08:17 INFO mapred.Task: Task:attempt_local1414664977_0001_m_000000_0 is done. And is in the process of committing
14/07/13 14:08:17 INFO mapred.LocalJobRunner:
14/07/13 14:08:17 INFO mapred.Task: Task attempt_local1414664977_0001_m_000000_0 is allowed to commit now
14/07/13 14:08:17 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1414664977_0001_m_000000_0' to hdfs://10.0.2.15:8020/output_seq/_temporary/0/task_local1414664977_0001_m_000000
14/07/13 14:08:17 INFO mapred.LocalJobRunner: map
14/07/13 14:08:17 INFO mapred.Task: Task 'attempt_local1414664977_0001_m_000000_0' done.
14/07/13 14:08:17 INFO mapred.LocalJobRunner: Finishing task: attempt_local1414664977_0001_m_000000_0
14/07/13 14:08:17 INFO mapred.LocalJobRunner: Map task executor complete.
14/07/13 14:08:17 INFO mapreduce.Job:  map 100% reduce 0%
14/07/13 14:08:17 INFO mapreduce.Job: Job job_local1414664977_0001 completed successfully
14/07/13 14:08:18 INFO mapreduce.Job: Counters: 23
	File System Counters
		FILE: Number of bytes read=12541
		FILE: Number of bytes written=15081749
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=14731467
		HDFS: Number of bytes written=916
		HDFS: Number of read operations=167
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Map-Reduce Framework
		Map input records=7
		Map output records=0
		Input split bytes=104
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=81
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=22724608
	File Input Format Counters
		Bytes Read=781
	File Output Format Counters
		Bytes Written=0
14/07/13 14:08:18 INFO hadoop.xquery: Finished executing "wlslog.xq". Output path: "hdfs://10.0.2.15:8020/output_seq"

List the output files generated by OXH in the HDFS output directory with the following command.

hdfs dfs -ls /output_seq

One of the files listed, the part-m-00000 file, is the sequence file generated from the text file wlslog.txt. Copy the sequence file to the /wlslog directory in HDFS and remove the text file wlslog.txt.

hdfs dfs -cp /output_seq/part-m-00000 /wlslog
hdfs dfs -rm /wlslog/wlslog.txt

Creating a Hive External Table

Next, we shall create a Hive table stored as a sequence file. Start the Hive Thrift server.

hive --service hiveserver

Start the Hive command shell.

hive

A Hive table may be defined on a sequence file by using the STORED AS SEQUENCEFILE clause. Run the following CREATE TABLE command in the Hive shell.
In the LOCATION clause specify the 'hdfs://localhost:8020/wlslog' directory in HDFS.

hive> CREATE TABLE wlslog (TIME_STAMP STRING, CATEGORY STRING, TYPE STRING, SERVERNAME STRING, CODE STRING, MSG STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS SEQUENCEFILE LOCATION 'hdfs://localhost:8020/wlslog';

A Hive table stored as a sequence file gets created. Run a SELECT statement in the Hive shell to list the data from the sequence file in HDFS.

Running the Oracle Loader for Hadoop

Having created a Hive table stored as a sequence file, next we shall load data from the Hive table into Oracle Database using Oracle Loader for Hadoop. Create the following OLH configuration file OraLoadConf.xml, which we also used in some of the earlier tutorials on OLH.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>mapreduce.inputformat.class</name>
    <value>oracle.hadoop.loader.lib.input.HiveToAvroInputFormat</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.hive.databaseName</name>
    <value>default</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.hive.tableName</name>
    <value>wlslog</value>
  </property>
  <property>
    <name>mapreduce.job.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.JDBCOutputFormat</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.outputdir</name>
    <value>oraloadout</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.loaderMap.targetTable</name>
    <value>OE.WLSLOG</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.url</name>
    <value>jdbc:oracle:thin:@${HOST}:${TCPPORT}:${SID}</value>
  </property>
  <property>
    <name>TCPPORT</name>
    <value>1521</value>
  </property>
  <property>
    <name>HOST</name>
    <value>localhost</value>
  </property>
  <property>
    <name>SID</name>
    <value>ORCL</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.user</name>
    <value>OE</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.password</name>
    <value>OE</value>
  </property>
</configuration>

Run the following Hadoop command to start Oracle Loader for Hadoop and load the Hive table data into Oracle Database using the input/output access parameters specified in the configuration file.

hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader -conf OraLoadConf.xml -libjars $OLH_HOME/jlib/oraloader.jar

Oracle Loader for Hadoop 3.0.0 gets started. A MapReduce job runs to load the Hive table data into Oracle Database.
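In OraLoadConf.xml the connection URL is composed from the HOST, TCPPORT, and SID properties via ${...} references, which Hadoop configuration variable expansion resolves from the other properties. A small Python sketch of that substitution (an illustration of the mechanism, not code OLH runs):

```python
import re

# Property values as set in OraLoadConf.xml in this tutorial.
props = {"HOST": "localhost", "TCPPORT": "1521", "SID": "ORCL"}
url_template = "jdbc:oracle:thin:@${HOST}:${TCPPORT}:${SID}"

def resolve(template, props):
    """Expand each ${NAME} reference with the value of property NAME,
    the way Hadoop configuration variable expansion resolves it."""
    return re.sub(r"\$\{(\w+)\}", lambda m: props[m.group(1)], template)

assert resolve(url_template, props) == "jdbc:oracle:thin:@localhost:1521:ORCL"
```

The resolved URL is what JDBCOutputFormat uses to connect to the ORCL instance as user OE.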
The more detailed output from the Oracle Loader run is listed:

[root@localhost sequence]# hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader -conf OraLoadConf.xml -libjars $OLH_HOME/jlib/oraloader.jar
Oracle Loader for Hadoop Release 3.0.0 - Production
Copyright (c) 2011, 2014, Oracle and/or its affiliates. All rights reserved.
14/07/13 14:14:26 INFO loader.OraLoader: Oracle Loader for Hadoop Release 3.0.0 - Production Copyright (c) 2011, 2014, Oracle and/or its affiliates. All rights reserved.
14/07/13 14:14:26 INFO loader.OraLoader: Built-Against: hadoop-2.2.0-cdh5.0.0-beta-2 hive-0.12.0-cdh5.0.0-beta-2 avro-1.7.3 jackson-1.8.8
14/07/13 14:14:33 INFO loader.OraLoader: oracle.hadoop.loader.loadByPartition is disabled because table: WLSLOG is not partitioned
14/07/13 14:14:33 INFO loader.OraLoader: oracle.hadoop.loader.enableSorting disabled, no sorting key provided
14/07/13 14:14:33 INFO loader.OraLoader: Reduce tasks set to 0 because of no partitioning or sorting. Loading will be done in the map phase.
14/07/13 14:14:33 INFO output.DBOutputFormat: Setting map tasks speculative execution to false for : oracle.hadoop.loader.lib.output.JDBCOutputFormat
14/07/13 14:14:39 INFO loader.OraLoader: Sampling time=0D:0h:0m:0s:895ms (895 ms)
14/07/13 14:14:39 INFO loader.OraLoader: Submitting OraLoader job OraLoader
14/07/13 14:14:42 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/07/13 14:14:47 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost:10000
14/07/13 14:14:47 INFO hive.metastore: Connected to metastore.
14/07/13 14:14:48 INFO mapred.FileInputFormat: Total input paths to process : 1
14/07/13 14:14:48 INFO mapreduce.JobSubmitter: number of splits:1
14/07/13 14:14:48 WARN conf.Configuration: mapred.job.classpath.files is deprecated.
Instead, use mapreduce.job.classpath.files
14/07/13 14:14:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1283717826_0001
14/07/13 14:15:02 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/07/13 14:15:02 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/07/13 14:15:02 INFO mapred.LocalJobRunner: OutputCommitter is oracle.hadoop.loader.lib.output.DBOutputCommitter
14/07/13 14:15:03 INFO mapred.LocalJobRunner: Waiting for map tasks
14/07/13 14:15:03 INFO mapred.LocalJobRunner: Starting task: attempt_local1283717826_0001_m_000000_0
14/07/13 14:15:03 INFO loader.OraLoader:  map 0% reduce 0%
14/07/13 14:15:05 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
14/07/13 14:15:05 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/wlslog/part-m-00000:0+916
14/07/13 14:15:07 INFO output.DBOutputFormat: conf prop: defaultExecuteBatch: 100
14/07/13 14:15:07 INFO output.DBOutputFormat: conf prop: loadByPartition: false
14/07/13 14:15:08 INFO output.DBOutputFormat: Insert statement: INSERT INTO "OE"."WLSLOG" ("TIME_STAMP", "CATEGORY", "TYPE", "SERVERNAME", "CODE", "MSG") VALUES (?, ?, ?, ?, ?, ?)
14/07/13 14:15:08 INFO mapred.LocalJobRunner:
14/07/13 14:15:10 INFO mapred.Task: Task:attempt_local1283717826_0001_m_000000_0 is done. And is in the process of committing
14/07/13 14:15:10 INFO mapred.LocalJobRunner:
14/07/13 14:15:10 INFO mapred.Task: Task attempt_local1283717826_0001_m_000000_0 is allowed to commit now
14/07/13 14:15:10 INFO output.JDBCOutputFormat: Committed work for task attempt attempt_local1283717826_0001_m_000000_0
14/07/13 14:15:10 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1283717826_0001_m_000000_0' to hdfs://10.0.2.15:8020/user/root/oraloadout/_temporary/0/task_local1283717826_0001_m_000000
14/07/13 14:15:10 INFO mapred.LocalJobRunner: map
14/07/13 14:15:10 INFO mapred.Task: Task 'attempt_local1283717826_0001_m_000000_0' done.
14/07/13 14:15:10 INFO mapred.LocalJobRunner: Finishing task: attempt_local1283717826_0001_m_000000_0
14/07/13 14:15:10 INFO mapred.LocalJobRunner: Map task executor complete.
14/07/13 14:15:11 INFO loader.OraLoader:  map 100% reduce 0%
14/07/13 14:15:11 INFO loader.OraLoader: Job complete: OraLoader (job_local1283717826_0001)
14/07/13 14:15:11 INFO loader.OraLoader: Counters: 23
	File System Counters
		FILE: Number of bytes read=10412535
		FILE: Number of bytes written=11376235
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=10424116
		HDFS: Number of bytes written=9769979
		HDFS: Number of read operations=239
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=36
	Map-Reduce Framework
		Map input records=7
		Map output records=7
		Input split bytes=1026
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=94
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=20639744
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=1613
[root@localhost sequence]#

Selecting Loaded Data

Having loaded the sequence file data into the Oracle Database table OE.WLSLOG, we may run a SELECT statement in SQL*Plus to list the loaded data. The output from the SELECT statement lists the 7 rows of data loaded into the OE.WLSLOG table from the sequence file.
SQL> SELECT * FROM OE.WLSLOG;

TIME_STAMP
--------------------------------------------------------------------------------
CATEGORY
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
SERVERNAME
--------------------------------------------------------------------------------
CODE
--------------------------------------------------------------------------------
MSG
--------------------------------------------------------------------------------
Apr-8-2014-7:06:16-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to STANDBY

Apr-8-2014-7:06:17-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to STARTING

Apr-8-2014-7:06:18-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to ADMIN

Apr-8-2014-7:06:19-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to RESUMING

Apr-8-2014-7:06:20-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000331
Started WebLogic AdminServer

Apr-8-2014-7:06:21-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to RUNNING

Apr-8-2014-7:06:22-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000360
Server started in RUNNING mode

7 rows selected.

SQL>

In this tutorial we loaded sequence file data into Oracle Database using Oracle Loader for Hadoop.
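As a final cross-check, the 7 rows selected can be tallied against the 7 input records from wlslog.txt; a small offline sketch (illustration only, using the input data listed earlier) confirms the row count and the distribution of BEA codes seen in the CODE column:

```python
from collections import Counter

# The seven input records from wlslog.txt; the BEA code is the fifth
# ','-delimited field and maps to the CODE column of OE.WLSLOG.
log_lines = [
    "Apr-8-2014-7:06:16-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to STANDBY",
    "Apr-8-2014-7:06:17-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to STARTING",
    "Apr-8-2014-7:06:18-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to ADMIN",
    "Apr-8-2014-7:06:19-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to RESUMING",
    "Apr-8-2014-7:06:20-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000331,Started WebLogic AdminServer",
    "Apr-8-2014-7:06:21-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000365,Server state changed to RUNNING",
    "Apr-8-2014-7:06:22-PM-PDT,Notice,WebLogicServer,AdminServer,BEA-000360,Server started in RUNNING mode",
]

codes = Counter(line.split(",")[4] for line in log_lines)
assert sum(codes.values()) == 7      # matches "7 rows selected."
assert codes["BEA-000365"] == 5      # five server state changes
assert codes["BEA-000331"] == 1 and codes["BEA-000360"] == 1
```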
