
Wiki Page: Streaming Data from one Oracle Database Table to Another

Written by Deepak Vohra

A common use case is to copy data from one Oracle Database table to another. One option is the CREATE TABLE AS SELECT SQL statement, which creates a new table with the same definition as an existing table and copies its data. Another option is the INSERT INTO ... SELECT SQL statement, which adds the same data to another table. For bulk transfer of data, tools such as SQL*Loader, Data Pump Export, and Data Pump Import may be used. Apache Sqoop could be used to bulk transfer data from an Oracle Database table into HDFS and subsequently export it from HDFS to another Oracle Database table. All of these options incur a time lag, as a SQL statement or a tool has to be run for each transfer. A more direct option is to use Apache Flume to stream data from one database table to another. The only requirement is that the table into which data is to be copied must be created before streaming starts. In this tutorial we shall use Apache Flume to stream data from one Oracle Database table to another, covering the following topics:

Setting the Environment
Creating Oracle Database Tables
Installing Flume SQL Source
Installing Flume SQL Sink
Configuring Apache Flume
Running Flume Agent
Querying Oracle Database Table
Streaming Data, not just Bulk Transferring

Setting the Environment

The following software is required for this tutorial:

Oracle Database
Apache Flume
Flume SQL Source
Flume JDBC Sink
JOOQ
Apache Maven
Java 7

Create a directory to install the software and set its permissions to global (777).

mkdir /flume
chmod -R 777 /flume
cd /flume

Download and extract the Apache Flume tar.gz file.

wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
tar -xvf apache-flume-1.6.0-bin.tar.gz

Set the environment variables for Oracle Database, Apache Flume, Maven and Java.
vi ~/.bashrc

export MAVEN_HOME=/flume/apache-maven-3.3.3-bin
export FLUME_HOME=/flume/apache-flume-1.6.0-bin
export FLUME_CONF=/flume/apache-flume-1.6.0-bin/conf
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export JAVA_HOME=/flume/jdk1.7.0_55
export PATH=$PATH:$FLUME_HOME/bin:$ORACLE_HOME/bin:$MAVEN_HOME/bin
export CLASSPATH=$FLUME_HOME/lib/*

Creating Oracle Database Tables

Drop and create the Oracle Database table OE.WLSLOG with the following SQL script.

DROP TABLE OE.WLSLOG;
CREATE TABLE OE.WLSLOG (
  id INTEGER PRIMARY KEY,
  time_stamp VARCHAR2(4000),
  category VARCHAR2(4000),
  type VARCHAR2(4000),
  servername VARCHAR2(4000),
  code VARCHAR2(4000),
  msg VARCHAR2(4000));

Add data to the table with the following SQL script.

INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(1,'Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(2,'Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(3,'Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(4,'Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(5,'Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(6,'Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(7,'Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');

We also need to create the database table into which the data is to be streamed. The table definition may include just one column, as the Flume configuration can be used to concatenate the data in each row. Create a table OE.WLSLOG_COPY to copy the data into, with only one column.

CREATE TABLE OE.WLSLOG_COPY(msg VARCHAR2(4000));

Installing Flume SQL Source

The Flume source for Oracle Database is not packaged with the Apache Flume distribution. Download the keedio/flume-ng-sql-source plugin from GitHub, change to the flume-ng-sql-source directory, and compile and package the Flume SQL source into a jar file.

git clone https://github.com/keedio/flume-ng-sql-source.git
cd flume-ng-sql-source
mvn package

Create a plugins.d/sql-source/lib directory for the SQL Source plugin, set its permissions to global (777), and copy the flume-ng-sql-source-1.3-SNAPSHOT.jar file to the lib directory.

mkdir -p $FLUME_HOME/plugins.d/sql-source/lib
chmod -R 777 $FLUME_HOME/plugins.d/sql-source/lib
cp flume-ng-sql-source-1.3-SNAPSHOT.jar $FLUME_HOME/plugins.d/sql-source/lib

Similarly, create a libext directory plugins.d/sql-source/libext for the Oracle Database JDBC jar file ojdbc6.jar, set its permissions to global (777), and copy ojdbc6.jar to the libext directory.

mkdir $FLUME_HOME/plugins.d/sql-source/libext
chmod -R 777 $FLUME_HOME/plugins.d/sql-source/libext
cp ojdbc6.jar $FLUME_HOME/plugins.d/sql-source/libext

We also need to copy ojdbc6.jar and flume-ng-sql-source-1.3-SNAPSHOT.jar to the Flume lib directory, which adds the jars to the runtime classpath of Flume.

cp flume-ng-sql-source-1.3-SNAPSHOT.jar $FLUME_HOME/lib
cp ojdbc6.jar $FLUME_HOME/lib

Installing Flume SQL Sink

Download, compile and package the Stratio Flume Ingestion source code.
git clone https://github.com/Stratio/flume-ingestion.git
cd flume-ingestion
mvn compile
mvn package

Copy the generated stratio-jdbc-sink-0.5.0-SNAPSHOT.jar jar file to the Flume lib directory.

cp stratio-jdbc-sink-0.5.0-SNAPSHOT.jar $FLUME_HOME/lib

Download the JOOQ jar from http://central.maven.org/maven2/org/jooq/jooq/3.6.2/jooq-3.6.2.jar and copy the jar to the Flume lib directory.

cp jooq-3.6.2.jar $FLUME_HOME/lib

Configuring Apache Flume

Create a configuration file flume.conf and specify the following configuration properties in the file.

Configuration Property | Description | Value
agent.sources | Sets the Flume source. | sql-source
agent.sinks | Sets the Flume sink. | jdbcSink
agent.channels | Sets the Flume channel. | ch1
agent.sources.sql-source.channels | Sets the channel on the source. | ch1
agent.channels.ch1.capacity | Sets the channel capacity. | 1000000
agent.channels.ch1.type | Sets the channel type. | memory
agent.sources.sql-source.type | Sets the SQL Source type class. | org.keedio.flume.source.SQLSource
agent.sources.sql-source.connection.url | Sets the connection URL for Oracle Database. | jdbc:oracle:thin:@127.0.0.1:1521:ORCL
agent.sources.sql-source.user | Sets the username for Oracle Database. | OE
agent.sources.sql-source.password | Sets the password for Oracle Database. | OE
agent.sources.sql-source.table | Sets the Oracle Database table. | WLSLOG
agent.sources.sql-source.columns.to.select | Sets the columns to select; the * setting selects all columns. | *
agent.sources.sql-source.incremental.column.name | Sets the incremental column name. | id
agent.sources.sql-source.incremental.value | Sets the incremental column value to start streaming from. | 0
agent.sources.sql-source.run.query.delay | Sets the frequency in milliseconds at which to poll the SQL source. | 10000
agent.sources.sql-source.status.file.path | Sets the directory path for the SQL source status file. | /var/lib/flume
agent.sources.sql-source.status.file.name | Sets the status file name. | sql-source.status
agent.sinks.jdbcSink.type | Sets the sink type. | com.stratio.ingestion.sink.jdbc.JDBCSink
agent.sinks.jdbcSink.connectionString | Sets the connection URL for Oracle Database. | jdbc:oracle:thin:@127.0.0.1:1521:ORCL
agent.sinks.jdbcSink.username | Sets the username for Oracle Database. | OE
agent.sinks.jdbcSink.password | Sets the password for Oracle Database. | OE
agent.sinks.jdbcSink.batchSize | Sets the batch size. | 10
agent.sinks.jdbcSink.channel | Sets the channel on the sink. | ch1
agent.sinks.jdbcSink.sqlDialect | Sets the SQL dialect; an Oracle-specific dialect is not provided, but the DERBY dialect is similar and should be set. | DERBY
agent.sinks.jdbcSink.driver | Sets the JDBC driver class. | oracle.jdbc.OracleDriver
agent.sinks.jdbcSink.sql | Sets the custom SQL for adding data to Oracle Database. | INSERT INTO OE.WLSLOG_COPY(msg) VALUES(${body:varchar})

The flume.conf is listed.

agent.channels = ch1
agent.sinks = oradb
agent.sources = sql-source

agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 1000000

agent.sources.sql-source.channels = ch1
agent.sources.sql-source.type = org.keedio.flume.source.SQLSource
# URL to connect to database
agent.sources.sql-source.connection.url = jdbc:oracle:thin:@127.0.0.1:1521:ORCL
# Database connection properties
agent.sources.sql-source.user = OE
agent.sources.sql-source.password = OE
agent.sources.sql-source.table = OE.WLSLOG
agent.sources.sql-source.columns.to.select = *
# Incremental column properties
agent.sources.sql-source.incremental.column.name = id
# Incremental value from which to start taking data (0 imports the entire table)
agent.sources.sql-source.incremental.value = 0
# Query delay: the query is run every configured number of milliseconds
agent.sources.sql-source.run.query.delay=10000
# The status file is used to save the last row read
agent.sources.sql-source.status.file.path = /var/lib/flume
agent.sources.sql-source.status.file.name = sql-source.status

agent.sinks.oradb.channel = ch1
agent.sinks.oradb.type = com.stratio.ingestion.sink.jdbc.JDBCSink
agent.sinks.oradb.connectionString = jdbc:oracle:thin:@127.0.0.1:1521:ORCL
agent.sinks.oradb.username=OE
agent.sinks.oradb.password=OE
agent.sinks.oradb.table = OE.WLSLOG_COPY
agent.sinks.oradb.batchSize = 10
agent.sinks.oradb.sqlDialect=DERBY
agent.sinks.oradb.driver=oracle.jdbc.OracleDriver
agent.sinks.oradb.sql=INSERT INTO OE.WLSLOG_COPY(msg) VALUES(${body:varchar})

Create the directory for the status file, which keeps track of the rows already streamed, and set its permissions to global (777). If the status file directory was already created for another application, remove the status file from the directory, as it is created and updated automatically by Flume.

sudo mkdir -p /var/lib/flume
sudo chmod -R 777 /var/lib/flume
cd /var/lib/flume
rm sql-source.status

Copy flume.conf to the Flume configuration directory.

cp flume.conf $FLUME_HOME/conf/flume.conf

Create the Flume environment file flume-env.sh from the template file.

cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh

Running Flume Agent

To stream data from one Oracle Database table to another, both of which could be in the same database and the same schema (as configured in this tutorial), run the Flume agent.

flume-ng agent --conf $FLUME_CONF -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console

Data gets streamed from one table to the other and the Flume agent continues to run. More detailed output from the Flume agent is as follows.
[root@localhost flume]# flume-ng agent --conf $FLUME_CONF -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console
15:24:15.782 [conf-file-poller-0] INFO o.a.f.n.PollingPropertiesFileConfigurationProvider - Reloading configuration file:/flume/apache-flume-1.6.0-bin/conf/flume.conf
15:24:16.075 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.095 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Created context for oradb: driver
15:24:16.099 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.100 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.101 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Added sinks: oradb Agent: agent
15:24:16.101 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.103 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.112 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.113 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.113 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.116 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.170 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:oradb
15:24:16.187 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Starting validation of configuration for agent: agent, initial-configuration: AgentConfiguration[agent] SOURCES: {sql-source={ parameters:{run.query.delay=10000, columns.to.select=*, connection.url=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, incremental.value=0, channels=ch1, table=WLSLOG, status.file.name=sql-source.status, type=org.keedio.flume.source.SQLSource, user=OE, password=OE, incremental.column.name=id, status.file.path=/var/lib/flume} }} CHANNELS: {ch1={ parameters:{capacity=1000000, type=memory} }} SINKS: {oradb={ parameters:{username=OE, sql=INSERT INTO WLSLOG_COPY(msg) VALUES(${body:varchar}), sqlDialect=DERBY, batchSize=10, driver=oracle.jdbc.OracleDriver, connectionString=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, table=WLSLOG_COPY, type=com.stratio.ingestion.sink.jdbc.JDBCSink, channel=ch1, password=OE} }}
15:24:16.989 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Created channel ch1
15:24:17.635 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Creating sink: oradb using OTHER
15:24:17.791 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Post validation configuration for agent AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[agent] SOURCES: {sql-source={ parameters:{run.query.delay=10000, columns.to.select=*, connection.url=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, incremental.value=0, channels=ch1, table=WLSLOG, status.file.name=sql-source.status, type=org.keedio.flume.source.SQLSource, user=OE, password=OE, incremental.column.name=id, status.file.path=/var/lib/flume} }} CHANNELS: {ch1={ parameters:{capacity=1000000, type=memory} }} SINKS: {oradb={ parameters:{username=OE, sql=INSERT INTO WLSLOG_COPY(msg) VALUES(${body:varchar}), sqlDialect=DERBY, batchSize=10, driver=oracle.jdbc.OracleDriver, connectionString=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, table=WLSLOG_COPY, type=com.stratio.ingestion.sink.jdbc.JDBCSink, channel=ch1, password=OE} }}
15:24:17.833 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Channels:ch1
15:24:17.859 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Sinks oradb
15:24:17.871 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Sources sql-source
15:24:17.872 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Post-validation flume configuration contains configuration for agents: [agent]
15:24:17.941 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Creating channels
15:24:18.496 [conf-file-poller-0] INFO o.a.f.channel.DefaultChannelFactory - Creating instance of channel ch1 type memory
15:24:18.697 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Created channel ch1
15:24:18.763 [conf-file-poller-0] INFO o.a.f.source.DefaultSourceFactory - Creating instance of source sql-source, type org.keedio.flume.source.SQLSource
15:24:18.768 [conf-file-poller-0] DEBUG o.a.f.source.DefaultSourceFactory - Source type org.keedio.flume.source.SQLSource is a custom type
15:24:18.804 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447273455419 lastSeenState:IDLE desiredState:START firstSeen:1447273455419 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@44b22e }
15:24:18.806 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete
15:24:18.995 [conf-file-poller-0] INFO org.keedio.flume.source.SQLSource - Reading and processing configuration values for source sql-source
15:24:19.161 [conf-file-poller-0] INFO o.k.flume.source.SQLSourceHelper - /var/lib/flume/sql-source.status correctly formed
15:24:21.808 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447273458806 lastSeenState:START desiredState:START firstSeen:1447273455419 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@44b22e }
15:24:21.811 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete
15:24:24.815 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447273461811 lastSeenState:START desiredState:START firstSeen:1447273455419 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@44b22e }
15:24:24.869 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete
15:24:27.875 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447273464869 lastSeenState:START desiredState:START firstSeen:1447273455419 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@44b22e }
15:24:27.878 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete
2015-11-11 15:24:29,794 (conf-file-poller-0) [INFO - org.hibernate.annotations.common.reflection.java.JavaReflectionManager.(JavaReflectionManager.java:66)] HCANN000001: Hibernate Commons Annotations {4.0.5.Final}
2015-11-11 15:24:30,685 (conf-file-poller-0) [INFO - org.hibernate.Version.logVersion(Version.java:54)] HHH000412: Hibernate Core {4.3.10.Final}
15:24:30.896 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447273467878 lastSeenState:START desiredState:START firstSeen:1447273455419 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@44b22e }
15:24:30.901 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete

Querying Oracle Database Table

Having made a copy of the table data, query the OE.WLSLOG_COPY table.

SELECT * FROM OE.WLSLOG_COPY;

The data streamed to the table gets listed. Exit SQL*Plus, as we shall be adding more data to the Oracle Database table OE.WLSLOG.

Streaming Data, not just Bulk Transferring

The difference between Apache Sqoop and Apache Flume is that while Apache Sqoop terminates after transferring data, Apache Flume continues to run, and as new data becomes available at the source it is streamed to the sink. To demonstrate, add three more rows of data with the following SQL script in SQL*Plus while the Flume agent is running.
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(8,'Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(9,'Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING');
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(10,'Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');

Exit SQL*Plus after adding the three rows of data. The Flume agent streams the three new rows. The Flume agent may be shut down if no more data is to be streamed. The shutdown metrics indicate that the Flume channel continued to poll the Flume source for new data and that the Flume sink continued to poll the Flume channel for new data.

Flume Channel Metrics
channel.event.take.attempt | channel.event.take.success | channel.event.put.attempt | channel.event.put.success
136 | 10 | 10 | 10

Flume Sink Metrics
sink.event.drain.attempt | sink.event.drain.sucess
140 | 10

Flume Source Metrics
sql-source.events_count
10

The shutdown metrics are as follows.

16:08:42.502 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Component type: CHANNEL, name: ch1 stopped
16:08:42.503 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.start.time == 1447275737379
16:08:42.504 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.stop.time == 1447276122502
16:08:42.505 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.capacity == 1000000
16:08:42.507 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.current.size == 0
16:08:42.517 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.put.attempt == 10
16:08:42.536 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.put.success == 10
16:08:42.541 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.take.attempt == 136
16:08:42.546 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.take.success == 10
16:08:42.568 [SinkRunner-PollingRunner-DefaultSinkProcessor] DEBUG org.apache.flume.SinkRunner - Polling runner exiting. Metrics:{ name:null counters:{runner.backoffs.consecutive=0} }
16:08:42.575 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Component type: SINK, name: oradb stopped
16:08:42.576 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.start.time == 1447275737411
16:08:42.576 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.stop.time == 1447276122575
16:08:42.577 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.batch.complete == 0
16:08:42.581 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.batch.empty == 12
16:08:42.593 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.batch.underflow == 2
16:08:42.756 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.connection.closed.count == 0
16:08:42.756 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.connection.creation.count == 0
16:08:42.756 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.connection.failed.count == 0
16:08:42.756 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.event.drain.attempt == 140
16:08:42.756 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: oradb. sink.event.drain.sucess == 10
2015-11-11 16:08:43,116 (PollableSourceRunner-SQLSource-sql-source) [INFO - org.hibernate.engine.transaction.internal.TransactionFactoryInitiator.initiateService(TransactionFactoryInitiator.java:62)] HHH000399: Using default transaction strategy (direct JDBC transactions)
16:08:44.528 [agent-shutdown-hook] INFO org.keedio.flume.source.SQLSource - Stopping sql source sql-source ...
16:08:44.529 [agent-shutdown-hook] INFO o.k.flume.source.HibernateHelper - Closing hibernate session
2015-11-11 16:08:44,570 (agent-shutdown-hook) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.stop(DriverManagerConnectionProviderImpl.java:281)] HHH000030: Cleaning up connection pool [jdbc:oracle:thin:@127.0.0.1:1521:ORCL]
16:08:44.765 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Component type: SOURCE, name: SOURCESQL.sql-source stopped
16:08:44.771 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SOURCE, name: SOURCESQL.sql-source. source.start.time == 1447275737405
16:08:44.773 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SOURCE, name: SOURCESQL.sql-source. source.stop.time == 1447276124765
16:08:44.791 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SOURCE, name: SOURCESQL.sql-source. average_throughput == 0
16:08:44.795 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SOURCE, name: SOURCESQL.sql-source. current_throughput == 0
16:08:44.798 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SOURCE, name: SOURCESQL.sql-source. events_count == 10
16:08:44.800 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SOURCE, name: SOURCESQL.sql-source. max_throughput == 1
16:08:44.808 [agent-shutdown-hook] INFO o.a.f.n.PollingPropertiesFileConfigurationProvider - Configuration provider stopping
16:08:44.834 [agent-shutdown-hook] DEBUG o.a.f.n.PollingPropertiesFileConfigurationProvider - Configuration provider stopped
16:08:44.846 [agent-shutdown-hook] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Lifecycle supervisor stopped
[root@localhost flume]#

Run a query on the WLSLOG_COPY table again; 10 rows get listed instead of the earlier 7. In this tutorial we streamed data from one Oracle Database table to another using Apache Flume.
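The incremental-id polling that makes this streaming (rather than one-shot copying) possible can be sketched in a few lines of Python. This is a conceptual illustration only: it uses an in-memory SQLite database as a stand-in for Oracle, and a plain variable in place of the sql-source.status file; the table and column names merely mirror the tutorial's WLSLOG and WLSLOG_COPY.

```python
import sqlite3

# In-memory stand-ins for the two Oracle tables used in the tutorial.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wlslog (id INTEGER PRIMARY KEY, msg TEXT)")
db.execute("CREATE TABLE wlslog_copy (msg TEXT)")
db.executemany("INSERT INTO wlslog VALUES (?, ?)",
               [(i, "Server state change %d" % i) for i in range(1, 8)])

last_id = 0  # plays the role of the sql-source.status file

def poll_once():
    """One polling cycle: copy rows with id greater than the last id seen,
    as the incremental.column.name / incremental.value configuration does."""
    global last_id
    rows = db.execute(
        "SELECT id, msg FROM wlslog WHERE id > ? ORDER BY id",
        (last_id,)).fetchall()
    for row_id, msg in rows:
        db.execute("INSERT INTO wlslog_copy (msg) VALUES (?)", (msg,))
        last_id = row_id  # persist progress so restarts do not re-copy rows
    return len(rows)

print(poll_once())  # first cycle copies the 7 existing rows -> 7
# Rows arriving later are picked up by the next cycle, never re-sent.
db.executemany("INSERT INTO wlslog VALUES (?, ?)",
               [(8, "Started WebLogic AdminServer"),
                (9, "Server state changed to RUNNING"),
                (10, "Server started in RUNNING mode")])
print(poll_once())  # second cycle copies only the 3 new rows -> 3
```

A real deployment differs only in mechanics: the Flume SQL source persists the last index in the status file and re-runs the cycle every run.query.delay milliseconds, which is why the agent in this tutorial streamed rows 8 through 10 without re-sending rows 1 through 7.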
