
Wiki Page: Dealing with the targets after a database upgrade - EM 12c

Introduction

This article describes the post-upgrade tasks to be performed when a database monitored by Enterprise Manager 12c is upgraded to a newer version (or downgraded to a lower one). It is based on an upgrade from 11.2.0.3 to 12.1.0.2 and covers the corresponding changes to be made in Enterprise Manager 12c.

Database changes

After the database upgrade (11.2.0.3 -> 12.1.0.2), the instance still appears as up in EM, because the agent keeps polling the running instances and listeners. For example, after upgrading the Maindb database we can still monitor it: the status shows healthy, and active sessions, top SQL and other information are all visible, as below.

Navigation: Targets -> Databases -> select "ORCL"

However, the database is still being monitored with the old values, i.e. the old 11gR2 Oracle Home. From the screenshots below we can see that the Oracle Home configured for the database is still the 11.2.0 one, even though we are now on 12.1.0.2. From the database home page, navigate to the monitoring configuration as shown below.

In the configuration of the target "ORCL" (the Maindb database), the Oracle Home path still points to the 11gR2 home, /u01/app/oracle/product/11.2.0/db_1. Change it to point to the new home, /u01/app/oracle/product/12.1.0/db_1.

After changing to the new home (12.1.0.2), test the connection by clicking "Test Connection". If the test succeeds, we can confirm that a session can be established. Click "Next"/"Submit" to validate and save the configuration with the correct Oracle Home.

These steps avoid misleading monitoring and ensure OEM knows the database is running from the new home.
Listener

As we saw above, the database target stayed in a healthy status even though it was being monitored with the old Oracle Home. In the case of the database listener, however, the target status shows as DOWN in EM12c. From the image below it is clear that the status in EM shows as down, so we will connect to the server, gather the listener configuration and update it with the new Oracle Home.

Connect to the actual server ORA-C1 and identify the Oracle Home and the network directory:

[oracle@ORA-C1 ~]$ ps -ef|grep tns
root        21     2  0  2015 ?        00:00:00 [netns]
oracle    3920  3902  0 12:06 pts/0    00:00:00 grep tns
oracle   29031     1  0 Jan23 ?        03:00:19 /u01/app/oracle/product/12.1.0/db_1//bin/tnslsnr LISTENER -inherit
[oracle@ORA-C1 ~]$ ps -ef|grep pmon
oracle    3979  3902  0 12:07 pts/0    00:00:00 grep pmon
oracle   10024     1  0 Jan23 ?        00:45:48 ora_pmon_DV02
[oracle@ORA-C1 ~]$ pwdx 10024
10024: /u01/app/oracle/product/12.1.0/db_1/dbs
[oracle@ORA-C1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/db_1
[oracle@ORA-C1 ~]$

Open the monitoring configuration of the listener target using the navigation below, and change the listener.ora directory and the Oracle Home accordingly.

Once the configuration changes are saved, the actual data will be retrieved with the next agent upload. The status, previously down, should now show as Up (green). If it still shows as down, upload the agent data manually, as follows:

[oracle@ORA-C1 ~]$ . oraenv
ORACLE_SID = [oracle] ? agent12c
The /u01/app/oracle/12cc_agent/core/12.1.0.5.0/bin/orabase binary does not exist
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base remains unchanged with value /u00/app/oracle
[oracle@ORA-C1 ~]$ emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/12cc_agent/agent_inst
Agent Log Directory    : /u01/app/oracle/12cc_agent/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/12cc_agent/core/12.1.0.5.0
Agent Process ID       : 21024
Parent Process ID      : 20909
Agent URL              : https://ORA-C1.oracle-ckpt.com:3872/emd/main/
Local Agent URL in NAT : https://ORA-C1.oracle-ckpt.com:3872/emd/main/
Repository URL         : https://ORA-CC.oracle-ckpt.com:4903/empbs/upload
Started at             : 2016-02-01 03:55:24
Started by user        : oracle
Operating System       : Linux version 2.6.32-300.7.1.el5uek (amd64)
Last Reload            : (none)
Last successful upload                       : 2016-03-30 12:14:35
Last attempted upload                        : 2016-03-30 12:14:35
Total Megabytes of XML files uploaded so far : 57.19
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 55.23%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2016-03-30 12:17:28
Last successful heartbeat to OMS             : 2016-03-30 12:17:28
Next scheduled heartbeat to OMS              : 2016-03-30 12:18:28
---------------------------------------------------------------
Agent is Running and Ready
[oracle@ORA-C1 ~]$ emctl upload agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD upload completed successfully
[oracle@ORA-C1 ~]$ emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/12cc_agent/agent_inst
Agent Log Directory    : /u01/app/oracle/12cc_agent/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/12cc_agent/core/12.1.0.5.0
Agent Process ID       : 21024
Parent Process ID      : 20909
Agent URL              : https://ORA-C1.oracle-ckpt.com:3872/emd/main/
Local Agent URL in NAT : https://ORA-C1.oracle-ckpt.com:3872/emd/main/
Repository URL         : https://ORA-CC.oracle-ckpt.com:4903/empbs/upload
Started at             : 2016-02-01 03:55:24
Started by user        : oracle
Operating System       : Linux version 2.6.32-300.7.1.el5uek (amd64)
Last Reload            : (none)
Last successful upload                       : 2016-03-30 12:18:07
Last attempted upload                        : 2016-03-30 12:18:07
Total Megabytes of XML files uploaded so far : 57.2
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 55.23%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2016-03-30 12:18:28
Last successful heartbeat to OMS             : 2016-03-30 12:18:28
Next scheduled heartbeat to OMS              : 2016-03-30 12:19:28
---------------------------------------------------------------
Agent is Running and Ready
[oracle@ORA-C1 ~]$

We will check the latest status of the listener target from EM12c. As the image above shows, after the configuration changes the listener target is up and in a healthy status.

Conclusion

We have seen how to update the database and listener monitoring configuration in Enterprise Manager 12c after a database upgrade, in order to avoid false monitoring.
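The `ps`/`pwdx` checks above can be partly automated. The helper below is a hypothetical sketch (the function name and the `<home>/bin/tnslsnr` path layout are assumptions, not part of the original article) that derives the Oracle Home actually in use by a running listener from its process command line, so it can be compared with what EM has configured:

```shell
# Sketch: derive the Oracle Home of the running listener from its process
# command line instead of trusting the EM target configuration.
# Assumes the binary lives at <ORACLE_HOME>/bin/tnslsnr.
home_from_tnslsnr() {
  # $1 is the full command line, e.g.
  #   /u01/app/oracle/product/12.1.0/db_1//bin/tnslsnr LISTENER -inherit
  local bin=${1%% *}          # first token: path to the tnslsnr binary
  bin=${bin%/bin/tnslsnr}     # strip the trailing /bin/tnslsnr
  echo "${bin%/}"             # drop a trailing slash left by '//'
}

# Typical usage against the live process table (commented out here):
#   home_from_tnslsnr "$(ps -o args= -C tnslsnr | head -1)"
home_from_tnslsnr "/u01/app/oracle/product/12.1.0/db_1//bin/tnslsnr LISTENER -inherit"
```

If the printed home differs from the Oracle Home shown in the target's monitoring configuration, the target needs the update described above.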

Blog Post: Oracle Solaris 11 Display Issue – /usr/openwin/bin/xdpyinfo

I was trying to install the database on Solaris 11.2 at one of our customers, but runInstaller failed with the following error:

oracle@soltest:/u01/sw_home$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 180 MB.   Actual 5779 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 6961 MB    Passed
Checking monitor: must be configured to display at least 256 colors
    >>> Could not execute auto check for display colors using command /usr/openwin/bin/xdpyinfo. Check if the DISPLAY variable is set.    Failed <<<<
Some requirement checks failed. You must fulfill these requirements before continuing with the installation,
Continue? (y/n) [n]
oracle@soltest:/u01/sw_home$

The DISPLAY variable was already pointing to the correct environment, and we had also executed the "xhost +" command from the root user, yet the issue persisted.

Cause: Solaris 11 introduces new user roles, and you can no longer log in to the system directly as "root". You must log in with the admin user configured during the initial install, so the DISPLAY environment is set up for that admin user. In this environment, "soladmin" is the OS user created during the initial install.

Solution: Execute the "xhost +" command from the admin user account, and it should work fine.

soladmin@soltest:~$ xhost +
access control disabled, clients can connect from any host
soladmin@soltest:~$

Verify the display:

oracle@soltest:/u01/sw_home$ export DISPLAY=:0.0
oracle@soltest:/u01/sw_home$ echo $DISPLAY
:0.0
oracle@soltest:/u01/sw_home$

If direct root login is not enabled, the "xhost +" command must be run from the admin account created during installation.

Thanks for reading.

regards,
X A H E E R
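The checks above can be wrapped into a small pre-flight function run as the oracle user before launching runInstaller. This is a minimal sketch (the function name is hypothetical): it verifies DISPLAY is set and that the X server answers, preferring the /usr/openwin/bin/xdpyinfo path that the installer itself probes on Solaris:

```shell
# Pre-flight sketch before ./runInstaller: is DISPLAY set, and does the
# X server accept connections? Function name is illustrative.
check_display() {
  if [ -z "$DISPLAY" ]; then
    echo "DISPLAY is not set - export DISPLAY=:0.0 (or your X server) first" >&2
    return 1
  fi
  # Prefer the Solaris openwin binary if present, fall back to PATH.
  local xdpy=/usr/openwin/bin/xdpyinfo
  [ -x "$xdpy" ] || xdpy=xdpyinfo
  if "$xdpy" >/dev/null 2>&1; then
    echo "DISPLAY=$DISPLAY looks usable"
  else
    echo "cannot talk to X server on DISPLAY=$DISPLAY - run 'xhost +' as the admin user" >&2
    return 1
  fi
}

# Usage (commented out): check_display && ./runInstaller
```

If the second message appears, re-run "xhost +" from the admin account as described above, then retry.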

Forum Post: Script Output Text Colour

I use a dark background as my default in TOAD, but this seems to cause problems with Script Output / Output tab. Unless I am missing something, there seems to be no way to change the text colour to anything but black, which makes the output unreadable. See below (see image). Can you add an option somewhere please?

Blog Post: How to Use GoldenGate Token with COLMAP?

Tokens are used to capture and store the environment variable values in the header of the GoldenGate trail record file. The trail file header contains a lot of information about the physical environment that produced the trail file and trail file contents. We can use this information to map token data to a target column […]
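As a concrete illustration of the idea above, the extract can define tokens with the TOKENS clause (values pulled from the environment via @GETENV) and the replicat can map them into target columns with @TOKEN inside COLMAP. The schema, table and column names below are made up for the example; only the TOKENS/@GETENV/@TOKEN syntax is GoldenGate's:

```text
-- Extract parameter file: capture environment values into the trail record
TABLE scott.emp, TOKENS (
  TK_HOST      = @GETENV('GGENVIRONMENT','HOSTNAME'),
  TK_COMMIT_TS = @GETENV('GGHEADER','COMMITTIMESTAMP')
);

-- Replicat parameter file: map the token data to target columns
MAP scott.emp, TARGET scott.emp_hist, COLMAP (
  USEDEFAULTS,
  src_host      = @TOKEN('TK_HOST'),
  src_commit_ts = @TOKEN('TK_COMMIT_TS')
);
```

The target table would carry two extra columns (src_host, src_commit_ts in this sketch) to receive the token values.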

Wiki Page: MAA – Creating Single instance Physical Standby for a RAC primary - 12c

Written by Nassyam Basha

Introduction

This article explains how to configure Data Guard between a RAC primary and a standalone (Oracle Restart) standby database, using easy and advanced methods to achieve the Oracle Maximum Availability Architecture with the Data Guard broker, and finally how to register the standby with the high availability services so it can be managed with the service control utility (srvctl). The article is written for 12c, but the procedure applies to earlier versions and releases as well, with a few changes to the compatibility parameter.

MAA Setup

MAA documents exist for earlier versions, but their Data Guard configuration uses the manual method. In this article we use the advanced methods available to build a standby for a RAC primary on a standalone server. As we know, there are various methods to refresh a standby from the primary, and the choice depends on the allowed downtime, the network transfer rate, space constraints and so on. Here we use the active duplicate method to build the standby and configure Data Guard with the broker in a few easy steps; the goal of this article is to show how simple the setup is.

There is usually a lot of confusion over Data Guard parameters when the primary is RAC and the standby is standalone. To lift that confusion I strongly recommend using the broker: it simplifies the setup and avoids misconfiguration. If your customer does not prefer the Data Guard broker, it is the DBA's role to explain how the broker eases things and what beautiful features 12c introduced, such as validating the configuration between primary and standby and forecasting whether a switchover will be successful.

Before the configuration, let's first understand the proposed configuration and rough architecture.
As said earlier, the primary is a RAC database on servers ORA-R2N1 and ORA-R2N2, the standalone standby database host is ORA-R2N3, and Data Guard will be managed using the Data Guard broker. Apart from that, the configuration and steps are the same on any operating system, except for small differences in copying files and in the commands used.

Configuration on the primary database prior to the duplicate

We have to be very careful before creating the standby database; the mandatory prerequisites are covered below one by one. The most important point is that the primary database must be in archivelog mode. Using the FRA is optional, but it is highly recommended to fulfil the MAA concept, so that Oracle manages archive logs, backups and flashback logs based on space considerations.

1. Archivelog mode - A production database will certainly be in archivelog mode; if you are using a test database, you can enable archiving from mount status with "alter database archivelog" and then open the database.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     14
Next log sequence to archive   15
Current log sequence           15
SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +DGFRA1
db_recovery_file_dest_size           big integer 5025M
SQL>

2. Force logging - A few tables/objects may have been created with NOLOGGING to avoid heavy redo generation, but with that behaviour the standby will not function. Hence ensure the primary database is in force logging mode, so that every change generates redo for the standby to recover.

SQL> select database_role,force_logging from v$database;

DATABASE_ROLE    FORCE_LOGGING
---------------- ---------------------------------------
PRIMARY          NO

SQL> alter database force logging;

Database altered.

SQL>

3.
Standby logfile groups - It is a best practice to create the standby logfiles on the primary, so that during the duplicate the standby logfiles are created automatically on the standby, provided valid LOG_FILE_NAME_CONVERT values are specified. If we do not create the standby logfile groups on the primary, we can still create them on the standby after the duplicate or restore. Ensure the standby logfile size is the same as the online logfile size, and that the number of standby logfile groups matches the number of online logfile groups.

SQL> alter database add standby logfile size 100m;

4. Listener & Oracle Net service - From 11gR2 we have the SCAN listener concept, which gives more flexibility in load balancing and removes the risk involved in adding/deleting nodes. Review the primary and standby listeners and the net services defined: for the RAC primary we are using the SCAN IP, whereas for the standby we are using a regular IP address.

[oracle@ora-r2n1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node ora-r2n1
[oracle@ora-r2n1 ~]$ srvctl status listener -n ora-r2n1
Listener LISTENER is enabled on node(s): ora-r2n1
Listener LISTENER is running on node(s): ora-r2n1
[oracle@ora-r2n1 ~]$ srvctl status listener -n ora-r2n2
Listener LISTENER is enabled on node(s): ora-r2n2
Listener LISTENER is running on node(s): ora-r2n2
[oracle@ora-r2n1 ~]$

5. Initialization parameters - Before the diagnostic destination was introduced, we had to create many directories (adump, bdump, udump, cdump and so on). Now the job is easy, and if we explicitly want a different location we can still assign one, as I have done for adump in the init configuration below. This article is for 12c, so do not forget about multitenant: if the primary database contains pluggable databases, we must add the parameter enable_pluggable_database=true. We could also skip adding the PDBs, but that does not meet MAA.
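Under the assumptions of this setup (+DG1/+DGFRA1 disk groups on the primary, +DATA/+FRA on the standby, DB_NAME orcl, DB_UNIQUE_NAME INDIA), a minimal standby init.ora along these lines might look like the sketch below; the parameter values are illustrative, not a definitive template:

```text
db_name=orcl
db_unique_name=INDIA
enable_pluggable_database=true   # required if the primary is a CDB
audit_file_dest=/u01/app/oracle/admin/INDIA/adump
db_create_file_dest=+DATA
db_recovery_file_dest=+FRA
db_recovery_file_dest_size=5025m
# Map primary disk group names to the standby's disk group names
db_file_name_convert='+DG1','+DATA'
log_file_name_convert='+DG1','+DATA','+DGFRA1','+FRA'
```

The two *_convert parameters are what let the duplicate relocate datafiles and online/standby logfiles when the disk group names differ between sites.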
If you take a look, my primary database disk group names and my standby disk group names are different; hence I have used DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT for the datafiles and the online/standby logfiles.

6. Password file - From 12c, a RAC primary database keeps its password file in a shared location rather than in local locations. Copy the password file from the primary to the standby host; there it can be placed locally, because no other instances need to use it.

[oracle@ora-r2n1 admin]$ srvctl config database -d canada
Database unique name: CANADA
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.1/db_1
Oracle user: oracle
Spfile:
Password file: +DG1/CANADA/orapwcanada
Domain: oracle-ckpt.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: CANADA
Database instances: CANADA1,CANADA2
Disk Groups: DG1,DGFRA1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
[oracle@ora-r2n1 admin]$ asmcmd
ASMCMD> pwcopy +DG1/CANADA/orapwcanada /home/oracle/orapwINDIA
copying +DG1/CANADA/orapwcanada -> /home/oracle/orapwINDIA
ASMCMD> exit
[oracle@ora-r2n1 admin]$ ls -ltr /home/oracle/orapwINDIA
-rw-r----- 1 oracle oinstall 7680 Mar 14 23:00 /home/oracle/orapwINDIA
[oracle@ora-r2n1 admin]$ scp /home/oracle/orapwINDIA ora-r2n3:/u01/app/oracle/product/12.1.0.1/db_1/dbs/
oracle@ora-r2n3's password:
orapwINDIA                                    100% 7680     7.5KB/s   00:00
[oracle@ora-r2n1 admin]$

[oracle@ora-r2n3 ~]$ cd $ORACLE_HOME/dbs
[oracle@ora-r2n3 dbs]$ hostname
ora-r2n3.oracle-ckpt.com
[oracle@ora-r2n3 dbs]$ ls -ltr orapwINDIA
-rw-r----- 1 oracle oinstall 7680 Mar 14 23:01 orapwINDIA
[oracle@ora-r2n3 dbs]$

7. Start the instance - Now we have the init file, the Oracle Net service and the password file in place. Create the adump directory to store audit files, then start the instance in nomount status.
[oracle@ora-r2n3 dbs]$ export ORACLE_SID=INDIA
[oracle@ora-r2n3 dbs]$ mkdir -p /u01/app/oracle/admin/INDIA/adump
[oracle@ora-r2n3 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Feb 21 00:12:35 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  705662976 bytes
Fixed Size                  2292384 bytes
Variable Size             297796960 bytes
Database Buffers          402653184 bytes
Redo Buffers                2920448 bytes
SQL>

8. Connectivity test - We are performing an active duplicate, so ensure you are able to connect to both the primary and the standby database using the service names, not with "/".

[oracle@ora-r2n3 dbs]$ rman target sys/oracle@canada auxiliary sys/oracle@india
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Mar 14 23:06:59 2016
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (DBID=1434250731)
connected to auxiliary database: ORCL (not mounted)
RMAN>

9. Script to create the standby - When managing large databases the duplicate is time-consuming, so run it from a script in the background.
vi /home/oracle/nassyam/restore_MAADG.sh

export ORACLE_SID=INDIA
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export NLS_DATE_FORMAT='YYYY-MM-DD:hh24:mi:ss'
date
echo "Begin restore"
rman target sys/oracle@canada auxiliary sys/oracle@india cmdfile=/home/oracle/nassyam/restore_MAADG.rcv log=/home/oracle/nassyam/restore_MAADG.log
date
echo "End restore"

exit
chmod 775 /home/oracle/nassyam/restore_MAADG.sh

vi /home/oracle/nassyam/restore_MAADG.rcv

run
{
ALLOCATE CHANNEL MAADG1 DEVICE TYPE disk;
ALLOCATE CHANNEL MAADG2 DEVICE TYPE disk;
ALLOCATE AUXILIARY CHANNEL MAADG3 DEVICE TYPE disk;
ALLOCATE AUXILIARY CHANNEL MAADG5 DEVICE TYPE disk;
duplicate target database for standby from active database;
RELEASE CHANNEL MAADG1;
RELEASE CHANNEL MAADG2;
RELEASE CHANNEL MAADG3;
RELEASE CHANNEL MAADG5;
}

exit
chmod 775 /home/oracle/nassyam/restore_MAADG.rcv

$ nohup /home/oracle/nassyam/restore_MAADG.sh &

10. Deploy the duplicate - As mentioned earlier, we launch the duplicate in the background.

[oracle@ora-r2n3 nassyam]$ ls -ltr
total 8
-rwxrwxr-x 1 oracle oinstall 363 Mar 15 20:37 restore_MAADG.sh
-rwxrwxr-x 1 oracle oinstall 357 Mar 15 20:41 restore_MAADG.rcv
[oracle@ora-r2n3 nassyam]$ nohup /home/oracle/nassyam/restore_MAADG.sh &
[1] 11406
[oracle@ora-r2n3 nassyam]$ nohup: appending output to `nohup.out'
[oracle@ora-r2n3 nassyam]$ tail -100f restore_MAADG.log
Recovery Manager: Release 12.1.0.1.0 - Production on Tue Mar 15 20:44:13 2016
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (DBID=1434250731)
connected to auxiliary database: ORCL (not mounted)

RMAN> run
2> {
3> ALLOCATE CHANNEL MAADG1 DEVICE TYPE disk;
4> ALLOCATE CHANNEL MAADG2 DEVICE TYPE disk;
5> ALLOCATE AUXILIARY CHANNEL MAADG3 DEVICE TYPE disk;
6> ALLOCATE AUXILIARY CHANNEL MAADG5 DEVICE TYPE disk;
7> duplicate target database for standby from active database;
8> RELEASE CHANNEL MAADG1;
9> RELEASE CHANNEL MAADG2;
10> RELEASE CHANNEL MAADG3;
11> RELEASE CHANNEL MAADG5;
12> }
13>
using target database control file instead of recovery catalog
allocated channel: MAADG1
channel MAADG1: SID=48 instance=CANADA1 device type=DISK
......
Starting Duplicate Db at 2016-03-15:20:44:28
contents of Memory Script:
{
   backup as copy reuse
   targetfile '+DG1/CANADA/orapwcanada' auxiliary format '/u01/app/oracle/product/12.1.0.1/db_1/dbs/orapwINDIA';
}
executing Memory Script
......
Starting backup at 2016-03-15:20:44:28
Finished backup at 2016-03-15:20:44:30
sql statement: alter system set control_files = ''+DATA/INDIA/CONTROLFILE/current.275.906583475'', ''+FRA/INDIA/CONTROLFILE/current.261.906583475'' comment= ''Set by RMAN'' scope=spfile
Starting restore at 2016-03-15:21:06:21
channel MAADG3: starting datafile backup set restore
channel MAADG3: using network backup set from service canada
channel MAADG3: restoring control file
channel MAADG3: restore complete, elapsed time: 00:00:07
output file name=+DATA/INDIA/CONTROLFILE/current.274.906584785
output file name=+FRA/INDIA/CONTROLFILE/current.260.906584787
Finished restore at 2016-03-15:21:06:30
......
Starting restore at 2016-03-15:21:06:38
channel MAADG3: starting datafile backup set restore
channel MAADG3: using network backup set from service canada
channel MAADG3: specifying datafile(s) to restore from backup set
channel MAADG3: restoring datafile 00001 to +DATA
channel MAADG5: starting datafile backup set restore
channel MAADG5: using network backup set from service canada
channel MAADG5: specifying datafile(s) to restore from backup set
channel MAADG5: restoring datafile 00003 to +DATA
channel MAADG3: restore complete, elapsed time: 00:00:35
channel MAADG3: starting datafile backup set restore
channel MAADG3: using network backup set from service canada
channel MAADG3: specifying datafile(s) to restore from backup set
channel MAADG3: restoring datafile 00004 to +DATA
channel MAADG5: restore complete, elapsed time: 00:00:35
channel MAADG5: starting datafile backup set restore
channel MAADG5: using network backup set from service canada
......
Finished restore at 2016-03-15:21:07:50
sql statement: alter system archive log current
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=10 STAMP=906584876 file name=+DATA/INDIA/DATAFILE/system.278.906584801
......
datafile 8 switched to datafile copy
input datafile copy RECID=16 STAMP=906584878 file name=+DATA/INDIA/DATAFILE/undotbs2.284.906584861
Finished Duplicate Db at 2016-03-15:21:08:31
released channel: MAADG1
released channel: MAADG2
released channel: MAADG3
released channel: MAADG5

11. Check the standby database status - After the successful duplicate, the standby database is created and is in mount status.

SQL> select db_unique_name,database_role,open_mode from v$database;

DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE
------------------------------ ---------------- --------------------
INDIA                          PHYSICAL STANDBY MOUNTED

SQL>

12.
Configure Oracle Restart for the standby - So far the duplicated database is a plain database; now we attach it to Oracle Restart so that it can be managed with srvctl. The standby instance must use an SPFILE in order to configure the Data Guard broker, so we first take the pfile created during the duplicate and create an spfile in ASM from it.

SQL> show parameter pfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string

SQL> create spfile='+DATA' from pfile;

File created.

SQL>

ASMCMD> pwd
+data/india/parameterfile
ASMCMD> ls -lt
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   MAR 15 21:00:00  Y    spfile.293.906585955
ASMCMD>

[oracle@ora-r2n3 dbs]$ mv initINDIA.ora initINDIA_15Mar.ora
[oracle@ora-r2n3 dbs]$ vi initINDIA.ora
[oracle@ora-r2n3 dbs]$ cat initINDIA.ora
spfile='+DATA/INDIA/PARAMETERFILE/spfile.293.906585955'
[oracle@ora-r2n3 dbs]$

SQL> shut immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.
Total System Global Area  705662976 bytes
Fixed Size                  2292384 bytes
Variable Size             297796960 bytes
Database Buffers          402653184 bytes
Redo Buffers                2920448 bytes
Database mounted.
SQL> show parameter pfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/INDIA/PARAMETERFILE/spfi
                                                 le.293.906585955
SQL>

Now the database is started with the spfile located in ASM. Next, add the database to Oracle Restart.
[oracle@ora-r2n3 dbs]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.1/db_1
[oracle@ora-r2n3 dbs]$ srvctl add database -d INDIA -o /u01/app/oracle/product/12.1.0.1/db_1 -m oracle-ckpt.com -n orcl -p +DATA/INDIA/PARAMETERFILE/spfile.293.906585955 -s OPEN -r PHYSICAL_STANDBY -y automatic -a DATA,FRA
[oracle@ora-r2n3 dbs]$ srvctl config database -d india
Database unique name: INDIA
Database name: orcl
Oracle home: /u01/app/oracle/product/12.1.0.1/db_1
Oracle user: oracle
Spfile: +DATA/INDIA/PARAMETERFILE/spfile.293.906585955
Password file:
Domain: oracle-ckpt.com
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Database instance: INDIA
Disk Groups: DATA,FRA
Services:
[oracle@ora-r2n3 dbs]$

After adding the database to the configuration, we can stop and start it using srvctl.

[oracle@ora-r2n3 dbs]$ srvctl status database -d india
Database is not running.
[oracle@ora-r2n3 dbs]$ srvctl stop database -d india
PRCC-1016 : INDIA was already stopped
[oracle@ora-r2n3 dbs]$ ps -ef|grep pmon
oracle    9311     1  0 18:18 ?        00:00:01 asm_pmon_+ASM
oracle    9388     1  0 18:18 ?        00:00:01 ora_pmon_CDBGG
oracle   14029     1  0 21:37 ?        00:00:00 ora_pmon_INDIA
oracle   14407 12859  0 21:47 pts/2    00:00:00 grep pmon
[oracle@ora-r2n3 dbs]$

SQL> shut immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ora-r2n3 dbs]$ srvctl start database -d india -o mount
[oracle@ora-r2n3 dbs]$ srvctl status database -d india
Database is running.
[oracle@ora-r2n3 dbs]$

13. Configure Data Guard - Surprised that we are using the broker without configuring any Data Guard parameters in the SPFILE?
Yes, that is true: we are going to use the advanced, simplified method, which avoids misconfiguring Data Guard. Enable the Data Guard broker by setting dg_broker_start to TRUE on both primary and standby.

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

SQL> alter system set dg_broker_start=true scope=both sid='*';

System altered.

SQL>

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY

On the RAC primary, place the Data Guard broker configuration files in an ASM disk group, so that the configuration files are shared between node 1 and node 2.

SQL> alter system set dg_broker_config_file1='+DG1/ORCL/dr1CANADA.dat' scope=both sid='*';

System altered.

SQL> alter system set dg_broker_config_file2='+DGFRA1/ORCL/dr2CANADA.dat' scope=both sid='*';

System altered.

SQL> alter system set dg_broker_start=true scope=both sid='*';

System altered.

SQL>

[oracle@ora-r2n1 admin]$ ps -ef|grep dmon
oracle    1323     1  0 21:55 ?        00:00:00 ora_dmon_CANADA1
oracle    1353 30647  0 21:56 pts/1    00:00:00 grep dmon
root     23119     1  0 18:14 ?        00:00:09 /u01/app/12.1.0.1/grid/bin/cssdmonitor

The whole Data Guard configuration takes only three commands: create the configuration, add the standby database, and enable the configuration.

[oracle@ora-r2n1 admin]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production
Copyright (c) 2000, 2012, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> create configuration haconfig as primary database is canada connect identifier is canada;
Configuration "haconfig" created with primary database "canada"
DGMGRL> add database india as connect identifier is india maintained as physical;
Database "india" added
DGMGRL> enable configuration
Enabled.
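The 12c broker feature mentioned in the introduction, checking in advance whether a switchover would succeed, is exposed through DGMGRL's VALIDATE DATABASE command. Against this configuration (database names as above), the checks would be run as follows; the command output, which reports readiness for switchover/failover and flags missing standby redo logs or other gaps, is omitted here:

```text
DGMGRL> validate database canada
DGMGRL> validate database verbose india
```

Running these on both databases before attempting a role change is a good habit, since the broker surfaces most misconfigurations at this stage rather than mid-switchover.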
DGMGRL> show configuration

Configuration - haconfig

  Protection Mode: MaxPerformance
  Databases:
  canada - Primary database
  india  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>

From the broker log we can see the bootstrap picking up the shared configuration files:

2016-03-15 22:31:05.292  >> Starting Data Guard Broker bootstrap <<
2016-03-15 22:31:05.293  Broker Configuration File Locations:
2016-03-15 22:31:05.293  dg_broker_config_file1 = "+DG1/ORCL/dr1CANADA.dat"
2016-03-15 22:31:05.294  dg_broker_config_file2 = "+DGFRA1/ORCL/dr2CANADA.dat"
2016-03-15 22:31:05.298  DMON: Attach state object
2016-03-15 22:31:05.366  DMON: Broker state reconciled, version = 0, state = 00000000
2016-03-15 22:31:05.366  DMON: Broker State Initialized

Summary

We have seen the step-by-step procedure for "MAA - Creating a single-instance physical standby for a RAC primary - 12c", using simple steps to achieve a flexible architecture, and how easy it is to configure Data Guard using the broker, which is highly recommended.

Blog Post: Oracle TNS-12535 and Dead Connection Detection

These days everything goes to the cloud or is colocated somewhere in a shared infrastructure. In this post I'll talk about sessions being disconnected from your databases, firewalls and dead connection detection.

Changes

We moved a number of 11g databases from one data centre to another.

Symptoms

Many of you have probably seen the following error in your database alert log, "TNS-12535: TNS:operation timed out", and if you haven't, you will definitely see it some day. Consider the following error from a database alert log:

Fatal NI connect error 12170.

  VERSION INFORMATION:
        TNS for Linux: Version 11.2.0.3.0 - Production
        Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
        TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
  Time: 12-MAR-2015 10:28:08
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12535
TNS-12535: TNS:operation timed out
    ns secondary err code: 12560
    nt main err code: 505
TNS-00505: Operation timed out
    nt secondary err code: 110
    nt OS err code: 0
  Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=49831))
Thu Mar 12 10:28:09 2015

This error indicates a timing issue between the server and the client. It's important to mention that these errors are resultant: they are informational, not the actual cause of the disconnect. Although the error can occur for a number of reasons, it is commonly associated with firewalls or slow networks.

Troubleshooting

The best way to understand what's happening is to build a histogram of session durations. In particular, we want to know whether the disconnects are sporadic and random or follow a specific pattern. To do so, parse the listener log and locate the line matching the client address from the alert log, in this example:

(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=49831))

Since the port is random, you might not find a matching record, or the match might be days apart.
Here’s what I found in the listener log:

12-MAR-2015 08:16:52 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCL)(CID=(PROGRAM=app)(HOST=apps01)(USER=scott))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=49831)) * establish * ORCL * 0

In other words – at 08:16 the user scott established a connection from host 192.168.0.10. If you compare both records, you get the duration of the session:

Established:  12-MAR-2015 08:16:52
Disconnected: Thu Mar 12 10:28:09 2015

Here are a couple of other examples:

alert log:
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=20620)) Thu Mar 12 10:31:20 2015
listener.log:
12-MAR-2015 08:20:04 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCL)(CID=(PROGRAM=app)(HOST=apps01)(USER=scott))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=20620)) * establish * ORCL * 0

alert log:
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=48157)) Thu Mar 12 10:37:51 2015
listener.log:
12-MAR-2015 08:26:36 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCL)(CID=(PROGRAM=app)(HOST=apps01)(USER=scott))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.10)(PORT=48157)) * establish * ORCL * 0

alert log:
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.11)(PORT=42618)) Tue Mar 10 19:09:09 2015
listener.log:
10-MAR-2015 16:57:54 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=root))(SERVICE_NAME=ORCL1)(SERVER=DEDICATED)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.11)(PORT=42618)) * establish * ORCL1 * 0

As you may have noticed, the errors follow a very strict pattern – each session gets disconnected exactly 2 hrs 11 mins after it was established. Cause Given the repetitive behaviour of the issue, and the fact that it happened for multiple databases and application servers, we can conclude that it’s definitely a firewall issue. The firewall recognizes the TCP protocol, keeps a record of established connections, and also recognizes TCP connection closure packets (TCP FIN packets).
However, sometimes a client may abruptly end communication without properly closing the endpoints by sending a FIN packet, in which case the firewall will not know that the endpoints will no longer use the opened channel. To resolve this problem, the firewall imposes a blackout on connections that stay idle for a predefined amount of time. The only issue with a blackout is that neither of the sides is notified. In our case the firewall was disconnecting idle sessions after around 2 hrs of inactivity. Solution The solution on the database server side is to use the Dead Connection Detection (DCD) feature. DCD detects when a connection has terminated unexpectedly and flags the dead session so that PMON can release the resources associated with it. DCD sets a timer when a session is initiated, and when the timer expires, SQL*Net on the server sends a small 10-byte probe packet to the client to make sure the connection is still active. If the client has terminated unexpectedly, the server gets an error, the connection is closed, and the associated resources are released. If the connection is still active, the probe packet is discarded and the timer is reset. To enable DCD you need to set SQLNET.EXPIRE_TIME in the sqlnet.ora of your RDBMS home:

cat >> $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.EXPIRE_TIME=10

This sets the timer to 10 minutes. Remember that sessions need to reconnect for the change to take effect – it won’t work for existing connections. Firewalls are becoming smarter and can now inspect packets even more deeply. Make sure the following settings are also disabled:

– SQLNet fixup protocol
– Deep Packet Inspection (DPI)
– SQLNet packet inspection
– SQL Fixup

I had a similar issue with Data Guard already – read more here: Smart Firewalls. How to test Dead Connection Detection You might want to test or make sure that DCD really works.
You’ve got multiple options here – an Oracle SQL*Net client trace, an Oracle SQL*Net server trace, sniffing the network with a packet analyzer, or using strace to trace the server process. I used strace, since I had access to the database server and it is non-intrusive.

1. Establish a connection to the database through SQL*Net.

2. Find the process number for your session:

SQL> select SPID from v$process where ADDR in (select PADDR from v$session where username='SVE');

SPID
------------------------
62761

3. Trace the process:

[oracle@dbsrv ~]$ strace -tt -f -p 62761
Process 62761 attached - interrupt to quit
11:36:58.158348 --- SIGALRM (Alarm clock) @ 0 (0) ---
11:36:58.158485 rt_sigprocmask(SIG_BLOCK, [], NULL, 8) = 0
....
11:46:58.240065 --- SIGALRM (Alarm clock) @ 0 (0) ---
11:46:58.240211 rt_sigprocmask(SIG_BLOCK, [], NULL, 8) = 0
...
11:46:58.331063 write(20, "\0\n\0\0\6\20\0\0\0\0", 10) = 10
...

What I did was attach to the process, simulate some activity at 11:36, and then leave the session idle. Ten minutes later, the server process sent an empty packet to the client to check whether the connection was still alive. Conclusion The errors in the alert log disappeared after I enabled DCD. Make sure to enable DCD if you host your databases in a shared infrastructure, or if there are firewalls between your database and application servers. References:

How to Check if Dead Connection Detection (DCD) is Enabled in 9i, 10g and 11g (Doc ID 395505.1)
Alert Log Errors: 12170 TNS-12535/TNS-00505: Operation Timed Out (Doc ID 1628949.1)
Resolving Problems with Connection Idle Timeout With Firewall (Doc ID 257650.1)
Dead Connection Detection (DCD) Explained (Doc ID 151972.1)

The post Oracle TNS-12535 and Dead Connection Detection appeared first on Svetoslav Gyurov Oracle blog.
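The session-duration matching described earlier in this post can be scripted. Below is a minimal sketch; the function name is mine, and the regular expressions assume the exact listener.log and alert log line formats shown in the excerpts above – adjust them if your logs differ:

```python
import re
from datetime import datetime

# Patterns follow the listener.log "establish" records and the alert log
# "Client address" lines quoted above; adapt as needed for your environment.
LISTENER_RE = re.compile(
    r"(\d{2}-[A-Z]{3}-\d{4} \d{2}:\d{2}:\d{2}) \* .*"
    r"\(ADDRESS=\(PROTOCOL=tcp\)\(HOST=([\d.]+)\)\(PORT=(\d+)\)\) \* establish")
ALERT_RE = re.compile(
    r"\(ADDRESS=\(PROTOCOL=tcp\)\(HOST=([\d.]+)\)\(PORT=(\d+)\)\)\s*"
    r"(\w{3} \w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2} \d{4})")

def session_durations(listener_log, alert_log):
    """Match alert-log disconnects to listener establishes by (host, port)."""
    established = {}
    for m in LISTENER_RE.finditer(listener_log):
        ts, host, port = m.groups()
        established[(host, port)] = datetime.strptime(ts, "%d-%b-%Y %H:%M:%S")
    durations = []
    for m in ALERT_RE.finditer(alert_log):
        host, port, ts = m.groups()
        start = established.get((host, port))
        if start is not None:
            end = datetime.strptime(ts, "%a %b %d %H:%M:%S %Y")
            durations.append((host, port, end - start))
    return durations
```

Feeding it the full logs and bucketing the resulting durations quickly shows whether the disconnects cluster around one value (a firewall timeout) or are spread randomly (network trouble).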

Wiki Page: Enterprise Manager

This section contains articles about the installation, configuration and use of Oracle's Enterprise Manager.

Wiki Page: Enterprise Manager Hybrid Cloud

This section contains articles about the Hybrid Cloud version of Oracle's Enterprise Manager.

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part I

by Porus Homi Havewala Oracle Enterprise Manager 12c Release 5, released in June 2015, allows an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Public Cloud (OPC) database servers and WebLogic servers – this is the first time this has been made possible. It opens up an entire new world of hybrid cloud management, where you can use an on-premise Enterprise Manager to monitor and manage both on-premise and cloud databases, compare database, WLS and server configurations, apply compliance standards equally, and clone Oracle 12c PDBs and Java applications from on-premise to a cloud CDB and back again – all via the on-premise EM console. The most powerful feature of Release 5 is that it can manage the Hybrid Cloud from a single management interface. So you can have your databases and WLS servers that are on the company premises, as well as your Oracle Public Cloud (OPC) based databases and WLS servers, all managed via one central on-premise Enterprise Manager installation. Normal EM12c agents are installed on your on-premise servers, and special hybrid cloud agents are installed (via the push mechanism of Enterprise Manager) on your cloud servers. The hybrid cloud agents work through a Hybrid Gateway – one of your on-premise EM12c agents that has been designated as such, acting as a specialized SSH tunnel of sorts. Once the agents start talking to the OMS, you are able to see all your databases, whether on-premise or in the cloud – and clone PDBs easily to and from the cloud via Enterprise Manager, as seen in the screenshot below. Besides this, you can also use the other features of the Enterprise Manager packs, such as Diagnostics, Tuning, Database Lifecycle Management (DBLM) and so on, in the Hybrid Cloud.
For example, as part of DBLM, you can perform configuration comparisons between on-premise and cloud databases, WLS servers or host servers, and also compliance checks. In this way you can make sure your entire enterprise cloud – on-premise as well as public cloud – is compliant and adheres to configuration guidelines, with controlled configuration deviations. In this article series, we will look at the steps for setting up the Hybrid Cloud via Enterprise Manager. We will go through the pre-steps, and then install a Hybrid Cloud Agent. Next, we will go through the steps of configuration management and compliance for the Hybrid Cloud, and finally we will test out the cloning of PDBs back and forth from the cloud. Pre-setup steps for the Hybrid Cloud include setting up one of your OMS agents as the Hybrid Gateway agent, creating SSH keys for the OMS server, and creating a Named Credential with SSH key credentials for the hybrid cloud. We will first go through the pre-setup steps. Pre-setup: For the Hybrid Cloud capability, the following pre-steps are required on the EM 12.1.0.5 installation. Pre-Step 1: First, register any agent in your local (on-premise) EM12c installation as the Hybrid Gateway agent. Preferably, choose an agent which is not your main OMS server agent, i.e. an agent on one of your target servers that is not too heavily loaded with its own targets. The Enterprise Manager command line interface (emcli) is used to register the Hybrid Gateway agent. (For more information on emcli, please refer to the documentation at http://docs.oracle.com/cd/E24628_01/em.121/e17786/toc.htm ) Log in as the oracle user, and move to where emcli is installed on your agent or OMS server. Log in first to emcli as sysman, and then issue the register command:

./emcli login -username=sysman
./emcli register_hybridgateway_agent -hybridgateway_agent_list='em12c.sainath.com:3872'

This registers the agent as a hybrid gateway agent.
There are ways to register an additional agent as a slave or secondary agent that can take over the monitoring if the master or primary gateway agent goes down, but for now we will set up only one agent in this example run. Important Note: If you have a lot of cloud database servers, don’t use only one gateway agent to communicate with all of them; instead, set up multiple gateway agents to talk to different cloud servers. For example, one gateway agent can be used to talk to 5-10 cloud servers, another gateway agent can be used to talk to other cloud servers, and so on. The architecture in this case is very important, since it needs to be set up in a well-planned manner. The relationship between which gateway agent talks to which hybrid cloud agent is set up when the hybrid agent is installed on each cloud server, as we will see later on in this article series. Pre-Step 2: Next, generate SSH keys for the OMS server as follows. Log in to the OMS host as the oracle Unix user, and type:

ssh-keygen -t rsa

When prompted for a passphrase, just press Enter. Important: do not use a passphrase. These keys will be used in a Named Credential in Enterprise Manager, and a passphrase is not supported for use with SSH keys in Named Credentials. The ssh-keygen utility has now generated two files in the .ssh sub-directory under the oracle Unix user’s home, as seen below:

id_rsa
id_rsa.pub

We will continue the Hybrid Cloud setup using Enterprise Manager in Part II of this article series. Part II is here.

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part II

by Porus Homi Havewala Oracle Enterprise Manager 12c Release 5, released in June 2015, allows an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Cloud database servers. In this article series, we are looking at the steps for setting up the Hybrid Cloud via Enterprise Manager. We will go through the pre-steps, and then install a Hybrid Cloud Agent. Next, we will follow the steps of configuration management and compliance for the Hybrid Cloud, and finally we will test out the cloning of PDBs back and forth from the cloud. Pre-setup steps for the Hybrid Cloud include setting up one of your local Enterprise Manager agents (either on the OMS server or a target server, the latter is preferred) as the Hybrid Gateway agent, creating SSH keys for the OMS server, and creating a Named Credential with SSH key credentials for the hybrid cloud. In the first part of this article series, we looked at the first two pre-steps. We will now continue. Pre-Step 3: Create a Named Credential for use with the Hybrid Cloud as follows. Log in to the Enterprise Manager console as SYSMAN. Select Setup.. Security.. Named Credentials. Create a Named Credential “NC_OPC_DBCS”, with “Authenticating Target Type” set to “Host”, “Credential Type” set to “SSH Key Credentials”, and “Scope” set to “Global”. If “SSH Key Credentials” does not appear in the drop-down list of Credential Type, you need to run the workaround listed in My Oracle Support (MOS) Note 1640062.1; this is a PL/SQL block to be executed as SYSMAN in the OMS repository. On the create screen, in the private and public key fields, copy and paste the appropriate SSH keys from the Oracle home’s .ssh directory. This is shown in the screenshot below. Use the username “oracle”. Don’t test the Named Credential, just save it.
Pre-Step 4: When running the Enterprise Manager deployment procedure “Clone to Cloud” that we will see later on, the “Secure Copy Files” step may fail with the error message "rsync: Failed to exec ssh: Permission denied (13)” in certain cases where the local OMS server has been set up with “enforcing” SELINUX security. A quick workaround is to change SELINUX to “permissive” in the file /etc/selinux/config, as the root user on the OMS server:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Reboot your system. After the reboot, confirm that the getenforce command returns Permissive:

[root@em12c ~]# getenforce
Permissive

Note that if your company mandates that SELINUX be set to “enforcing” for security reasons, you will need to configure SELINUX to allow rsync to be executed from the agent (i.e. via script). This is more complicated than the quick workaround above, and as such is beyond the scope of this article. You will need to work with your security administrator for the correct steps to set up SELINUX to allow individual commands such as rsync. Other Requirements The other requirements are as follows, and these are straightforward. You will need an account (either trial or production) for the Oracle Public Cloud (OPC) at cloud.oracle.com, for the Database Cloud Service. You will have created an Oracle database service (a server with an Oracle database) on the Oracle Public Cloud in advance, and it should be up and running. You will need the IP address of this cloud database server, which will be used in this Hybrid Cloud setup.
You will also have set up PuTTY access from your laptop to the cloud database server. For example, the following screenshot shows an Oracle Public Cloud database server that has been created. The IP address is displayed, but blanked out for privacy reasons. Note that we have created the “Extreme Performance” type of Enterprise Edition database, so that the Enterprise Manager management packs such as the Diagnostics, Tuning and Database Lifecycle Management (DBLM) packs can be used on the cloud database. The type of database is also displayed in the screenshot below. You can then install the Hybrid Cloud Agent as follows. Initial Steps Log in as the oracle Unix user to the EM12c OMS server, and change to the .ssh directory under the oracle Unix home:

cd .ssh

Open the file “id_rsa.pub” in this directory using vi, and copy the text to the clipboard. This is the OMS server public key, generated during the pre-setup steps (as explained in the pre-setup instructions). From your laptop, open an SSH session using PuTTY to the OPC database server, and as the oracle Unix user, perform these steps:

cd ~/.ssh
vi authorized_keys

In this file, paste the OMS public key (make sure there are no line breaks), and save the file. Then, from a Unix session on the OMS server, ssh to the Oracle Public Cloud database server using its IP address, and accept the connection when asked. We continue the Hybrid Cloud setup using Enterprise Manager in Part III of this article series.
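The paste into authorized_keys must end up as a single unbroken line, as noted above. A small sketch (the helper name is mine) that strips line breaks a terminal may have introduced, assuming only newlines were inserted into the copied key:

```python
def normalize_pubkey(text):
    """Re-join an OpenSSH public key that picked up line breaks when pasted.

    Assumes the wrap only inserted newlines (did not replace spaces), which
    is what typically happens when copying out of a narrow terminal window.
    """
    key = text.replace("\r", "").replace("\n", "").strip()
    if not key.startswith("ssh-"):
        raise ValueError("does not look like an OpenSSH public key")
    return key + "\n"
```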

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part III

Oracle Enterprise Manager 12c Release 5, released in June 2015, allows an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Cloud database servers. In this article series, we are looking at the steps for setting up the Hybrid Cloud via Enterprise Manager. We have gone through the pre-steps, and the next step is to install a Hybrid Cloud Agent. After this is done, we will follow the steps of configuration management and compliance for the Hybrid Cloud, and finally we will test out the cloning of PDBs back and forth from the cloud. In the previous part of this article series, we looked at the pre-steps and other requirements for setting up the hybrid cloud, such as having an Oracle Public Cloud (OPC) database server ready, and setting up the SSH connectivity from the OMS server to the OPC database server. We will now continue with the actual installation of the cloud agent. Installing the Cloud Agent Log in to the Enterprise Manager console as SYSMAN/welcome1. Select Setup.. Add Target.. Add Targets Manually. Select “Add Host Targets” and click on “Add Hosts”. In the Add Host Targets screen, type in the IP address of the Oracle Public Cloud database server. Select the platform as Linux x86-64 (the only one that can be chosen, at the time of writing, for the hybrid cloud). Click on Next. Since you have typed in an IP address, a warning appears: ignore this warning about not using a fully qualified host name. The Installation Details screen appears. Note about using the IP address: If you want to use a full host name for this IP address, you will need to add it to /etc/hosts on the OMS server as well as on the Oracle Public Cloud database server in advance. Otherwise, the “emctl secure agent” command, which is run by Enterprise Manager at the end of the agent install procedure, will not work. On this page, enter the installation base directory for the agent.
Adhering to cloud database standards, which are derived from the earlier well-known Optimal Flexible Architecture (OFA) standards, this is put under /u01/app/oracle/product, where other Oracle database software was installed when the OPC database server was created. As the Named Credential, select “NC_OPC_DBCS”. This named credential uses SSH key credentials, and was pre-created with the SSH private and public keys of the OMS server. Important: You need to expand Optional Details, and tick “Configure Hybrid Cloud Agent”. Select the agent, which is the only agent on the OMS server. Note that if this is not ticked and the Hybrid Cloud agent is not selected, the cloud agent install on the OPC database server will proceed but will ultimately fail. Click on Next. The Review page appears. Click on “Deploy Agent”. The procedure starts. The first step is the agent’s “Initialization” as seen below, followed by remote prerequisite checks and then the actual deployment. While the deployment is in progress, you can monitor it if you SSH to the Oracle Public Cloud database server. Change directory to the agent home. You will see that the size of the cloud agent that is installed is considerably smaller than the normal agent size, as seen below:

[oracle@AHUTESTSERVER]$ cd /u01/app/oracle/product/agentHome
[oracle@AHUTESTSERVER agentHome]$ du -hs .
248M    .

The “Remote Prerequisite” step also completes. A warning is displayed about the root privileges required to run the root.sh script at the end of the agent install. Ignore this warning, and continue with the deployment using the menu. The agent deployment is successful. You now need to run the root.sh script manually. You can use PuTTY to do this. Log in as the opc Unix user to the cloud database server, and then “sudo -s” to work as root, as seen below. As root, run the root.sh script (copy and paste the full path and name from the deployment summary screen).
Note that this is important for proper agent functioning. You can also SSH to the Oracle Public Cloud server from the OMS server, and check the status of the agent.

[oracle@em12c ~]$ ssh
Authorized uses only. All activity may be monitored and reported.

On the Oracle Public Cloud server, check where the agent has been installed via the first command below, and then move to the bin directory that is displayed:

[oracle@AHUTESTSERVER ~]$ cat /etc/oragchomelist
/u01/app/oracle/product/agentHome/core/12.1.0.5.0:/u01/app/oracle/product/agentHome/agent_inst
[oracle@AHUTESTSERVER ~]$ cd /u01/app/oracle/product/agentHome/agent_inst/bin

Check the status of the agent via the following command:

[oracle@AHUTESTSERVER bin]$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version          : 12.1.0.5.0
OMS Version            : 12.1.0.5.0
Protocol Version       : 12.1.0.1.0
Agent Home             : /u01/app/oracle/product/agentHome/agent_inst
Agent Log Directory    : /u01/app/oracle/product/agentHome/agent_inst/sysman/log
Agent Binaries         : /u01/app/oracle/product/agentHome/core/12.1.0.5.0
Agent Process ID       : 18092
Parent Process ID      : 18032
Agent URL              : https:// :3872/emd/main/
Local Agent URL in NAT : https:// :3872/emd/main/
Repository URL         : https:// :1748/empbs/upload
Started at             : 2015-06-05 09:54:22
Started by user        : oracle
Operating System       : Linux version 2.6.39-400.109.1.el6uek.x86_64 (amd64)
Last Reload            : (none)
Last successful upload : 2015-06-05 09:55:47
Last attempted upload  : 2015-06-05 09:55:47
Total Megabytes of XML files uploaded so far : 0.19
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 68.90%
Collection Status      : Collections enabled
Heartbeat Status       : Ok
Last attempted heartbeat to OMS  : 2015-06-05 09:59:27
Last successful heartbeat to OMS : 2015-06-05 09:59:27
Next scheduled heartbeat to OMS  : 2015-06-05 10:00:28
---------------------------------------------------------------
Agent is Running and Ready

The Hybrid Agent is up and running – and its XML files are being uploaded via the hybrid gateway back to the on-premise Enterprise Manager repository. We will continue the Hybrid Cloud setup using Enterprise Manager in Part IV of this article series. The next step will be to discover the cloud database and listener in Enterprise Manager.
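The emctl status output shown above lends itself to scripted health checks, since every field follows a "Field : value" layout. A minimal parsing sketch (the function names are mine):

```python
def parse_emctl_status(output):
    """Turn the 'Field : value' lines of emctl status agent into a dict."""
    info = {}
    for line in output.splitlines():
        key, sep, value = line.partition(" : ")
        if sep:  # skip banner, separator, and status lines without a colon
            info[key.strip()] = value.strip()
    return info

def agent_healthy(info):
    """True if the heartbeat is OK and collections are enabled."""
    return (info.get("Heartbeat Status") == "Ok"
            and info.get("Collection Status", "").startswith("Collections enabled"))
```

Such a check can be run over SSH from the OMS side against many agents at once, instead of eyeballing each transcript.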

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part VI

by Porus Homi Havewala Since Release 5 of June 2015, Oracle Enterprise Manager 12c (and now 13c, released in December 2015) has allowed an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Cloud database servers. In this article series, we are looking at the steps for setting up and using the Hybrid Cloud via Enterprise Manager. In the previous parts of this article series, we went through the pre-steps and then installed a Hybrid Cloud Agent. We discovered the databases and started to monitor the hybrid cloud. We then looked at configuration comparisons and configuration management for the Hybrid Cloud, as well as compliance standards enforcement. We started to discuss the cloning of PDBs back and forth from the cloud, and said that the PSU patch may need to be applied on the cloud database. However, please read the following note, “Preserving the Agent Home when patching the Cloud Database”, before you start the patching process on the cloud database. Note: Preserving the Agent Home when patching the Cloud Database: If you have installed the Enterprise Manager agent under /u01/app/oracle (the Oracle base directory), and you then use the cloud database patching menu to apply a later PSU to the database, the agent home will be moved to /u01/app.ORG/oracle as part of the database patching process. You can either move the agent subdirectory back manually after the patching is completed, or use the following workaround before you start the patching process: Log in to the cloud database server as the oracle Unix user, and create the file “/var/opt/oracle/patch/files_to_save.ora”. In this file, add the agent directory (full path) to preserve. Then, log in to the cloud database server as the opc Unix user.
Issue the following commands:

sudo -s
cd /var/opt/oracle/patch
vi dbpatchm.cfg

In this root-owned file, locate these lines:

# create /var/opt/oracle/patch/files_to_save.ora with full path of directory or
# files to preserve any special files you may have in your /u01/app directory.
# set this to yes, if you have files_to_save.ora
special_files="no"

Change the line special_files="no" to special_files="yes", and save the file. After you follow these steps, the next time you do any patching from the cloud database patching menu, the agent directory will be preserved in its original location, and you will not need to move it back after the patching has completed. The Actual Cloning Log in as SYSMAN to the Enterprise Manager console. You can now see both on-premise databases (ahuprod) and cloud databases (OPCTST) in a single pane of glass, as can be seen below in Targets.. Databases. Right-click on the on-premise PDB “SALES”, which is in the ahuprod CDB. Now for the actual cloning, which has been built into the menu. Select Oracle Database.. Cloning.. Clone to Oracle Cloud. The Source and Destination screen appears. Fill in the credentials for the on-premise side, as well as the cloud side. For the cloud “Database Host credential”, be sure to use the “NC_OPC_DBCS” credential, which was pre-created with the local (OMS) server private and public keys. For the cloud database “SYSDBA Container Database credential”, create a new Named Credential using the sys password that you specified when you previously created the cloud database. Enter the name of the new PDB on the cloud side as “SALESDEV”, since this will be a development pluggable database. Enter a more descriptive display name if you wish. Also select a user name and password for the PDB administrator of the new PDB. At this point, you could click on “Clone” to start the procedure. Instead of that, click on the “Advanced” button. This switches to Advanced mode, and a multi-page workflow appears.
Click on Next. The “Clone to Oracle Cloud Configuration” page appears. As per cloud database standards – which are based on the popular Optimal Flexible Architecture (OFA) standards – the PDB data files will reside on /u02, in the /u02/app/oracle/oradata/ directory. You can also enter storage limits if you wish. Click on Next. This page shows the importance of the Advanced mode, since it is possible to select a Masking Definition if one has been created for the source database. Masking is seamlessly integrated with Enterprise Manager. This makes sure that confidential data is masked when being cloned from an on-premise production database to a cloud development database. In the Schedule screen, you can select a future date or run immediately. The Review screen then appears. Click on Clone. The procedure starts to execute. The clone completes successfully in under 14 minutes (depending on the PDB size and the internet connectivity), as can be seen below. You can examine each step and its output if you wish. Note that the “rsync” Unix command is used by the procedure to fast-copy files to the cloud. When you move to Targets.. Databases, you can now see the new SALES development PDB under the cloud OPCTST CDB. Drill down to the SALES development PDB home page. The Oracle Cloud icon is visible at the top of the page. Note that database performance details are not seen – since in this case, when the cloud database OPCTST was initially created, the plain Enterprise Edition database was selected instead of Extreme or High Performance. The plain EE database in the cloud does not have the Diagnostics or Tuning pack licenses that are required for these features. Clone from Oracle Cloud Now, you can try the opposite. Go to the cloud PDB, right-click, and select Clone from Oracle Cloud. You are now bringing the PDB back on-premise. Select the correct credentials. Select “Advanced” mode once again. Go through the next few screens as before.
Call the new on-premise PDB “SALESSTG”; it will be used as a staging database. The cloning of the PDB from the cloud now starts, and completes successfully in about 9 minutes (depending on factors such as the PDB size and internet connectivity). After the cloning completes, you can see the new SALES staging PDB under the on-premise ahuprod CDB. In the next and final Part VII of this article series, we will see how we can also unplug a PDB from on-premise, and plug it into the Oracle Cloud database.

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part V

by Porus Homi Havewala Oracle Enterprise Manager 12c Release 5, released in June 2015, allows an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Cloud database servers. In this article series, we are looking at the steps for setting up and using the Hybrid Cloud via Enterprise Manager. In the previous parts of this article series, we went through the pre-steps and then installed a Hybrid Cloud Agent. We discovered the databases and started to monitor the hybrid cloud. We then looked at configuration comparisons and configuration management for the Hybrid Cloud. We will now look at compliance standards enforcement, and finally look at the cloning of PDBs back and forth from the cloud. Compliance Standards and enforcement: Let us set up common compliance checks for our on-premise and cloud databases. Select Enterprise.. Compliance.. Library from the Enterprise Manager console. On the Compliance Library screen that appears, move to the “Compliance Standards” tab. Select the out-of-box compliance standard “Configuration Best Practices for Oracle Database” (as an example), and click on “Associate Targets” as seen below. On the Association screen, add the cloud database as well as the on-premise databases by clicking on “Add” and then selecting these databases in a multi-select action. When you proceed to save the associations to this standard, the following informational screens appear. In the same way, we can associate other supplied compliance standards, or your own customized ones, to the on-premise and cloud databases. After a few minutes, to see the results of the compliance check, select Enterprise.. Compliance.. Results from the Enterprise Manager console. This shows the average score of each compliance standard that you have associated to the databases. You can see from the results that the basic security configuration has some critical violations, 11 in total. Drill down on these.
Out of the 11 violations, three critical violations are for the cloud database. Drill down again into these. The three critical security violations for the cloud database are seen below. The first is easily understandable – excessive privileges have been granted, and these can be revoked. What about the other violations? To understand or examine these violations in detail, select Enterprise.. Compliance.. Library from the Enterprise Manager console. On the Compliance Library screen that appears, move to the “Compliance Standard Rules” tab. Search for the rule you want to understand, such as the second one in the list above, “Password Complexity Verification Function Usage”, and drill down on it. The full description of the rule appears. After understanding the rule, you can now find the solution to satisfy this compliance check. In this way, you have enforced the same compliance standard checks on your on-premise as well as cloud databases. So you can apply your corporate compliance standards to all your enterprise databases for the first time, no matter where they are. Enterprise Manager has achieved enforcement of the same compliance standards on the Oracle Public Cloud as well as on-premise. As the next step, we are now ready to test out the cloning of PDBs back and forth between the Oracle Public Cloud and on-premise databases, using Enterprise Manager Cloud Control. Cloning PDBs from On-Premise to Cloud, and back One of the main features of the Hybrid Cloud is that it is possible to easily move on-premise PDBs to an Oracle Cloud CDB – for example, moving a PDB to the cloud so that some development work can be completed. Before you start this procedure, please note a few things. There is a current restriction: the patch set levels of the source and destination CDBs need to be the same. Suppose the on-premise 12c CDB “ahuprod” has been patched to the April Database PSU (Patch Set Update).
This means all the PDBs in this CDB are also on this PSU. The Cloud database, on the other hand, may be at a different patch set level depending on when you created it. Up to the first week of June 2015, all cloud databases were created with the January PSU applied; after that, new cloud databases had the April PSU applied. Cloud databases created later on, say in August, may have the July PSU.

You can check this by going to your cloud service console, drilling down to your cloud database, and drilling down on "View Patch Information" in the Administration box on the screen. In June 2015, if it said "No patches available", the cloud database was on the April PSU (the latest at that time), which is fine. If it showed the April PSU as available, the cloud database was on the earlier January PSU, so you should apply the April PSU to the database first.

You can also check the current PSU by moving to the OPatch directory under the Oracle Home on the Cloud server and issuing the command "opatch lsinventory". This displays the patches that have been applied to the Oracle Home. After you have completed the PSU patch application, you can proceed with the actual cloning, since the source and destination CDBs are now at the same (latest) PSU. We will continue the Hybrid Cloud cloning using Enterprise Manager in Part VI of this article series. Keep on reading.
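If you want to script the PSU check rather than read the opatch output by eye, the applied patch IDs can be pulled out with a short script. The sketch below parses a hypothetical excerpt of "opatch lsinventory" output; the real output format varies by OPatch version and the patches actually applied, so treat the patterns as a starting point rather than a definitive parser.

```python
import re

# Illustrative excerpt of "opatch lsinventory" output; this sample is
# invented for the example, real output varies by OPatch version.
sample_output = """
Interim patches (1) :

Patch  20299023     : applied on Fri Jun 05 09:12:44 UTC 2015
   Patch description:  "Database Patch Set Update : 12.1.0.2.3 (20299023)"
"""

# Extract the applied patch IDs and their descriptions
patches = re.findall(r"Patch\s+(\d+)\s+: applied on", sample_output)
psus = re.findall(r'Patch description:\s+"([^"]+)"', sample_output)

print(patches)  # ['20299023']
print(psus)     # ['Database Patch Set Update : 12.1.0.2.3 (20299023)']
```

Comparing the patch ID lists from the source and destination homes then tells you whether the two CDBs are at the same PSU level.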

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part IV

by Porus Homi Havewala

Oracle Enterprise Manager 12c Release 5, released in June 2015, allows an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Cloud Database servers. In this article series, we are going through the steps for setting up the Hybrid Cloud via Enterprise Manager. In the previous part of this series, we went through the pre-steps and then installed a Hybrid Cloud Agent. We will now look at discovering the Cloud database and listener, and monitoring the hybrid cloud. After this is done, we will follow the steps of configuration management and compliance for the Hybrid Cloud, and finally, in the remaining parts of this series, we will test out the cloning of PDBs back and forth from the cloud.

Discovering the Database and Listener on the Cloud Server

In the Enterprise Manager console, select Targets.. Hosts. You can see the cloud server IP in the list of hosts monitored by Enterprise Manager (this is blanked out in the screenshot for privacy reasons). Drilling down to the host shows the host home page with the configuration and performance details. Note that the host home page appears like a normal Enterprise Manager host, except for the "Oracle Cloud" icon at the top left corner of the home page. All the monitoring and metrics are available.

Move to Targets.. Databases. Currently, the cloud database and listener have not been discovered, so select "Add.. Oracle Database". Specify the IP address of the Oracle Public Cloud database server. At this point, before proceeding, unlock the dbsnmp user in the Oracle Public Cloud database and change the user password. This will be needed for monitoring the cloud database.

[oracle@em12c ~]$ ssh 
Authorized uses only. All activity may be monitored and reported.
[oracle@AHUTESTSERVER ~]$ . oraenv
ORACLE_SID = [AHUTEST] ?
The Oracle base has been set to /u01/app/oracle
[oracle@AHUTESTSERVER ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jun 5 10:07:11 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter user dbsnmp account unlock identified by &password
  2  /

User altered.

When you click "Next", the Oracle Public Cloud database and listener will both be discovered, as shown below. Enter the Monitor Username as dbsnmp and the Monitor Password you have used, and click on "Test Connection". This is successful. The review screen appears with the discovered CDB, PDBs and Listener on the Cloud server. Click on Save.

The Cloud database now appears in the list of Enterprise Manager targets. The status is shown as pending, but you can drill down to the details. The Database Home page for the Cloud database now appears, just like a normal Database target in Enterprise Manager, except that the "Oracle Cloud" icon is visible at the top of the page. You can monitor and manage it just like a normal on-premise database.

Comparing On-Premise and Cloud Database Configurations

You can now compare configurations as follows. Select Enterprise.. Configuration.. Compare from the Enterprise Manager console. Select your on-premise 12c database "ahuprod.sainath.com" (an example) as the first configuration. In the next page that appears, click on "Add Configurations". Search for the latest configuration, and add the Cloud database AHUTEST. Move to the next screen, and select an appropriate comparison template, or no template; a comparison template helps avoid alerting on obvious differences. Schedule the compare job to run immediately. Examine the Review screen, and submit.
The results of the configuration comparison job appear as seen below. Effectively, you have compared the configuration of a local on-premise database with that of a cloud database. This comparison can be done at the server level as well, i.e. between two Unix servers, or between two WebLogic Servers.

The licensing needed on the on-premise side is the DBLM (Database Lifecycle Management) pack for database comparisons, and the WLS Management Pack EE for WebLogic Server comparisons. On the Oracle Public Cloud side, you need the appropriate packs to be included in your cloud subscription as well. This applies to other capabilities such as compliance checks, too.

You can save a configuration at a certain date and time as a "gold" configuration (for example, immediately after setting up a database server) and then run a configuration comparison of your current configuration against the gold configuration at regular intervals, to alert you if there are any differences.

In the next Part V of this article series, we will look at compliance standard enforcement for the hybrid cloud, so that you can apply your corporate compliance standards to all your enterprise databases, no matter where they are. When this is set up, Enterprise Manager is able to enforce the same compliance standards on the Oracle public cloud as well as on-premise. We will then look at PDB cloning back and forth from the Oracle Public Cloud. Stay tuned.
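Conceptually, what the comparison job does is diff two configuration snapshots key by key and report only the mismatches. A minimal sketch of the idea, using made-up initialization parameters (the values are illustrative, not taken from the databases compared above):

```python
# Conceptual sketch only: Enterprise Manager compares full configuration
# snapshots; here we diff two hypothetical init-parameter dictionaries.
on_premise = {"sga_target": "4G", "compatible": "12.1.0.2", "open_cursors": 300}
cloud      = {"sga_target": "2G", "compatible": "12.1.0.2", "open_cursors": 300}

# Keep only the keys whose values differ between the two snapshots
differences = {
    key: (on_premise.get(key), cloud.get(key))
    for key in set(on_premise) | set(cloud)
    if on_premise.get(key) != cloud.get(key)
}

print(differences)  # {'sga_target': ('4G', '2G')}
```

A comparison template, in these terms, is simply a list of keys to exclude from the diff because they are expected to differ (hostnames, file paths, and so on).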

Wiki Page: Using the Enterprise Manager Hybrid Cloud - Part VII

by Porus Homi Havewala

Ever since Enterprise Manager 12c Release 5, released in June 2015, Oracle Enterprise Manager 12c (and now 13c, released in December 2015) has allowed an on-premise Enterprise Manager OMS (Oracle Management Service) to install Hybrid Cloud Agents on your Oracle Cloud Database servers. In this article series, we are looking at the steps for setting up and using the Hybrid Cloud via Enterprise Manager. In the previous parts of this series, we went through the pre-steps and then installed a Hybrid Cloud Agent. We discovered the databases and started to monitor the hybrid cloud. We then looked at configuration comparisons and configuration management for the Hybrid Cloud, as well as compliance standards enforcement. We then cloned a PDB to the cloud, and back again to on-premise.

Drilling down, you can go to the on-premise PDB home page. In this case, the performance details appear, since it is an on-premise pluggable database and it relies on the local Enterprise Manager repository to find out if the Enterprise Manager Packs are licensed.

Note that if the clone procedure errors out with a connection issue such as the one shown in the screenshot below, or any other sudden connection loss issue, you may need to restart the cloud agent. The cloud agent can be restarted as follows:

[oracle@em12c ~]$ ssh 
Authorized uses only. All activity may be monitored and reported.
[oracle@AHUTESTSERVER ~]$ cd /u01/app/oracle/product/agentHome/agent_inst/bin
[oracle@AHUTESTSERVER bin]$ ./emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Stopping agent ..... stopped.
[oracle@AHUTESTSERVER bin]$ ./emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting agent ............... started.

You may also need to restart the hybrid gateway agent (which in our case is the agent on the OMS server).
Then, you can retry the failed step of the procedure. The procedure now completes successfully. We have now completed the clone-to-cloud and clone-from-cloud scenario.

Unplug PDBs from On-Premise and Plug into the Cloud CDB

It is also possible to move PDBs to and from the cloud by unplugging and plugging PDBs. However, this involves more steps, so this method is optional. Drill down to the on-premise ahuprod CDB Home page, and from the Oracle Database menu, select Provisioning.. Provision Pluggable Databases. On the Provision Pluggable Databases page, select "Create New Pluggable Databases". Select "Clone an Existing PDB" and Full Clone. Select the on-premise Host credential. Click Next. The Identification screen appears. Name the new clone PDB "SALESTEST". Click Next. Select the PDB datafile location (as per the standards used on our installation, this is /u02/oradata/ / ). Click Next. Schedule the Create Pluggable Database job to run immediately and go to the Next screen. After the review, click Submit. Select "View Execution Details". The procedure completes in 1.5 minutes. The SALESTEST PDB is now seen in the database list under the ahuprod CDB.

Drill down to the on-premise ahuprod CDB Home page again, and from the Oracle Database menu, select Provisioning.. Provision Pluggable Databases. This time, select "Unplug Pluggable Databases". Launch the procedure. Select the SALESTEST PDB to unplug. Select the on-premise Host credential. Click on Next. On the Destination page, select Software Library as the destination, and "Generate PDB Archive". Accept or change the PDB template name, and click on Next. Schedule the unplugging to run immediately. Review and submit. The unplug procedure completes in under 2 minutes.

In the Targets.. Databases screen, drill down to the AHUTEST cloud database home page. From the Oracle Database menu, select Provisioning.. Provision Pluggable Databases.
This time, select "Create New Pluggable Databases" and click Launch. Select "Plug an Unplugged PDB". As the Cloud Host credential, select the named credential "NC_OPC_DBCS" that you have used previously; this credential has the SSH private and public keys of the on-premise Enterprise Manager OMS server. Enter SALESTEST as the new PDB name. Select Software Library and click on the search icon. Select the PDB template that was recently unplugged from the on-premise database. Having selected the correct unplugged PDB from the software library, click on Next.

Before plugging the new PDB into the cloud CDB, the procedure performs some validation checks. The screenshot above shows what may happen when the character set of the PDB being plugged in is different from the character set of the destination CDB. You can ignore the warning, as seen above, but the net result will be that the PDB, after being plugged in, will open only in restricted mode (i.e. it will be usable only for administrative activities). So it is better that the PDB and CDB use the same character set. For example, the local "ahuprod" CDB and its PDBs can have AL32UTF8 as the character set, matching that of the cloud databases. Click on Continue after validation completes.

Since the cloud database uses Oracle Managed Files, select the same. Make sure you have entered a temporary working directory, and click on Next. Schedule the procedure to run immediately. Review and Submit. The procedure starts. However, if the procedure errors out as shown in the above screenshot, you may need to restart the cloud agent. This can also happen at the end of the procedure, as seen below. The cloud agent can be restarted as follows:

[oracle@em12c ~]$ ssh 
Authorized uses only. All activity may be monitored and reported.
[oracle@AHUTESTSERVER ~]$ cd /u01/app/oracle/product/agentHome/agent_inst/bin
[oracle@AHUTESTSERVER bin]$ ./emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Stopping agent ..... stopped.
[oracle@AHUTESTSERVER bin]$ ./emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting agent ............... started.

You may also need to restart the hybrid gateway agent (which in our case is the agent on the OMS server). Then, you can retry the failed step of the procedure. The procedure now completes successfully. The PDB "SALESTEST" that was plugged into the Cloud CDB is now visible under the Cloud CDB "AHUTEST".

This concludes the Hybrid Cloud article series. In this series, we have seen a number of capabilities. First we installed a Hybrid Cloud Agent. We then discovered the cloud database and listener and started to monitor the hybrid cloud. We looked at configuration comparisons and configuration management for the Hybrid Cloud, as well as compliance standards enforcement. After this, we completed the cloning of PDBs back and forth from the cloud, and also via unplugging and plugging back in. Our conclusion: Enterprise Manager works effectively with the Oracle Database hybrid cloud, and is bound to get better and better in future versions.

Blog Post: Use TREAT to Access Attributes of Object Subtypes

The TREAT function comes in very handy when you are working with an object type hierarchy and need to access attributes or methods of a subtype of a row or column's declared type. This topic was covered in a PL/SQL Challenge quiz offered in March 2016. Suppose I have the following type hierarchy, and I use these types as column types in my meals table:

CREATE TYPE food_t AS OBJECT (
   name       VARCHAR2 (100),
   food_group VARCHAR2 (100),
   grown_in   VARCHAR2 (100)
)
   NOT FINAL;
/

CREATE TYPE dessert_t UNDER food_t (
   contains_chocolate CHAR (1),
   year_created       NUMBER (4)
)
   NOT FINAL;
/

CREATE TYPE cake_t UNDER dessert_t (
   diameter    NUMBER,
   inscription VARCHAR2 (200)
);
/

CREATE TABLE meals (
   served_on   DATE,
   appetizer   food_t,
   main_course food_t,
   dessert     dessert_t
);

I then insert some rows into the table:

BEGIN
   INSERT INTO meals
        VALUES (SYSDATE + 1,
                food_t ('Shrimp cocktail', 'PROTEIN', 'Ocean'),
                food_t ('Stir fry tofu', 'PROTEIN', 'Vat'),
                cake_t ('Apple Pie', 'FRUIT', 'Baker''s Square', 'N', 2001, 8, NULL));

   INSERT INTO meals
        VALUES (SYSDATE + 1,
                food_t ('Fried Calamari', 'PROTEIN', 'Ocean'),
                dessert_t ('Butter cookie', 'CARBOHYDRATE', 'Oven', 'N', 2001),
                cake_t ('French Silk Pie', 'CARBOHYDRATE', 'Baker''s Square', 'Y', 2001, 6, 'To My Favorite Frenchman'));

   INSERT INTO meals
        VALUES (SYSDATE + 1,
                food_t ('Fried Calamari', 'PROTEIN', 'Ocean'),
                cake_t ('French Silk Pie', 'CARBOHYDRATE', 'Baker''s Square', 'Y', 2001, 6, 'To My Favorite Frenchman'),
                dessert_t ('Butter cookie', 'CARBOHYDRATE', 'Oven', 'N', 2001));

   COMMIT;
END;
/

Notice that even though appetizer and main_course are defined as food_t, I can assign dessert_t and cake_t instances to those columns, because object types support substitutability (the best way to understand that is: every dessert is a food, but not every food is a dessert). Let's take a look at some of the ways I can use TREAT.

1. I want to find all the meals in which the main course is actually a dessert.
SELECT *
  FROM meals
 WHERE TREAT (main_course AS dessert_t) IS NOT NULL;

2. Show whether or not those dessert-centric meals contain chocolate. First with PL/SQL:

DECLARE
   l_dessert dessert_t;
BEGIN
   FOR rec IN (SELECT *
                 FROM meals
                WHERE TREAT (main_course AS dessert_t) IS NOT NULL)
   LOOP
      l_dessert := TREAT (rec.main_course AS dessert_t);
      DBMS_OUTPUT.put_line (
         rec.main_course.name || '-' || l_dessert.contains_chocolate);
   END LOOP;
END;
/

And now with "pure" SQL:

SELECT TREAT (m.main_course AS dessert_t).contains_chocolate
  FROM meals m
 WHERE TREAT (main_course AS dessert_t) IS NOT NULL;

The thing to realize in both these cases is that even though I have identified only those meals for which the main course is a dessert, I still must explicitly TREAT (narrow) the main_course column to dessert_t before I can reference the contains_chocolate attribute. If I forget the TREAT in the SELECT list, such as:

SELECT m.main_course.contains_chocolate
  FROM meals m
 WHERE TREAT (main_course AS dessert_t) IS NOT NULL;

I will see this error:

ORA-00904: "M"."MAIN_COURSE"."CONTAINS_CHOCOLATE": invalid identifier

3. Set to NULL any desserts that are not cakes (TREAT returns NULL when the runtime type does not match):

UPDATE meals
   SET dessert = TREAT (dessert AS cake_t);
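For readers more at home outside SQL, TREAT is essentially a checked downcast that yields NULL when the runtime type does not match the requested subtype. A rough analogy in Python (the classes and helper below are invented for illustration; they are not part of any Oracle API):

```python
class Food:
    def __init__(self, name):
        self.name = name

class Dessert(Food):
    def __init__(self, name, contains_chocolate):
        super().__init__(name)
        self.contains_chocolate = contains_chocolate

def treat(obj, target_type):
    # Like SQL TREAT: return the object narrowed to target_type,
    # or None when the runtime type does not match.
    return obj if isinstance(obj, target_type) else None

# A "main_course column" declared as Food may hold Dessert instances
meals = [Food("Stir fry tofu"), Dessert("Butter cookie", "N")]

# Find the meals whose main course is actually a dessert
desserts = [m for m in meals if treat(m, Dessert) is not None]
print([d.name for d in desserts])  # ['Butter cookie']
```

Just as in the SQL case, you cannot touch contains_chocolate through a plain Food reference; you first narrow to Dessert, and the narrowing fails softly (None/NULL) rather than raising an error.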

Comment on Oracle and ANSI SQL Joins

"ANSI Join" is a misnomer. "WHERE d.deptno = e.deptno AND e.ename = b.ename" is ANSI SQL. It is ANSI-89 compliant. You are really comparing and contrasting one ANSI standard against another.
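The point is easy to demonstrate: both join spellings are standard SQL and return the same rows. A small sketch using Python's built-in sqlite3 with made-up dept/emp data (SQLite accepts both syntaxes, just as Oracle does):

```python
import sqlite3

# The WHERE-clause join is ANSI-89 style; JOIN ... ON is ANSI-92 style.
# Table contents here are invented for the demonstration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER, dname TEXT);
    CREATE TABLE emp  (ename TEXT, deptno INTEGER);
    INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
    INSERT INTO emp  VALUES ('SMITH', 20), ('KING', 10);
""")

ansi89 = conn.execute(
    "SELECT e.ename, d.dname FROM emp e, dept d "
    "WHERE d.deptno = e.deptno ORDER BY e.ename"
).fetchall()

ansi92 = conn.execute(
    "SELECT e.ename, d.dname FROM emp e JOIN dept d "
    "ON d.deptno = e.deptno ORDER BY e.ename"
).fetchall()

print(ansi89 == ansi92)  # True
print(ansi89)            # [('KING', 'ACCOUNTING'), ('SMITH', 'RESEARCH')]
```

So the distinction is between two revisions of the same standard, not between "Oracle syntax" and "ANSI syntax".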

Wiki Page: Getting Started with Oracle JET 2.0

You might have noticed that it was not enough for Oracle to have Forms, Application Express and ADF - they had to offer a JavaScript-based development toolkit too. That's the product known as JavaScript Extension Toolkit (JET). Oracle JET is a fairly unusual Oracle product in that it is completely free, open source, and distributed under the Universal Permissive License (UPL).

What is Oracle JET?

Oracle JET is a toolkit for experienced JavaScript developers to build client-side JavaScript-based applications. It is code-heavy and most definitely not for JavaScript beginners. Oracle says: "Before you can successfully develop applications with Oracle JET, you should be familiar with the third party libraries and technologies that the JET framework uses." That list contains:

jQuery
jQuery UI
Knockout
JavaScript
CSS
HTML5
SASS
Apache Cordova (if you want mobile)
Bower
Grunt
Node.js
Git
Yeoman

It should be obvious that this is an offering for people already familiar with this specific family of technologies and tools. Oracle JET offers very little hand-holding and expects you to be able to confidently swim in a sea of code. You'll notice that these are all existing languages, technologies and open source tools. In addition to these, Oracle has added some things that they felt were necessary in order to build enterprise JavaScript applications:

Good-looking, secure UI components in Oracle's new Alta look
Support for accessibility (screen readers etc.)
Support for internationalization (including right-to-left languages like Hebrew or Arabic)

What Does an Oracle JET Application Look Like?

While Oracle JET has a completely different development paradigm from other Oracle development tools, it fortunately uses the same visual appearance. That's the common look you will see across most of Oracle's new applications these days, and it's called the Alta UI.
Oracle is making a big effort to standardize on this look, and is building components with the Alta look for all their development tools. That means that you can build part of your application in APEX, part in ADF and part with Oracle JET, and your users should not be able to tell the difference.

How Do I Build an Oracle JET Application?

An Oracle JET application exists only on the client side, in the browser. That means that it consists only of HTML, JavaScript and CSS. Unfortunately, Oracle is not offering a development tool like we have for Oracle Forms. They also don't offer an integrated development environment like we have for ADF (JDeveloper), and don't even have a browser-based way to declaratively build applications. For the time being, building an Oracle JET application involves writing (a lot of) code.

Starting Point

Since a JET application consists of so much code, Oracle provides us with a starting point so we don't have to fiddle around with getting the brackets and semicolons right before we can run our first application. As a matter of fact, you can start your first Oracle JET application by downloading a Basic Starter Template, with Oracle JET already configured, from the Oracle JET download page. Unzip it, open index.html in a browser, and you see a basic application running. You can have a look at the content of this sample application to get an idea of how a JET application should be structured. Mind you, there is no magic format that you have to adhere to - the entire application is just files, and as long as every reference is valid, you can choose any structure you like. Note that you get some modern functionality built into this basic JET application, for example responsive design (meaning that the application will automatically respond to different screen sizes).
If you make the screen narrower, the elements rearrange. If you make it even narrower, the menu gets wrapped up and hidden behind a button, like this:

Cut and Paste Programming

Adding a component to a page again involves code. There are going to be HTML tags that define your components, and a lot of properties that need to be set correctly in order for your component to connect to the Knockout.js model. Again, we don't have to know the syntax by heart - Oracle is catering to the "Google generation" of programmers by providing code snippets for all of the most common functionality as part of the JET Cookbook.

But What About Data?

Oracle JET is a client-side framework. It does not concern itself with the details of how data is stored, but simply calls REST-based web services, exchanging JSON-format messages. You get a lot of help from the Knockout JavaScript library, which maintains a client-side model for you. Your HTML user interface elements bind to this model, and you use some moderately complex boilerplate code to create the Knockout JavaScript objects you need to communicate with your chosen REST services.

Creating a Basic Table

Because JET is a client-side framework, JET development starts with the user interface. For example, if you want to create a table of departments, you first get some sample code from the cookbook. You could start with the Basic Table pattern; the Basic Table recipe in the cookbook contains both the HTML and the JavaScript you need.

A Complete Table Example

The basic starter template provides a starting point, but it is rather simple. If you want to create a single-page application that shows data in an overview table with a form to modify records, the SPA-ojModule-ojRouter example application is a better basis to build on. You find this application described in the Oracle JET developer handbook, chapter 6 "Creating Single-Page Applications", in the section "Designing Single-Page Applications Using Oracle JET".
If you scroll to the bottom of this page, you find the download link SPA-ojModule-ojRouter.zip. If you unzip this application, you get the following structure:

Unfortunately, this application does not run correctly on your local machine as downloaded. Oracle is using it to demonstrate routing with path segments, and flippantly instructs you to set up an Apache mod_rewrite rule or write a servlet to make it work. However, there is a simpler way. Open the main.js file and find the following code:

// To change the URL adapter, un-comment the following line
// oj.Router.defaults['urlAdapter'] = new oj.Router.urlParamAdapter();

Remove the slashes from the second line, so that the application uses URL parameters instead. It should look like this:

// To change the URL adapter, un-comment the following line
oj.Router.defaults['urlAdapter'] = new oj.Router.urlParamAdapter();

When you've done this, you can navigate through the application with the buttons in the top right. If you click on the Tables button and then Departments Table to the right, you see an example of an ojTable in action. If you click on one of the records, you see a form that demonstrates individual ojInput components.

To change the user interface, you edit tablesContent.html in the views directory. The data comes from the tablesContent.js file in the viewModels directory; this file is where you need to implement calls to a REST service to work with database data. Refer to Chapter 8 "Using the Common Model and Collection API" to learn about using Knockout and JET to establish a data binding between your ViewModel and the underlying REST web service.

Blog Post: Best Practices with Enterprise Manager 13c

Since the introduction of Enterprise Manager 12c, folks have been asking for a list of best practices. I know a lot of you have been waiting for this post!

1. Use previously deployed, older hardware for your Enterprise Manager 13c deployment. Enterprise Manager is a simple, single-service system. There is no need for adequate resources and the ability to scale. In fact, I'll soon be posting on my blog about building an EM13c on a Raspberry Pi 3.

2. Please feel free to add new schemas, objects and ETLs to the Oracle Management Repository (OMR). This database doesn't have enough to do with metric collections, data rollups, plugins, metric extensions and notifications.

3. Turn on the standard statistics jobs and baseline collection jobs on the OMR. The OMR has its own version of the stats job, but running two jobs should make it run even better, and even though baselines aren't used, why not collect them, just in case?

4. Set the EM13c to autostart, but set the listener to stay down. The Oracle Management Service (OMS) shouldn't require the listener to connect to the OMR when starting, after all.

5. If there is a lot of garbage collection, just add more memory to the Java heap. If we give it more memory, then it will have less to clean up, right? More is better, and there isn't any way to find out what it should be set to anyway.

6. If you want to use the AWR Warehouse, you should use the OMR database for the AWR repository, too. It shouldn't make a difference to network traffic, Data Pump loading or resource workloads if they share a box. These two databases should work flawlessly on the same hardware.

7. If you have a lot of backlog for job processing on your EM13c, you should trim down the worker threads. Serializing jobs always speeds up the loading of data.

8.
Sizing an Enterprise Manager EM13c is a simple mathematical process, which I've displayed below. (If I didn't mention it earlier, there will be a quiz at the end of this post...)

9. Never apply patches to the Enterprise Manager tiers or agents. Each release is pristine and bugs don't exist. It will only require more work in the way of applying these patches, and downtime to your EM13c environment.

10. Patch any host, database or agent monitored by Enterprise Manager manually. Patch plans and automation of patching and provisioning are a terrible idea, and the only way a DBA can be sure something is done right is to do it manually themselves. Who needs a good night's sleep anyway?

Tags: April Fools

Copyright © DBA Kevlar [ Best Practices with Enterprise Manager 13c ], All Right Reserved. 2016.

Blog Post: Announcing PLSQL.js, a Javascript Framework Hiding Squiggles Behind PL/SQL APIs

Today, the Oracle Database Developer Advocates team announces the release of PLSQL.js, a PL/SQL framework for JavaScript developers, delivering all the power and flexibility of JavaScript through simple, declarative APIs written in the best database programming language in the world.

"The first key advantage to PLSQL.js is that you don't have to write those little squiggle thingies," notes Steven Feuerstein, well-known author and trainer on Oracle PL/SQL, who designed the bulk of PLSQL.js. "We really didn't see the point. Why not use regular English words and the kind of punctuation everybody was already used to, like underscores and dots? Why do we always have to change things?"

Oracle's Chief Healthification Officer, Jean Frutesandveggies, adds that PLSQL.js is also an attempt to help young application developers deal with a growing epidemic of Javascript Fatigue. "While PLSQL.js appears to be yet-another-framework that just compounds the problem," explains Feuerstein, "it's really not. PLSQL.js is, in fact, the JS framework to end all JS frameworks. Definitively. Until version 2 comes out, that is, which will be a complete rewrite. Maybe using TypeScript."

"Writing code," points out Frutesandveggies, "is very hard and stressful work. It doesn't help to write in a language like JavaScript, in which developers are expected to constantly change their frameworks, tools, and general outlook on life. The bottom line? When you have to React to Yeoman who Plop and Babel from an overly Angular point of view, well, you are bound to Relay into Motorcycle trouble. Sure, you can take aspirin for the ensuing headache, but we recommend, instead, that you simply switch once and for all to PLSQL.js."

PLSQL.js will be released as open source under the MYOB (Mind Your Own Business) license on GritHub. Users will be allowed to pull but not push, and never commit, to ensure that the framework remains stable and free of squiggles.