
Blog Post: Nine Good-to-Knows about PL/SQL Error Management

1. Exceptions raised in the declaration section are not handled in the exception section.

This sometimes surprises a developer new to PL/SQL. The exception section of a PL/SQL block can only possibly handle an exception raised in the executable section. An exception raised in the declaration section (in an attempt to assign a default value to a variable or constant) always propagates out unhandled to the enclosing block.

Verify on LiveSQL: Exceptions Raised in Declaration Section Not Handled Locally

2. An exception raised does not automatically roll back uncommitted changes to tables.

Any non-query DML statements that complete successfully in your session are not rolled back when an exception occurs - whether raised directly in PL/SQL or propagated out from the SQL engine. You still have the option of either committing or rolling back yourself. If, however, the exception goes unhandled out to the host environment, a rollback almost always occurs (performed by the host environment).

Verify on LiveSQL: Exceptions Do Not Rollback Uncommitted Changes

3. You can name those unnamed ORA errors (never hard-code an error number).

Oracle Database pre-defines a number of exceptions for common ORA errors, such as NO_DATA_FOUND and VALUE_ERROR. But there are a whole lot more errors for which there is no pre-defined name, and some of these can be encountered quite often in code. The key thing for developers is to avoid hard-coding these error numbers in your code. Instead, use the EXCEPTION_INIT pragma to assign a name to the error code, and then handle it by name.

Verify on LiveSQL: Generate Named Exceptions; Use EXCEPTION_INIT to Give Names to Un-named Oracle Errors

4. If you do not re-raise an exception in your exception handler, the outer block doesn't know an error has occurred.

Just sayin'. You have a subprogram that invokes another subprogram (or nested block). That "inner" subprogram fails with an exception. It contains an exception handler. It logs the error, but then neglects to re-raise that exception (or another). Control passes out to the invoking subprogram, and it continues executing statements, completely unaware that an error occurred in that inner block. Which means, by the way, that a call to SQLCODE will return 0. This may be just what you want, but make sure you do this deliberately.

Verify on LiveSQL: If Exception Not Re-raised, No More Exception!
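A minimal sketch of that behavior, runnable as an anonymous block (DBMS_OUTPUT stands in for a real logging call):

DECLARE
   l_number   NUMBER;
BEGIN
   BEGIN
      l_number := TO_NUMBER ('abc');   -- raises VALUE_ERROR
   EXCEPTION
      WHEN OTHERS
      THEN
         DBMS_OUTPUT.put_line ('Logged... but not re-raised');
         /* No RAISE here, so the error simply disappears. */
   END;

   DBMS_OUTPUT.put_line ('Outer block continues; SQLCODE = ' || SQLCODE);   -- prints 0
END;
/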
5. Whenever you log an error, capture the call stack, error code, error stack and error backtrace.

Ideally, this is a total non-issue for you, because you simply invoke a generic logger procedure in your exception handlers (example and recommendation: download and use Logger, an open source utility that does almost anything and everything you can think of). But if you are about to write your own (or are using a home-grown logging utility), make sure that you call, and store in your log (likely a relational table), the values returned by:

SQLCODE

DBMS_UTILITY.FORMAT_CALL_STACK (or corresponding subprograms in 12.1's UTL_CALL_STACK package) - answers the question "How did I get here?"

DBMS_UTILITY.FORMAT_ERROR_STACK (or corresponding subprograms in 12.1's UTL_CALL_STACK package) - answers the question "What is my error message/stack?" We recommend using this instead of SQLERRM.

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE (or corresponding subprograms in 12.1's UTL_CALL_STACK package) - answers the question "On what line was the error raised?"

Verify on LiveSQL: "How did I get here?" DBMS_UTILITY.FORMAT_CALL_STACK; UTL_CALL_STACK: Fine-grained execution call stack package (12.1); Error Message Functions: SQLERRM and DBMS_UTILITY.FORMAT_ERROR_STACK; Back Trace Exception to Line That Raised It

6. Always log your error (and backtrace) before re-raising the exception.

When you re-raise an exception, you reset the backtrace (the trace back to the line on which the error was raised) and might change the error code (if you raise a different exception to propagate the exception "upwards"). So it is extremely important to call your error logging subprogram (see the previous Good to Know) before you re-raise an exception.

Verify on LiveSQL: Back Trace Exception to Line That Raised It

7. Compile-time warnings will help you avoid "WHEN OTHERS THEN NULL".

One of Tom Kyte's favorite pet peeves: the following exception sections "swallow up" errors.

EXCEPTION WHEN OTHERS THEN NULL;

EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE (SQLERRM);

In fact, any exception handler that does not re-raise the same exception (or another) runs the risk of hiding errors from the calling subprogram, your users, and yourself as you debug your code. Generally, you should log the error, then re-raise it. There are certainly some cases in which this advice does not hold (for example: a function that fetches a single row for a primary key. If there is no row for the key, it's not an application error, so just return NULL). In those cases, include a comment so that the person maintaining your code in the distant future knows that you weren't simply ignoring the Wisdom of the Kyte. Example:

EXCEPTION
   WHEN OTHERS
   THEN
      /* No company for this ID, let calling subprogram decide what to do */
      RETURN NULL;

One way to avoid this problem is to turn on compile-time warnings. Then when your program unit is compiled, you will be warned if the compiler has identified an exception handler that does not contain a RAISE statement or a call to RAISE_APPLICATION_ERROR.

Verify on LiveSQL: Automatically Detect Exception Handlers That "Swallow Up" Errors

8. LOG ERRORS suppresses SQL errors at the row level.

The impact of a non-query DML statement is usually "all or nothing". If my update statement identifies 100 rows to change, then either all 100 rows are changed or none are. And none might be the outcome if, say, an error occurs on just one of the rows (value too large to fit in column, NULL value for a non-NULL column, etc.). But if you have a situation in which you would really like to "preserve" as many of those row-level changes as possible, you can add the LOG ERRORS clause to your DML statement. Then, if any row changes raise an error, that information is written to your error log table, and processing continues. IMPORTANT: if you use LOG ERRORS, you must, must, must check that error log table immediately after the DML statement completes. You should also enhance the default error log table.

Verify on LiveSQL: Suppress DML Errors at Row Level with LOG ERRORS; Helper Package for LOG ERRORS
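A minimal sketch of the LOG ERRORS flow, assuming a table named EMPLOYEES (DBMS_ERRLOG.CREATE_ERROR_LOG creates the default ERR$_EMPLOYEES log table):

BEGIN
   DBMS_ERRLOG.create_error_log (dml_table_name => 'EMPLOYEES');
END;
/

UPDATE employees
   SET salary = salary * 100   -- may overflow the column for some rows
   LOG ERRORS INTO err$_employees ('salary hike') REJECT LIMIT UNLIMITED;

-- Must, must, must check the error log immediately afterwards:
SELECT ora_err_number$, ora_err_tag$, ora_err_mesg$
  FROM err$_employees;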
9. Send an application-specific error message to your users with RAISE_APPLICATION_ERROR.

If you execute a SELECT-INTO that does not identify any rows, the PL/SQL runtime engine raises ORA-01403, and the error message (retrieved via SQLERRM or DBMS_UTILITY.FORMAT_ERROR_STACK) is simply "No data found". That may be exactly what you want your users to see. But there is a very good chance you'd like to offer something more informative, such as "An employee with that ID is not in the system." In this case, you can use RAISE_APPLICATION_ERROR, as in:

CREATE OR REPLACE PACKAGE BODY employees_mgr
IS
   FUNCTION onerow (employee_id_in IN hr.employees.employee_id%TYPE)
      RETURN hr.employees%ROWTYPE
      RESULT_CACHE
   IS
      l_employee   hr.employees%ROWTYPE;
   BEGIN
      SELECT *
        INTO l_employee
        FROM hr.employees
       WHERE employee_id = employee_id_in;

      RETURN l_employee;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         raise_application_error (
            -20000,
            'An employee with that ID is not in the system.');
   END;
END;

Verify on LiveSQL: Send Application-specific Error Message To Users With RAISE_APPLICATION_ERROR
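On the calling side, that -20000 error can itself be given a name with EXCEPTION_INIT (see Good-to-Know #3) and handled by name - a sketch, with illustrative block and variable names:

DECLARE
   e_employee_not_found   EXCEPTION;
   PRAGMA EXCEPTION_INIT (e_employee_not_found, -20000);
   l_employee             hr.employees%ROWTYPE;
BEGIN
   l_employee := employees_mgr.onerow (employee_id_in => -1);
EXCEPTION
   WHEN e_employee_not_found
   THEN
      DBMS_OUTPUT.put_line ('Handled by name: ' || SQLERRM);
END;
/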

Blog Post: Adding GPIO Pins to the Raspberry Pi Zero

Although the Raspberry Pi 3 is now available, many people are still interested in its cheaper, smaller sibling, the Raspberry Pi Zero. This version isn't just smaller - it also requires a Wi-Fi dongle for internet access (built-in Wi-Fi being one of the great new features on the 3 Model!), and one of my main reasons for not recommending it to newbies to the Raspberry Pi (RPI) world is that it doesn't come with the GPIO pins pre-soldered.

GPIO, The In and Out to the World

GPIO, which stands for General Purpose Input and Output, is an excellent feature on this single board computer, allowing an RPI to connect easily with sensors, motors and other external components. The original RPI came with a 26-pin GPIO setup, but since the Model B+ it's been a standard 40-pin GPIO that offers a myriad of project and hardware possibilities.

Now, seasoned RPI geeks will simply say to wrap wires around the appropriate GPIO connector, or, with jumper wires, just stick the connector through the hole. But those of us who introduce the RPI to people new to single board computers know that the quickest way to short one out is through breadboard and other experiments that use the GPIO: wrapped wires leave more chance for the power, ground and control wires to touch, and more room for human error. These errors leave us asking students to keep a backup image of the software on hand, since a shorted-out unit is the most common casualty.

Now, I could go into the dangers of GPIO and amperage, with a recommendation that all newbies start with an Arduino, or discuss the importance of transistors, but eliminating some of the risk by having the GPIO pin connectors in place makes sense.

The Arduino Vs. RPI Discussion

Arduinos are better at GPIO amperage handling, but they shouldn't be confused with a single board computer and are limited in their application. The RPI and Arduino are both capable of sinking 50 mA through their GPIO pins; the difference is that were you to actually sink 50 mA, the RPI may very well be damaged, while the Arduino will survive - and the Arduino doesn't have an operating system and software that may be seriously impacted. Because of this, for those that may be intermediates, I recommend that if you're going to get the Raspberry Pi Zero, you consider soldering the GPIO pins onto it. It's not difficult, and there are a lot of videos that can teach you how to solder effectively.

Soldering GPIO Pins on the Zero

A 60 watt 110V soldering gun with a couple of different soldering tips is all you need to do the work on just about any RPI project. Purchase the right kind of solder for the project you're taking on. Note I have a picture of the solder I'm using below. Pins can be purchased from Radio Shack, MicroCenter and other "geek stores", or you can steal them from kits that can be purchased online, like this cobbler kit. The pins can be "broken" into the correct count to fit the holes to be soldered, and you'll be soldering from the back side of the unit. If the unit is in a case, please remove the case, and ensure you remove any cables, micro SD cards, etc. that could get in the way or be in danger of harm from the soldering gun before you begin. Set up your work area; I keep a piece of thick cardboard around to clean off any remnants of solder from my soldering gun to keep it clean as I work. Add the correct number of pins, and if you don't have a single 40-pin strip you can add, work with one line at a time.
Once you've added the first line of pins, turn the unit over and brace it so that the pins are straight. You don't want them "leaning", otherwise the pins could be soldered crooked and you could have challenges attaching units like a Pi HAT or other components that have configured GPIO attachments. I used the corner of my keyboard, as it was the right height and balanced it out nicely while I did the work. Make sure to use the right solder - you can see what I used in the picture below.

Solder across the first line, taking care not to touch the actual Pi Zero with the soldering iron; once done with the first line, add the second line and solder it into place in the same way as you did the first.

Once finished, let it cool and check the connections. Are the pins tight, and are they straight? If there is some angle to the pins, you can use the soldering iron to *carefully* loosen the solder and straighten them out some, but it's better to be cautious and check as you go to begin with.

Once it's cooled and you're satisfied with the pin placement, plug the micro SD card back in and, if you had a case, put the RPI Zero back into it. That's all there is to it! You're ready to create all kinds of fun projects with your RPI Zero and not worry so much about those pesky GPIO wires being exposed! Oh, yeah - still make a backup of your image using Win32ImageMaker....PLEASE!!

Tags: GPIO, Raspberry Pi, RPI Zero

Copyright © DBA Kevlar [ Adding GPIO Pins to the Raspberry Pi Zero ], All Rights Reserved. 2016.

Wiki Page: Exadata – Configure Cisco Switch and PDU SNMP for OEM 12c Monitoring (Post Exadata Discovery Setups)

Introduction

In my previous posts on OEM 12c for Exadata, we have seen how to install the OEM 12c agent on Exadata using the Agent Automation kit, and how to discover an Exadata Database Machine in OEM 12c.

Installing Enterprise Manager 12c Agent On Exadata Using Agent Automation Kit
http://www.toadworld.com/platforms/oracle/w/wiki/11175.installing-enterprise-manager-12c-agent-on-exadata-using-agent-automation-kit

Discover Exadata Database Machine in OEM 12c
http://www.toadworld.com/platforms/oracle/w/wiki/11418.discover-exadata-database-machine-in-oracle-enterprise-manager-12c

In the post-discovery setups we will configure Cisco Ethernet Switch SNMP and PDU SNMP for Oracle Enterprise Manager 12c monitoring. In this article I will demonstrate how to perform these "Post Discovery Setups" on an Exadata Database Machine.

Assumptions

A fully functional OEM 12c server environment
The OEM 12c Agent is installed on all Exadata compute nodes
The Exadata Database Machine has been discovered in OEM 12c
The Oracle user password
The admin user password for the Cisco switch and the PDUs

Environment

Exadata Model: X4-2 Half Rack HC 4TB
Exadata Components: Storage cells (7), compute nodes (4) & InfiniBand switches (2)
Exadata Storage cells: DBM01CEL01 - DBM01CEL07
Exadata Compute nodes: DBM01DB01 - DBM01DB04
Exadata Software Version: 12.1.2.1.1.150316.2
Exadata DB Version: 11.2.0.4 BP15
Exadata InfiniBand Version: 2.1.5-1
Cisco Switch Hostname: DBM01SW-ADM01
PDU Hostnames: DBM01SW-PDUA01 & DBM01SW-PDUB01

Steps to Configure Cisco Switch SNMP and PDU SNMP for OEM 12c Monitoring

1. Cisco Ethernet Switch

Log in to the Cisco Switch and enter configuration mode.

Cisco Switch name: DBM01SW-ADM01
Username: admin
Password: welcome (default)

After logging in as the admin user, type "enable" at the shell; without this you can't execute switch commands. Then enter "configure terminal" to enter configuration mode.

Enable access so that the Agents monitoring the Cisco Switch target can poll the switch, entering the command for each Agent that monitors the switch. Here I am using compute node 1 to monitor the Cisco switch.

Configure the SNMP community. The SNMP community string generally is "public", and it should match the value specified on the Oracle Enterprise Manager 12c Monitoring Configuration page for the Cisco Switch target.

Set the monitoring Agent as the destination where SNMP traps are delivered. The SNMP community string must match the value provided during Oracle Enterprise Manager 12c Cisco Switch Management Plug-In setup. The IP address used is that of compute node 1, and port 3872 is the agent port. Full command:

edwxxsw-adm01(config)# snmp-server host xx version 1 public udp-port 3872

Configure the switch to send only environmental monitor SNMP traps (a sketch of these commands follows below).

Verify the settings and save the configuration. The "show running-config" command will list the entire configuration; use the space bar to page through it. Execute "copy running-config startup-config" to save the configuration.

Verify the Cisco Switch SNMP configuration for Oracle Enterprise Manager 12c monitoring. The snmpget command is used to verify that SNMP is configured and running for the Cisco Switch. As the agent software owner (typically oracle), run it from the compute node whose agent is configured to monitor the Cisco Ethernet Switch - in our case, compute node 1.
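A sketch of the standard IOS commands for these steps, plus the snmpget verification (assumptions: community string public, compute node 1 as the monitoring agent host, and the generic sysDescr OID as a smoke test):

! on the switch, in configuration mode
snmp-server community public RO
snmp-server host <compute node 1 IP> version 1 public udp-port 3872
snmp-server enable traps envmon fan shutdown supply temperature status

# from compute node 1, as the agent software owner
$ snmpget -v 1 -c public DBM01SW-ADM01 SNMPv2-MIB::sysDescr.0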
NOTE: If the snmpget command times out, the Cisco Switch is not properly configured for SNMP.

2. Power Distribution Unit

If you wish to enable OEM 12c to collect metric data and raise events for the Power Distribution Units (PDUs), the PDUs must be configured to accept SNMP queries from the Agent that monitors them. Open a web browser and enter the PDU hostname: https://dbm01sw-pdua01.domain.com

Click on "Net Configuration" and enter the credentials.
Username: admin
Password: welcome1 (default)

Click on "SNMP Access". Scroll down and place a check mark for "Enable SNMP v1/v2". Check "Original-MIB" and click Submit. Once you submit the SNMP changes, the PDU will reboot.

Click on "Net Configuration" and log in as the admin user again. Click on "SNMP Access" again and scroll down to "NMS (SNMP v1/v2)". Enter the IP address or hostname of the compute node agent that monitors the PDU. Scroll right; enter the community string "public" for both Read-Write Community and Read-Only Community. Scroll right, place a check mark to enable SNMP, and click Submit to confirm the changes. This saves the changes without a reboot.

Click "SNMP Traps". Scroll down and enter the IP address or hostname of the compute node agent that monitors the PDU. Scroll right and enter "public" under Community. Scroll right, leave the User column at its default value, select Version v1, and place a check mark to Enable SNMP Traps. Click Submit.

Verify the PDU SNMP configuration for Oracle Enterprise Manager 12c monitoring by running a similar snmpget against the PDU hostname. NOTE: If the command times out, the PDU is not properly configured for SNMP. Repeat the above steps for the second PDU in the cluster.

Conclusion

In this article we have learnt how to perform the "Post Discovery Setups" on Exadata Database Machines. We have configured Cisco Ethernet Switch SNMP and PDU SNMP for Oracle Enterprise Manager 12c monitoring.

References

OEM 12c Exadata Discovery Cookbook
http://www.oracle.com/technetwork/oem/exa-mgmt/em12c-exadata-discovery-cookbook-1662643.pdf

Next: "Discover the Clusterware and Oracle Databases"

Blog Post: Functions, data types, and null values

In life, even the simplest things can get complicated. In Oracle, as in life, the same holds true. Consider the case of two Oracle functions that, despite their simplicity, can give us headaches if we are not paying close enough attention. GREATEST and LEAST are two SQL functions that apply to rows; Oracle includes them in a group it calls general comparison functions. GREATEST returns the maximum value in a list of values; LEAST returns the minimum.

SQL> select greatest(1, 2, 10) from dual;

GREATEST(1,2,10)
----------------
              10

SQL> select least(1, 2, 10) from dual;

LEAST(1,2,10)
-------------
            1

SQL>

So far, all very simple. However, whenever we work with functions we have to stay alert. Consider the following case:

SQL> select * from prueba;

VALOR1   VALOR2  VALOR3
-------- ------- --------
10             1        2

SQL> select greatest(valor1, valor2, valor3) mayor from prueba

MAYOR
--------
2

SQL> select greatest(valor2, valor3, valor1) mayor from prueba

MAYOR
----------
        10

Something strange is going on here... the function returns different results even though it is being applied to the same set of data. Why does Oracle behave in this apparently erratic way? The answer lies in the data types. For the GREATEST function, Oracle decides that the data type of the first argument will be used both as the return type and as the comparison type. Let's look at the structure of the PRUEBA table:

SQL> desc prueba
Name      Null?    Type
--------- -------- ---------------
VALOR1             VARCHAR2(10)
VALOR2             NUMBER
VALOR3             NUMBER

In the first case, Oracle determined that the return type was VARCHAR2, because the first argument (column VALOR1) is of type VARCHAR2. Comparing character strings, Oracle determined that the string "2" is greater than the string "10", and therefore returned the value 2. Note that the result is left-aligned, even though the value "2" stored in column VALOR3 is of type NUMBER.

In the second case, Oracle determined that the return type was NUMBER, because the first argument (column VALOR2) is of type NUMBER. Comparing numbers, Oracle determined that the value 10 is the greatest of all. Note that the result is right-aligned, even though the value "10" is stored in a column of type VARCHAR2. The return type of the function is NUMBER, and it was determined by the data type of the first argument.

We also have to watch what happens when null values appear among the function's arguments.

SQL> set null null
SQL> select greatest(1, 2, null) from dual;

GREATEST(1,2,NULL)
------------------
null

Looking at the result of this query, can we conclude that Oracle considers the null value to be greater than the values 1 and 2? Let's see:

SQL> select least(1, 2, null) from dual;

LEAST(1,2,NULL)
---------------
null

And if I look at the result of this last query, can I conclude that Oracle considers the null value to be less than the values 1 and 2? Hmmm..... The "null" in the query result does not mean that "null" is either greater or less than the other values in the list. In reality, Oracle returns "null" because when asked "what is the greatest or least value among 1, NULL and 2?", Oracle's answer is "I don't know, because one of the values in the list (NULL) is unknown".
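Going back to the data type trap above, one way to sidestep it is to convert explicitly, so the comparison is numeric no matter which argument comes first - a minimal sketch against the same PRUEBA table:

SQL> select greatest(to_number(valor1), valor2, valor3) mayor from prueba;

     MAYOR
----------
        10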
In general, row-level functions that take a set of input arguments will return "null" whenever they find a null value among those arguments. As we have seen, everything can get complicated if we are not paying close enough attention. See you next time!

Forum Post: RE: Access violations

I think I found the reason for the access violation. I examined the Toad Options and saw something strange under Files -> General, in the Application data directory setting: the path was a slash only (/). My colleague had also complained that Toad tried to create directories in several locations. I don't understand why the slash was used, or whether it was changed after installation. After an uninstall and a new, clean install, I checked this option and found a normal path setting. My conclusion is that the access violation was probably caused by the attempt to create a directory or file in a location without the proper privileges to do so. At this moment there have been no access violations for more than a week. The only thing that remains is how this could happen with, as far as I know, no customizing of the options. The access violation case can be closed.

Wiki Page: How to back up Oracle databases. Part II

By: Juan Carlos Olamendy Turruellas

Introduction

This is the second article in a series where we're learning about the principles, concepts and real-world scripts for backing up Oracle databases. In the first article, I talked about the most important terms related to backups in Oracle databases. In this second article, I'll talk about doing low-level manual backups in order to illustrate the principles and concepts of the first article. In the last articles, I'll talk about a tool that automates and simplifies the backup process (no more low-level tasks) named Recovery Manager (also known as RMAN). Recalling the concepts from the first article, we have basically two backup strategies: offline/cold/closed backup and online/hot/open backup. The examples below are divided along these two strategies.

Offline/cold/closed backup

This type of backup has the following features:

It's a whole database backup
It produces a consistent backup
The database can be restored from the last backup without performing the update step of the recovery process
It can be used with either archivelog or noarchivelog mode
In archivelog mode, we can take additional recovery steps to complete a backup to a point-of-last-committed-transaction
The database instance must be shut down normally (not due to an instance failure)
It uses OS commands to back up the database files, which are:

All database files
All control files
All online redo log files
Optional: the initialization parameter file (init.ora)

Backups performed using OS commands while the Oracle instance is running (or crashed) are not valid. If it's not possible to shut down the instance, we need to execute a hot backup instead. In order to execute this type of backup, we need to follow these steps:

1 - Obtain a list of files to back up

Use SQL*Plus and query the v$datafile view to obtain the list of database files, as shown below in listing 01.

SQL> SELECT name FROM v$datafile;

Listing 01

Use SQL*Plus and query the v$controlfile view to obtain the list of control files, as shown in listing 02.

SQL> SELECT name FROM v$controlfile;

Listing 02

Use SQL*Plus and query the v$logfile view to obtain the list of online redo log files, as shown in listing 03.

SQL> SELECT member FROM v$logfile;

Listing 03

2 - Copy the files to a backup directory

Now we need to shut down the Oracle instance and, using the cp Unix command, copy every file to a backup directory as shown in listing 04.

$ cp -a database_file_path /u05/oradata/DBTEST/backup/

Listing 04

3 - Do a backup archive

And finally, using the tar Unix command, I'll archive the backup directory to a tape drive as shown below in listing 05.

$ tar -cvf /dev/rmt/0hc /u05/oradata/DBTEST/backup/

Listing 05

We can automate this process using a shell script to generate the list of files to back up. At the end of the day, the backup process boils down to copying all the necessary files using the operating system copy utilities; and always remember to shut down the instance before executing this type of backup. The script is shown below in listing 06.
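A minimal sketch of such a generator script (assumptions: it is run as the oracle OS user with the environment already set, and the instance is shut down separately before the generated cold_backup.sh is executed):

#!/bin/bash
BACKUP_DIR=/u05/oradata/DBTEST/backup/
OUTPUT_SCRIPT=cold_backup.sh

# Build a cp command for every database file, control file and online redo log
sqlplus -s /nolog <<EOF > $OUTPUT_SCRIPT
CONNECT / AS SYSDBA
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SELECT 'cp -a ' || name   || ' $BACKUP_DIR' FROM v\$datafile;
SELECT 'cp -a ' || name   || ' $BACKUP_DIR' FROM v\$controlfile;
SELECT 'cp -a ' || member || ' $BACKUP_DIR' FROM v\$logfile;
EXIT
EOF

chmod u+x $OUTPUT_SCRIPT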
Listing 06

Online/hot/open backup

1 - Enable archive log mode

-- Shut down the instance
SQL> SHUTDOWN IMMEDIATE
-- Start the instance in mount mode (no open mode)
SQL> STARTUP MOUNT
-- Change the database archiving mode
-- Then open the database for normal operations
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

Listing 07

We need to modify the init.ora initialization parameter file to set the location and format of the log archives, as shown in listing 08.

#Collect all of the archive log files in this directory
log_archive_dest=/u05/oradata/DBTEST/arch
#Specify a particular format for the archived log file
#%t refers to the thread number,
#%s refers to the sequence number
log_archive_format="%s_%t_%r.arc"

Listing 08

In order to check that everything is working correctly, we can run the ARCHIVE LOG LIST command as shown in listing 09.

SQL> ARCHIVE LOG LIST

Listing 09

We can force an Oracle instance to switch the current log file and archive it using the command shown in listing 10.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Listing 10

2 - Backup the database files

We can put a tablespace into backup mode as shown below in listing 11.

SQL> ALTER TABLESPACE data BEGIN BACKUP;

Listing 11

The next step is to back up the underlying database files using OS commands. In order to get the list of database files associated with a tablespace, we run the statement shown in listing 12.

SQL> SELECT file_name
  2  FROM dba_data_files
  3  WHERE tablespace_name = 'DATA';

Listing 12

From the output of the former step, we can copy the database files into a backup directory using the cp Unix command as shown in listing 13.

$ cp -a database_file_path /u05/oradata/DBTEST/backup/

Listing 13

When the copy is done, we need to return the tablespace to its normal state as shown in listing 14.

SQL> ALTER TABLESPACE data END BACKUP;

Listing 14

3 - Backup the archived log files

After completing an inconsistent backup, we need to back up all archived redo logs produced since the backup began; otherwise, we cannot recover from the backup. We can delete the original archived logs from storage after this backup step.

4 - Backup the control files

While running in archivelog mode, we need to back up the control files. During the update step of the recovery process, we must use the backup of the control file. We can back up a control file to a physical backup file as shown below in listing 15. The REUSE clause indicates to overwrite any existing backup.

SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/u05/oradata/DBTEST/backup/control.ctl.bak' REUSE;

Listing 15

It's worth noting that we can make several tablespace backups at once by putting the whole database into backup mode, as shown in listing 16.

SQL> ALTER DATABASE BEGIN BACKUP;
SQL> ALTER DATABASE END BACKUP;

Listing 16

Conclusion

In this second part, I've illustrated the key principles and concepts related to backups in Oracle databases using real-world examples. Now you can adapt these scripts to your own backup scenarios.

Comment on Functions, data types, and null values

Hello Fernando, thanks for the article. And if we want to ignore the NULLs, how could that be done? Regards

Blog Post: ORA-12516: TNS:listener could not find available handler – EBS R12.2 RAC Database

One of the EBS RAC nodes restarted, and afterwards database connections could not be established on that specific node. The error below was reported when trying to connect to the database on it.

[applprod@ebsnode1 appl]$ sqlplus

SQL*Plus: Release 10.1.0.5.0 - Production on Sun Feb 28 19:13:19 2016
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Enter user-name: apps@PROD
Enter password:
ERROR:
ORA-12516: TNS:listener could not find available handler with matching protocol

Issue:

>> On investigation we found only two listener processes running on that node, both from the Grid home:

[oraprod@ebsnode1 PROD1_ebsnode1]$ ps -ef | grep lsn
oraprod 4674 1 0 18:57 ? 00:00:00 /u01/grid/11.2.0.3/bin/tnslsnr LISTENER_SCAN1 -inherit
oraprod 4701 1 0 18:57 ? 00:00:00 /u01/grid/11.2.0.3/bin/tnslsnr LISTENER -inherit
oraprod 5956 5076 0 19:12 pts/2 00:00:00 grep lsn
[oraprod@ebsnode1 PROD1_ebsnode1]$

Cause:

>> The listener from the RDBMS home did not start after the node reboot.

Solution:

>> Start the listener from the RDBMS home and verify the connection.

[oraprod@ebsnode1 PROD1_ebsnode1]$ lsnrctl start PROD

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 28-FEB-2016 19:14:32
Copyright (c) 1991, 2011, Oracle. All rights reserved.

Starting /u01/oraprod/PROD/11.2.0/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.3.0 - Production
System parameter file is /u01/oraprod/PROD/11.2.0/network/admin/PROD1_ebsnode1/listener.ora
Log messages written to /u01/oraprod/PROD/11.2.0/log/diag/tnslsnr/ebsnode1/prod/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.26)(PORT=1524)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.21)(PORT=1524)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ebsnode1-vip.oralabs.com)(PORT=1524)(IP=FIRST)))
STATUS of the LISTENER
------------------------
Alias                     PROD
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                28-FEB-2016 19:14:47
Uptime                    0 days 0 hr. 0 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/oraprod/PROD/11.2.0/network/admin/PROD1_ebsnode1/listener.ora
Listener Log File         /u01/oraprod/PROD/11.2.0/log/diag/tnslsnr/ebsnode1/prod/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.26)(PORT=1524)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.21)(PORT=1524)))
Services Summary...
Service "PROD1" has 1 instance(s).
  Instance "PROD1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oraprod@ebsnode1 PROD1_ebsnode1]$

>> Check the listener processes:

[oraprod@ebsnode1 PROD1_ebsnode1]$ ps -ef | grep lsn
oraprod 4674 1 0 18:57 ? 00:00:00 /u01/grid/11.2.0.3/bin/tnslsnr LISTENER_SCAN1 -inherit
oraprod 4701 1 0 18:57 ? 00:00:00 /u01/grid/11.2.0.3/bin/tnslsnr LISTENER -inherit
oraprod 6025 1 0 19:14 ? 00:00:00 /u01/oraprod/PROD/11.2.0/bin/tnslsnr PROD -inherit
oraprod 6105 5076 0 19:16 pts/2 00:00:00 grep lsn
[oraprod@ebsnode1 PROD1_ebsnode1]$

>> Check the DB connection:

Enter user-name: apps@PROD
Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>

So we should ensure that all services start properly after cluster nodes are restarted or rebooted.

Thanks for reading.

regards,
X A H E E R

Blog Post: catupgrd.sql and ORA-01722: invalid number

Recently, while upgrading a database from Oracle version 11.2.0.3 to 11.2.0.4, the catupgrd.sql script failed with the following errors:

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following error is generated if (1) the old release uses a time
DOC>   zone file version newer than the one shipped with the new oracle
DOC>   release and (2) the new oracle home has not been patched yet:
DOC>
DOC>   SELECT TO_NUMBER('MUST_PATCH_TIMEZONE_FILE_VERSION_ON_NEW_ORACLE_HOME')
DOC>   *
DOC>   ERROR at line 1:
DOC>   ORA-01722: invalid number
DOC>
DOC>   o Action:
DOC>     Shutdown database ("alter system checkpoint" and then "shutdown abort").
DOC>     Patch new ORACLE_HOME to the same time zone file version as used
DOC>     in the old ORACLE_HOME.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#
SELECT TO_NUMBER('MUST_PATCH_TIMEZONE_FILE_VERSION_ON_NEW_ORACLE_HOME')
       *
ERROR at line 1:
ORA-01722: invalid number

According to the error text, there is a time zone file version mismatch between 11.2.0.3 (the old ORACLE_HOME) and 11.2.0.4 (the new ORACLE_HOME), and Oracle suggests patching the new ORACLE_HOME (11.2.0.4) to the same time zone file version used by the old ORACLE_HOME (11.2.0.3). I knew that the old ORACLE_HOME was using time zone file version 17, so I decided to look into the time zone file version of the new ORACLE_HOME.

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  2  FROM DATABASE_PROPERTIES
  3  WHERE PROPERTY_NAME LIKE 'DST_%'
  4  ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         17
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE

So we have the same time zone version (17) in the new and old ORACLE_HOME, yet Oracle is still complaining about a time zone mismatch during the upgrade process. Somehow Oracle is not able to determine the time zone version from the new ORACLE_HOME, which results in the mismatch. Let's query the V$TIMEZONE_FILE view in the new ORACLE_HOME to check whether the time zone file can be found:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select * from V_$TIMEZONE_FILE;

no rows selected

Here is the problem. Even though we have the same time zone file version in the new and old ORACLE_HOME, Oracle is not able to locate the time zone file for that version under the new ORACLE_HOME. Let's check whether the time zone file is present there:

oracle@labserver1:~> cd $ORACLE_HOME/oracore/zoneinfo
oracle@labserver1:/app/oracle/product/11.2.0.4/oracore/zoneinfo> ls -lrt *_17.dat
ls: cannot access *_17.dat: No such file or directory

The time zone files for version 17 are missing from the new ORACLE_HOME (they are not shipped by default with Oracle 11.2.0.4; the latest time zone file version shipped with Oracle 11.2 is version 14), which is why a query against V$TIMEZONE_FILE returns no information. We need to make sure that the time zone files are present under the new ORACLE_HOME for the upgrade to succeed.
We can simply copy the relevant time zone files from the old ORACLE_HOME to the new ORACLE_HOME, as shown below:

oracle@labserver1:~> echo $ORACLE_HOME
/app/oracle/product/11.2.0.4/
oracle@labserver1:~> cp /app/oracle/product/11.2.0.3/oracore/zoneinfo/*_17.dat $ORACLE_HOME/oracore/zoneinfo
oracle@labserver1:~> find $ORACLE_HOME -name *_17.dat -print 2>/dev/null
/app/oracle/product/11.2.0.4/oracore/zoneinfo/timezone_17.dat
/app/oracle/product/11.2.0.4/oracore/zoneinfo/timezlrg_17.dat

Let's check whether Oracle is able to locate the time zone file now:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select * from V_$TIMEZONE_FILE;

FILENAME             VERSION
-------------------- ----------
timezlrg_17.dat      17

Yes, Oracle is now able to determine the time zone information from the new ORACLE_HOME. Now we can re-initiate the catupgrd.sql script to complete the database upgrade.

Footnote: To avoid time zone issues during a database upgrade, make sure that the time zone files relevant to the current time zone version are present under the new ORACLE_HOME. If the time zone files are missing (not shipped with the specific Oracle binaries) under the new ORACLE_HOME, you can simply copy them over from the existing (old) ORACLE_HOME. Don't forget to take note of the time zone version (from the old ORACLE_HOME) before starting the database upgrade, and make sure the time zone version matches in the new ORACLE_HOME before you initiate the catupgrd.sql script from the new home. If the time zone version is different in the new ORACLE_HOME, follow the Oracle-documented process to upgrade the time zone version before initiating catupgrd.sql.
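A quick pre-upgrade check along these lines would catch this early (a sketch; run the query in the database while it still runs out of the old home, and NEW_ORACLE_HOME is a placeholder for the 11.2.0.4 home path - the _17 suffix matches this environment's version):

SQL> SELECT * FROM v$timezone_file;

$ ls $NEW_ORACLE_HOME/oracore/zoneinfo/*_17.dat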

Blog Post: Using UTL_MAIL in Oracle 12c Database PDBs

As many DBAs would know, the UTL_MAIL database package is used to manage email. It allows you to send an email message directly from the database server, with cc and bcc, and also caters for RAW attachments. This package is not installed by default, for obvious security reasons, but needs to be installed manually via two scripts: utlmail.sql and prvtmail.plb, both of which are in the rdbms/admin directory under the Oracle Home.

sqlplus / as sysdba
SQL> @?/rdbms/admin/utlmail.sql
SQL> @?/rdbms/admin/prvtmail.plb

(The ? in the two lines above is a shortcut notation that refers to the Oracle Home.)

However, in the case of an Oracle 12c Database, it is not enough to run these scripts in the root container. You need to run them in each PDB in which you need UTL_MAIL to work. UTL_MAIL also requires the SMTP_OUT_SERVER initialization parameter to be defined in the init.ora file. This parameter specifies the SMTP host and port to which UTL_MAIL will send email. You can specify multiple servers in this parameter; if the first is unavailable, the next one is used, and so on. If you do not specify SMTP_OUT_SERVER explicitly, the SMTP server name used by UTL_MAIL defaults to the value of DB_DOMAIN, with a port number of 25. More information on the UTL_MAIL package, including how to send raw attachments, is in the 12c database documentation here.
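A minimal sketch of the setup and a test send (the SMTP host and the email addresses are placeholders):

ALTER SYSTEM SET smtp_out_server = 'mailhost.example.com:25' SCOPE=BOTH;

BEGIN
   UTL_MAIL.send (sender     => 'db@example.com',
                  recipients => 'dba@example.com',
                  subject    => 'UTL_MAIL test',
                  message    => 'Hello from the PDB.');
END;
/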

Blog Post: Perl lib version (5.10.0) doesn’t match executable version – adcfgclone.pl dbTechStack (or) dbTier – R12.1

This is a common issue we face while performing a clone of EBS R12.1: on the target, the Rapid Clone utility is unable to locate the required Perl version. The following error is reported while executing the clone on the dbTier:

[oramgr@erpnode2 bin]$ perl adcfgclone.pl dbTier
Perl lib version (5.10.0) doesn't match executable version (v5.8.8) at /d01/EBS_DB/11.2.0/perl/lib/5.10.0/x86_64-linux-thread-multi/Config.pm line 46.
Compilation failed in require at adcfgclone.pl line 28.
BEGIN failed--compilation aborted at adcfgclone.pl line 28.
[oramgr@erpnode2 bin]$

The resolution mentioned on many blogs is to set the "PERL5LIB" environment variable, but on its own that is applicable only if you're refreshing a database or the existing environment is already set.

[oramgr@erpnode2 bin]$ export PERL5LIB=/d01/EBS_DB/11.2.0/perl/lib/5.10.0:/d01/EBS_DB/11.2.0/perl/lib/site_perl:/d01/EBS_DB/11.2.0/appsutil/perl

It still fails:

[oramgr@erpnode2 bin]$ perl adcfgclone.pl dbTier
Perl lib version (5.10.0) doesn't match executable version (v5.8.8) at /d01/EBS_DB/11.2.0/perl/lib/5.10.0/x86_64-linux-thread-multi/Config.pm line 46.
Compilation failed in require at adcfgclone.pl line 28.
BEGIN failed--compilation aborted at adcfgclone.pl line 28.
[oramgr@erpnode2 bin]$

Cause: perl still resolves to the old version:

[oramgr@erpnode2 bin]$ which perl
/usr/bin/perl

Solution: Set the correct environment variables - ORACLE_HOME, PERL5LIB and, most importantly, PATH.

[oramgr@erpnode2 bin]$ export ORACLE_HOME=/d01/EBS_DB/11.2.0
[oramgr@erpnode2 bin]$ export PERL5LIB=$ORACLE_HOME/perl/lib/5.10.0:$ORACLE_HOME/perl/site_perl/5.10.0:$ORACLE_HOME/appsutil/perl
[oramgr@erpnode2 bin]$ export PATH=$ORACLE_HOME/perl:$ORACLE_HOME/perl/lib:$ORACLE_HOME/perl/bin:$PATH
[oramgr@erpnode2 bin]$

Check the version of perl again; it should now point to version 5.10:

[oramgr@erpnode2 bin]$ perl -v

This is perl, v5.10.0 built for x86_64-linux-thread-multi
Copyright 1987-2007, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on this system using "man perl" or "perldoc perl". If you have access to the Internet, point your browser at http://www.perl.org/, the Perl Home Page.

[oramgr@erpnode2 bin]$ which perl
/d01/EBS_DB/11.2.0/perl/bin/perl

Execute adcfgclone.pl again and it should work fine:

[oramgr@erpnode2 bin]$ perl adcfgclone.pl dbTier

Copyright (c) 2002 Oracle Corporation
Redwood Shores, California, USA

Oracle Applications Rapid Clone

Version 12.0.0

adcfgclone Version 120.31.12010000.8

Enter the APPS password :

Running: /d01/EBS_DB/11.2.0/appsutil/clone/bin/../jre/bin/java -Xmx600M -cp /d01/EBS_DB/11.2.0/appsutil/clone/jlib/java:/d01/EBS_DB/11.2.0/appsutil/clone/jlib/xmlparserv2.jar:/d01/EBS_DB/11.2.0/appsutil/clone/jlib/ojdbc5.jar oracle.apps.ad.context.CloneContext -e /d01/EBS_DB/11.2.0/appsutil/clone/bin/../context/db/CTXORIG.xml -validate -pairsfile /tmp/adpairsfile_8658.lst -stage /d01/EBS_DB/11.2.0/appsutil/clone 2> /tmp/adcfgclone_8658.err; echo $? > /tmp/adcfgclone_8658.res

Thanks for reading.

regards,
X A H E E R

Blog Post: BEGIN fnd_gsm_util.upload_context_file — Oracle error -376: ORA-00376: file 22 cannot be read

Recently we cloned an EBS RAC database instance to non-RAC, and while executing AutoConfig it was unable to locate one of the datafiles, as shown in the error stack below:

[oramgr@erpnode2 EBS_erpnode2]$ adautocfg.sh
Enter the APPS user password:
The log file for this session is located at: /d01/EBS_DB/11.2.0/appsutil/log/EBS_erpnode2/03161526/adconfig.log

AutoConfig is configuring the Database environment...

AutoConfig will consider the custom templates if present.
Using ORACLE_HOME location : /d01/EBS_DB/11.2.0
Classpath : :/d01/EBS_DB/11.2.0/jdbc/lib/ojdbc5.jar:/d01/EBS_DB/11.2.0/appsutil/java/xmlparserv2.jar:/d01/EBS_DB/11.2.0/appsutil/java:/d01/EBS_DB/11.2.0/jlib/netcfg.jar:/d01/EBS_DB/11.2.0/jlib/ldapjclnt11.jar

Using Context file : /d01/EBS_DB/11.2.0/appsutil/EBS_erpnode2.xml

Context Value Management will now update the Context file

Updating Context file...COMPLETED

Attempting upload of Context file and templates to database...ERROR: InDbCtxFile.uploadCtx() : Exception : Error executng BEGIN fnd_gsm_util.upload_context_file(:1,:2,:3,:4,:5); END;: 1; Oracle error -376: ORA-00376: file 22 cannot be read at this time
ORA-01111: name for data file 22 is unknown - rename to correct file
ORA-01110: data file 22: '/d01/EBS_DB/11.2.0/dbs/MISSING00022' has been detected in FND_GSM_UTIL.upload_context_file.

oracle.apps.ad.autoconfig.oam.InDbCtxFileException: Error executng BEGIN fnd_gsm_util.upload_context_file(:1,:2,:3,:4,:5); END;: 1; Oracle error -376: ORA-00376: file 22 cannot be read at this time
ORA-01111: name for data file 22 is unknown - rename to correct file
ORA-01110: data file 22: '/d01/EBS_DB/11.2.0/dbs/MISSING00022' has been detected in FND_GSM_UTIL.upload_context_file.
        at oracle.apps.ad.autoconfig.oam.InDbCtxFile.uploadCtx(InDbCtxFile.java:281)
        at oracle.apps.ad.autoconfig.oam.CtxSynchronizer.uploadToDb(CtxSynchronizer.java:328)
        at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.updateDBCtx(FileSysDBCtxMerge.java:721)
        at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.updateDBFiles(FileSysDBCtxMerge.java:226)
        at oracle.apps.ad.context.CtxValueMgt.processCtxFile(CtxValueMgt.java:1690)
        at oracle.apps.ad.context.CtxValueMgt.main(CtxValueMgt.java:763)

FAILED COMPLETED

Updating rdbms version in Context file to db112
Updating rdbms type in Context file to 64 bits
Configuring templates from ORACLE_HOME ...

AutoConfig completed successfully.
[oramgr@erpnode2 EBS_erpnode2]$

AutoConfig itself completes successfully, but it is not able to locate the datafile.

Cause: The UNDO tablespace from instance 2 is still included in the database:

TABLESPACE_NAME                FILE_NAME
------------------------------ ------------------------------
APPS_TS_TX_DATA                /d01/EBSDATA/a_txn_data04.dbf
APPS_TS_TX_IDX                 /d01/EBSDATA/a_txn_ind06.dbf
APPS_TS_SEED                   /d01/EBSDATA/a_ref03.dbf
APPS_TS_INTERFACE              /d01/EBSDATA/a_int02.dbf
SYSAUX                         /d01/EBSDATA/sysaux02.dbf
APPS_TS_TX_DATA                /d01/EBSDATA/a_txn_data05.dbf
SYSAUX                         /d01/EBSDATA/sysaux03.dbf
UNDOTBS2                       /d01/EBS_DB/11.2.0/dbs/MISSING00022

Check the default UNDO tablespace:

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      APPS_UNDOTS1

SQL>

So it's clear that this tablespace doesn't belong to the current instance.

Solution: Drop UNDOTBS2, the tablespace with the missing datafile.

SQL> drop tablespace undotbs2
  2  /

Tablespace dropped.
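Before rerunning AutoConfig, it's worth confirming that no other datafiles are in this "MISSING" state - a quick check (a sketch):

SQL> SELECT tablespace_name, file_name
  2  FROM dba_data_files
  3  WHERE file_name LIKE '%MISSING%';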
Execute AutoConfig again and it should complete without issues:

[oramgr@erpnode2 EBS_erpnode2]$ adautocfg.sh
Enter the APPS user password:
The log file for this session is located at: /d01/EBS_DB/11.2.0/appsutil/log/EBS_erpnode2/03161540/adconfig.log

AutoConfig is configuring the Database environment...

AutoConfig will consider the custom templates if present.
Using ORACLE_HOME location : /d01/EBS_DB/11.2.0
Classpath : :/d01/EBS_DB/11.2.0/jdbc/lib/ojdbc5.jar:/d01/EBS_DB/11.2.0/appsutil/java/xmlparserv2.jar:/d01/EBS_DB/11.2.0/appsutil/java:/d01/EBS_DB/11.2.0/jlib/netcfg.jar:/d01/EBS_DB/11.2.0/jlib/ldapjclnt11.jar

Using Context file : /d01/EBS_DB/11.2.0/appsutil/EBS_erpnode2.xml

Context Value Management will now update the Context file

Updating Context file...COMPLETED

Attempting upload of Context file and templates to database...COMPLETED

Updating rdbms version in Context file to db112
Updating rdbms type in Context file to 64 bits
Configuring templates from ORACLE_HOME ...

AutoConfig completed successfully.
[oramgr@erpnode2 EBS_erpnode2]$

Conclusion: If we perform a manual clone from RAC to non-RAC, we must ensure that all leftover UNDO tablespaces with missing datafiles are dropped before executing AutoConfig on the dbTier.

Blog Post: Learning curve (Oracle 12c Multitenant, Oracle Cloud & Golden Gate)

First things first. After almost 8 years of a successful tenure at my previous company, I moved on to new challenges on 1-Mar-2016. I joined eProseed KSA as Technical Director, where my prime responsibilities are pre-sales, technical planning, motivating teams, and hands-on technical work on critical projects. I must say, this is what I had been looking for for a very long time, and I am sure I'm going to enjoy my new role very much. Over the past couple of weeks, I have been busy exploring the following concepts, though they are not very new to most people:

Oracle 12c Multitenant
Oracle Cloud
Golden Gate
Enterprise Manager Cloud Control 13c

I am also involved in an additional task which I can't reveal due to an NDA, but will share later on. I have started a new WhatsApp group, Trend Oracle Cloud, with more than 60 members as of now. Hope every one of you is doing great. Stay tuned for more updates.

Wiki Page: Managing ASM Devices in Solaris 11 Non Global Zone - Part2

Introduction

This is my third article in the series. In the previous two articles we saw how to install and configure an ASM instance inside a non-global zone, and how to add additional devices to an existing ASM instance running in a non-global zone. In this article we will see how to remove provisioned ASM devices from a Solaris 11 non-global zone. We have to be very careful with device management, especially for PRODUCTION environments running in non-global zones. Please refer to the links below for the previous articles:

http://www.toadworld.com/platforms/oracle/w/wiki/11365.installing-and-configuring-oracle-12c-grid-infrasturucture-and-asm-in-solaris-11-non-global-zones

http://www.toadworld.com/platforms/oracle/w/wiki/11379.managing-asm-devices-in-solaris-11-non-global-zone-part1

If you have not read the previous articles, I highly recommend reading them first to better understand this one.

Environment details

I will be using the same environment as in my previous articles:

- soltest is the hostname of the global zone (GZ).
- dbnode1 is the hostname of the non-global zone (NGZ).

Demonstration

In this demonstration we will execute TEST cases for destroying the ASM devices provisioned in a non-global zone. The TEST cases are really situations that could be triggered accidentally on any PROD/DEV/TEST system; when that happens we lose data, and restoring/recovering the lost data for the disk group can cost a huge amount of time. Output from the global zone and from the non-global zone can be told apart by the shell prompts (root@soltest vs. root@dbnode1).

The following ASM devices and disk groups exist in the non-global zone:

SQL> column name format a15
SQL> column path format a45
SQL> select name, path from v$asm_disk;

NAME            PATH
--------------- ---------------------------------------------
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk04
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk03
GRID_0000       /dev/rdsk/c2t2d0s5
GRID_0001       /dev/rdsk/c2t3d0s5
DATA_0000       /dev/rdsk/c2t4d0s5
DATA_0001       /dev/rdsk/c2t5d0s5
DATA3_0000      /dev/zvol/rdsk/dbzone/dbnode1/asmdisk01
DATA4_0000      /dev/zvol/rdsk/dbzone/dbnode1/asmdisk02

8 rows selected.

SQL> select name from v$asm_diskgroup;

NAME
---------------
DATA4
DATA3
DATA
GRID

SQL>

In this demonstration we will work on the disk groups "DATA" and "DATA3". The DATA3 disk group is configured on the ZFS volume "/dev/zvol/rdsk/dbzone/dbnode1/asmdisk01", and the DATA disk group is configured on "/dev/rdsk/c2t4d0s5" and "/dev/rdsk/c2t5d0s5".

TEST CASES

We will now walk through the different situations that can happen in real life while working with ASM instances in global and non-global zones.

TEST CASE 1: We will try to destroy the ZFS volume from the global zone, and then we will see the impact on the underlying ASM disk groups.
- Check the existing ZFS volumes:

root@soltest:~# zfs list -t volume
NAME                       USED  AVAIL  REFER  MOUNTPOINT
dbzone/dbnode1/asmdisk01  1.03G  39.9G  53.4M  -
dbzone/dbnode1/asmdisk02  1.03G  39.9G  53.4M  -
dbzone/dbnode1/asmdisk03  1.03G  40.0G    16K  -
dbzone/dbnode1/asmdisk04  1.03G  40.0G    16K  -
rpool/dump                2.06G  16.2G  2.00G  -
rpool/swap                5.16G  16.3G  5.00G  -
root@soltest:~#

- Let's try to destroy the ZFS volume "asmdisk01", belonging to disk group DATA3:

root@soltest:~# zfs destroy dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy
root@soltest:~#

- Try with the force option:

root@soltest:~# zfs destroy -f dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy
root@soltest:~#

Great - we are still not able to destroy the ZFS volume, as it is in use by the ASM instance in the non-global zone. So the ASM devices are safe: we cannot destroy the volumes even from the global zone.

TEST CASE 2: In this test case we will dismount the ASM disk group "DATA3", and then try to destroy the ZFS volume belonging to it from the global zone.

- Check the existing mounted disk groups.
- Dismount the disk group using asmca or the command line (ALTER DISKGROUP data3 DISMOUNT;).
- Disk group "DATA3" is dismounted.
- Now that "DATA3" is dismounted, we will try to destroy the ZFS volume (asmdisk01) belonging to it from the global zone.

- Try to destroy the ZFS volume:

root@soltest:~# zfs destroy dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy

- Now try using the force option:

root@soltest:~# zfs destroy -f dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy
root@soltest:~#

- Good - we are still not able to destroy the ZFS volume from the global zone.

TEST CASE 3: In this scenario we will drop the ASM disk group, and then try to destroy the ZFS volume from the global zone. Disk group "DATA3" is already dismounted, so we simply drop it.

- After dropping the disk group, try to destroy the ZFS volume:

root@soltest:~# zfs destroy dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy
root@soltest:~# zfs destroy -f dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy
root@soltest:~#

- We are not able to destroy the ZFS volume even after dropping the disk group associated with it.

TEST CASE 4: In this test case we will remove the ZFS device from the non-global zone's configuration and then try to destroy the ZFS volume.
root@soltest:~# zonecfg -z dbnode1
zonecfg:dbnode1> info
zonename: dbnode1
zonepath: /dbzone/dbnode1
brand: solaris
autoboot: true
autoshutdown: shutdown
bootargs: -m verbose
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
tenant:
fs-allowed:
anet:
        linkname: net0
        lower-link: auto
        allowed-address not specified
        configure-allowed-address: true
        defrouter not specified
        allowed-dhcp-cids not specified
        link-protection: mac-nospoof
        mac-address: auto
        auto-mac-address: 2:8:20:1c:41:61
        mac-prefix not specified
        mac-slot not specified
        vlan-id not specified
        priority not specified
        rxrings not specified
        txrings not specified
        mtu not specified
        maxbw not specified
        rxfanout not specified
        vsi-typeid not specified
        vsi-vers not specified
        vsi-mgrid not specified
        etsbw-lcl not specified
        cos not specified
        pkey not specified
        linkmode not specified
        evs not specified
        vport not specified
device:
        match: /dev/rdsk/c2t2d0s0
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/rdsk/c2t3d0s0
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/dsk/c2t3d0s0
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/rdsk/c2t2d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/dsk/c2t2d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/rdsk/c2t3d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/dsk/c2t3d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/dsk/c2t5d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/rdsk/c2t5d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/dsk/c2t4d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/rdsk/c2t4d0s5
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/zvol/dsk/dbzone/dbnode1/asmvol1
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk01
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk02
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk03
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
device:
        match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk04
        storage not specified
        allow-partition not specified
        allow-raw-io not specified
zonecfg:dbnode1>

- Once connected to the zone with the zonecfg command, we can see the entire device configuration of the non-global zone using the "info" command. Now we will remove the "/dev/zvol/rdsk/dbzone/dbnode1/asmdisk01" ZFS volume from the non-global zone:

zonecfg:dbnode1> remove device match=/dev/zvol/rdsk/dbzone/dbnode1/asmdisk01
zonecfg:dbnode1> commit;
zonecfg:dbnode1>

- After removing the device, it is no longer listed in the zonecfg "info" output:
device:
    match: /dev/zvol/dsk/dbzone/dbnode1/asmvol1
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk02
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk03
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk04
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
zonecfg:dbnode1>

- After removing the device, try to destroy the ZFS volume. Before proceeding to destroy the ZFS volume from the global zone, let's see whether this device is still visible under the ASM instance and at the OS level:

SQL> column name format a15
SQL> column path format a45
SQL> select name, path from v$asm_disk;

NAME            PATH
--------------- ---------------------------------------------
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk04
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk03
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk01
GRID_0000       /dev/rdsk/c2t2d0s5
GRID_0001       /dev/rdsk/c2t3d0s5
DATA_0000       /dev/rdsk/c2t4d0s5
DATA_0001       /dev/rdsk/c2t5d0s5
DATA4_0000      /dev/zvol/rdsk/dbzone/dbnode1/asmdisk02

8 rows selected.

SQL>

grid12c@dbnode1:/dev/zvol/rdsk/dbzone/dbnode1$ ls -lrt
total 0
crw-rw---- 1 grid12c dba 303, 6 Oct 14 16:28 asmdisk04
crw-rw---- 1 grid12c dba 303, 5 Oct 14 16:28 asmdisk03
crw-rw---- 1 grid12c dba 303, 4 Oct 14 16:57 asmdisk01
crw-rw---- 1 grid12c dba 303, 3 Oct 14 17:11 asmdisk02
grid12c@dbnode1:/dev/zvol/rdsk/dbzone/dbnode1$

- The asmdisk01 volume is still visible in V$ASM_DISK.
- Now dynamically apply the configuration changes to the non-global zone so that the recently removed ASM volume is reflected:

root@soltest:~# zoneadm -z dbnode1 apply
zone 'dbnode1': Checking: Removing device match=/dev/zvol/rdsk/dbzone/dbnode1/asmdisk01
zone 'dbnode1': Applying the changes
root@soltest:~#

grid12c@dbnode1:/dev/zvol/rdsk/dbzone/dbnode1$ ls -lrt
total 0
crw-rw---- 1 grid12c dba 303, 6 Oct 14 16:28 asmdisk04
crw-rw---- 1 grid12c dba 303, 5 Oct 14 16:28 asmdisk03
crw-rw---- 1 grid12c dba 303, 4 Oct 14 16:57 asmdisk01
crw-rw---- 1 grid12c dba 303, 3 Oct 14 17:14 asmdisk02
grid12c@dbnode1:/dev/zvol/rdsk/dbzone/dbnode1$

- Now let's try to destroy the ZFS volume:

root@soltest:~# zfs destroy -f dbzone/dbnode1/asmdisk01
cannot destroy 'dbzone/dbnode1/asmdisk01': volume is busy
root@soltest:~#

Even after applying the reconfiguration we are still not able to destroy the ZFS volume. I believe this is unexpected behaviour: once the device is removed from the zone configuration, it should no longer be visible in the non-global zone. Removing the devices from the non-global zone turns out to require a restart of the Grid Infrastructure services.
- Stop the Grid Infrastructure services running in the non-global zone:

root@dbnode1:/u01/grid12c/product/12.1.0/grid/bin# pwd
/u01/grid12c/product/12.1.0/grid/bin
root@dbnode1:/u01/grid12c/product/12.1.0/grid/bin# ./crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbnode1'
CRS-2673: Attempting to stop 'ora.DATA4.dg' on 'dbnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dbnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'dbnode1' succeeded
CRS-2677: Stop of 'ora.DATA4.dg' on 'dbnode1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'dbnode1'
CRS-2673: Attempting to stop 'ora.GRID.dg' on 'dbnode1'
CRS-2677: Stop of 'ora.DATA.dg' on 'dbnode1' succeeded
CRS-2677: Stop of 'ora.GRID.dg' on 'dbnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'dbnode1'
CRS-2677: Stop of 'ora.asm' on 'dbnode1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'dbnode1'
CRS-2677: Stop of 'ora.evmd' on 'dbnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'dbnode1'
CRS-2677: Stop of 'ora.cssd' on 'dbnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
root@dbnode1:/u01/grid12c/product/12.1.0/grid/bin#

- Now let's try to destroy the ZFS volume:

root@soltest:~# zfs destroy dbzone/dbnode1/asmdisk01
root@soltest:~#

This time the ZFS volume has been destroyed successfully. So it is very clear that if we are using ZFS volumes as ASM devices, we cannot destroy them before stopping the Grid Infrastructure services. After starting the GI services back up, check for the destroyed ZFS volume from the ASM instance and at the OS level:

SQL> column name format a15
SQL> column path format a45
SQL> select name, path from v$asm_disk;

NAME            PATH
--------------- ---------------------------------------------
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk04
                /dev/zvol/rdsk/dbzone/dbnode1/asmdisk03
GRID_0000       /dev/rdsk/c2t2d0s5
GRID_0001       /dev/rdsk/c2t3d0s5
DATA_0000       /dev/rdsk/c2t4d0s5
DATA_0001       /dev/rdsk/c2t5d0s5
DATA4_0000      /dev/zvol/rdsk/dbzone/dbnode1/asmdisk02

7 rows selected.

SQL>

grid12c@dbnode1:/u01/grid12c/product/12.1.0/grid$ cd /dev/zvol/rdsk/dbzone/dbnode1/
grid12c@dbnode1:/dev/zvol/rdsk/dbzone/dbnode1$ ls -lrt
total 0
crw-rw---- 1 grid12c dba 303, 6 Oct 14 16:28 asmdisk04
crw-rw---- 1 grid12c dba 303, 5 Oct 14 16:28 asmdisk03
crw-rw---- 1 grid12c dba 303, 3 Oct 14 17:50 asmdisk02
grid12c@dbnode1:/dev/zvol/rdsk/dbzone/dbnode1$

Now "asmdisk01" is no longer visible in the list.

TEST CASE -5: All of the test cases executed so far were on ZFS volumes. In this test case we will try to format, from the global zone, a disk that is provisioned to the non-global zone and is in use by ASM, and we will see the impact of this change on the ASM disk group. The disk group DATA is configured with the devices "/dev/rdsk/c2t4d0s5" & "/dev/rdsk/c2t5d0s5". In this scenario we will format the device "/dev/rdsk/c2t5d0s5" from the global zone.

root@soltest:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c2t0d0 /pci@0,0/pci1000,8000@14/sd@0,0
1. c2t1d0 /pci@0,0/pci1000,8000@14/sd@1,0
2. c2t2d0 /pci@0,0/pci1000,8000@14/sd@2,0
3. c2t3d0 /pci@0,0/pci1000,8000@14/sd@3,0
4. c2t4d0 /pci@0,0/pci1000,8000@14/sd@4,0
5. c2t5d0 /pci@0,0/pci1000,8000@14/sd@5,0
6. c2t6d0 /pci@0,0/pci1000,8000@14/sd@6,0
Specify disk (enter its number): 5
selecting c2t5d0
[disk formatted]

FORMAT MENU:
    disk       - select a disk
    type       - select (define) a disk type
    partition  - select (define) a partition table
    current    - describe the current disk
    format     - format and analyze the disk
    fdisk      - run the fdisk program
    repair     - repair a defective sector
    label      - write label to the disk
    analyze    - surface analysis
    defect     - defect list management
    backup     - search for backup labels
    verify     - read and display labels
    save       - save new disk/partition definitions
    inquiry    - show disk ID
    volname    - set 8-character volume name
    !<cmd>     - execute <cmd>, then return
    quit
format> p

PARTITION MENU:
    0      - change `0' partition
    1      - change `1' partition
    2      - change `2' partition
    3      - change `3' partition
    4      - change `4' partition
    5      - change `5' partition
    6      - change `6' partition
    7      - change `7' partition
    select - select a predefined table
    modify - modify a predefined partition table
    name   - name the current table
    print  - display the current table
    label  - write partition map and label to the disk
    !<cmd> - execute <cmd>, then return
    quit
partition> p

Current partition table (original):
Total disk cylinders available: 1021 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0 - 1020     1021.00MB    (1021/0/0) 2091008
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       1 - 1000     1000.00MB    (1000/0/0) 2048000
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0
  8       boot    wu       0 - 0           1.00MB    (1/0/0)       2048
  9 unassigned    wm       0               0         (0/0/0)          0

partition> 5
Part      Tag    Flag     Cylinders        Size            Blocks
  5 unassigned    wm       1 - 1000     1000.00MB    (1000/0/0) 2048000

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[1]: 0
Enter partition size[2048000b, 1000c, 999e, 1000.00mb, 0.98gb]: 0
partition> l
Ready to label disk, continue? y
partition> p

Current partition table (unnamed):
Total disk cylinders available: 1021 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2     backup    wu       0 - 1020     1021.00MB    (1021/0/0) 2091008
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0
  8       boot    wu       0 - 0           1.00MB    (1/0/0)       2048
  9 unassigned    wm       0               0         (0/0/0)          0

- Now that we have wiped the disk slice associated with "DATA" from the global zone, the disk group should no longer allow any I/O operations. Let's try to create a file on DG - DATA:

grid12c@dbnode1:~$ asmcmd
ASMCMD> ls
DATA/
DATA4/
GRID/
ASMCMD> cd DATA
ASMCMD> ls
ASMCMD> mkdir test1
ORA-15032: not all alterations performed
ORA-15130: diskgroup "DATA" is being dismounted (DBD ERROR: OCIStmtExecute)
ASMCMD> exit
grid12c@dbnode1:~$

Content of the ASM alert logfile:

Wed Oct 14 17:39:46 2015
NOTE: diskgroup resource ora.DATA4.dg is online
NOTE: diskgroup resource ora.DATA.dg is online
NOTE: diskgroup resource ora.GRID.dg is online
Wed Oct 14 18:35:01 2015
SQL> /* ASMCMD */alter diskgroup /*ASMCMD*/ "DATA" add directory '+DATA/test1'
Wed Oct 14 18:35:01 2015
WARNING: Write Failed.
group:3 disk:1 AU:10 offset:57344 size:4096
path:/dev/rdsk/c2t5d0s5
incarnation:0xf0d0e8e9 synchronous result:'I/O error'
subsys:System krq:0xffff80ffbf733688 bufp:0x9e590000 osderr1:0x0 osderr2:0x0
IO elapsed time: 0 usec Time waited on I/O: 0 usec
NOTE: unable to write any mirror side for diskgroup DATA
Wed Oct 14 18:35:01 2015
Errors in file /u01/grid12c/diag/asm/+asm/+ASM/trace/+ASM_lgwr_14504.trc:
ORA-15080: synchronous I/O operation failed to write block 0 of disk 1 in disk group DATA
ORA-27072: File I/O error
Solaris-AMD64 Error: 5: I/O error
Additional information: 4
Additional information: 20592
Additional information: 4294967295
NOTE: cache initiating offline of disk 1 group DATA
NOTE: process _lgwr_+asm (14504) initiating offline of disk 1.4040222953 (DATA_0001) with mask 0x7e in group 3 with client assisting
NOTE: initiating PST update: grp 3 (DATA), dsk = 1/0xf0d0e8e9, mask = 0x6a, op = clear
Wed Oct 14 18:35:01 2015
GMON updating disk modes for group 3 at 75 for pid 10, osid 14504
ERROR: disk 1(DATA_0001) in group 3(DATA) cannot be offlined because the disk group has external redundancy.
Wed Oct 14 18:35:01 2015
ERROR: too many offline disks in PST (grp 3)
Wed Oct 14 18:35:01 2015
NOTE: cache dismounting (not clean) group 3/0x8DC01812 (DATA)
Wed Oct 14 18:35:01 2015
NOTE: sending clear offline flag message (3717243651) to 1 disk(s) in group 3
Wed Oct 14 18:35:01 2015
WARNING: Disk 1 (DATA_0001) in group 3 mode 0x7f offline is being aborted
Wed Oct 14 18:35:01 2015
NOTE: halting all I/Os to diskgroup 3 (DATA)
Wed Oct 14 18:35:01 2015
NOTE: unable to offline disks after getting write error for diskgroup DATA
Wed Oct 14 18:35:01 2015
Errors in file /u01/grid12c/diag/asm/+asm/+ASM/trace/+ASM_lgwr_14504.trc:
ORA-15066: offlining disk "DATA_0001" in group "DATA" may result in a data loss
ORA-27072: File I/O error
Solaris-AMD64 Error: 5: I/O error
Additional information: 4
Additional information: 20592
Additional information: 4294967295
NOTE: disk 1 had IO error
Wed Oct 14 18:35:01 2015
ORA-15032: not all alterations performed
ORA-15130: diskgroup "DATA" is being dismounted
Wed Oct 14 18:35:01 2015
ERROR: /* ASMCMD */alter diskgroup /*ASMCMD*/ "DATA" add directory '+DATA/test1'
Wed Oct 14 18:35:01 2015
NOTE: messaging CKPT to quiesce pins Unix process pid: 21743, image: oracle@dbnode1 (B000)
Wed Oct 14 18:35:02 2015
ERROR: ORA-15130 in COD recovery for diskgroup 3/0x8dc01812 (DATA)
ERROR: ORA-15130 thrown in RBAL for group number 3
Wed Oct 14 18:35:02 2015
Errors in file /u01/grid12c/diag/asm/+asm/+ASM/trace/+ASM_rbal_14514.trc:
ORA-15130: diskgroup "DATA" is being dismounted
Wed Oct 14 18:35:02 2015
NOTE: LGWR doing non-clean dismount of group 3 (DATA) thread 1
NOTE: LGWR sync ABA=7.12 last written ABA 7.13
Wed Oct 14 18:35:02 2015
NOTE: cache dismounted group 3/0x8DC01812 (DATA)
NOTE: cache deleting context for group DATA 3/0x8dc01812
Wed Oct 14 18:35:02 2015
SQL> alter diskgroup DATA dismount force /* ASM SERVER:2378176530 */
Wed Oct 14 18:35:02 2015
GMON dismounting group 3 at 76 for pid 23, osid 21743
Wed Oct 14 18:35:02 2015
NOTE: Disk DATA_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
SUCCESS: diskgroup DATA was dismounted
Wed Oct 14 18:35:02 2015
SUCCESS: alter diskgroup DATA dismount force /* ASM SERVER:2378176530 */
SUCCESS: ASM-initiated MANDATORY DISMOUNT of group DATA
Wed Oct 14 18:35:02 2015
NOTE: diskgroup resource ora.DATA.dg is offline
Wed Oct 14 18:35:02 2015
ASM Health Checker found 1 new failures

When we format a configured ASM device from the global zone, Solaris allows us to format it without any warning, and ASM dismounted the disk group immediately when we tried to perform I/O on that specific disk group.

Correct sequence for removing ASM devices from a NGZ (a condensed, command-level sketch of this sequence is given just before the next zonecfg session below):

1. Stop the database, or back up any data residing on the disk group
2. Drop the required disk group
3. Stop the Grid Infrastructure services (if using ZFS volumes)
4. Remove the devices from the non-global zone
5. Apply the changes dynamically so that they take effect (applicable from Solaris 11.2)
6. Unprovision or destroy the devices from the global zone
7. Start up the Grid Infrastructure services

Dropping the required disk group: We must drop the ASM disk group before removing the devices from the non-global zone.

grid12c@dbnode1:~$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Wed Oct 14 20:22:48 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> drop diskgroup data including contents;
drop diskgroup data including contents
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15001: diskgroup "DATA" does not exist or is not mounted

SQL> drop diskgroup data force including contents;

Diskgroup dropped.

SQL>

Because we formatted one of the devices from the global zone, Oracle does not allow us to drop the DG normally; we must use the FORCE option to drop the ASM disk group.

Content from the ASM alert log:

SQL> drop diskgroup data force including contents
Wed Oct 14 20:23:08 2015
NOTE: Assigning number (1,0) to disk (/dev/rdsk/c2t4d0s5)
NOTE: erasing header on grp 1 disk DATA_0000
NOTE: Disk DATA_0000 in mode 0x18 marked for de-assignment
SUCCESS: diskgroup DATA was force dropped
Wed Oct 14 20:23:14 2015
SUCCESS: drop diskgroup data force including contents

Remove devices from the non-global zone: After dropping the disk group we must remove the required devices from the non-global zone.
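Pulling the numbered steps above together, here is a minimal command-level sketch of the whole sequence, using the names from this article's demo (zone dbnode1, volume asmdisk01, the Grid home path shown earlier); all paths and names are illustrative and need adjusting for other environments:

# In the non-global zone, as the grid owner: drop the disk group (steps 1-2)
#   sqlplus / as sysasm   ->   SQL> drop diskgroup data3 including contents;
# In the non-global zone, as root: stop GI, required when ZFS volumes are used (step 3)
#   /u01/grid12c/product/12.1.0/grid/bin/crsctl stop has
# In the global zone, as root: remove the device and apply the change online (steps 4-5)
zonecfg -z dbnode1 "remove device match=/dev/zvol/rdsk/dbzone/dbnode1/asmdisk01; commit"
zoneadm -z dbnode1 apply
# In the global zone: destroy the backing ZFS volume (step 6)
zfs destroy dbzone/dbnode1/asmdisk01
# Back in the non-global zone, as root: restart GI (step 7)
#   /u01/grid12c/product/12.1.0/grid/bin/crsctl start has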
root@soltest:~# zonecfg -z dbnode1
zonecfg:dbnode1> info
zonename: dbnode1
zonepath: /dbzone/dbnode1
brand: solaris
autoboot: true
autoshutdown: shutdown
bootargs: -m verbose
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
tenant:
fs-allowed:
anet:
    linkname: net0
    lower-link: auto
    allowed-address not specified
    configure-allowed-address: true
    defrouter not specified
    allowed-dhcp-cids not specified
    link-protection: mac-nospoof
    mac-address: auto
    auto-mac-address: 2:8:20:1c:41:61
    mac-prefix not specified
    mac-slot not specified
    vlan-id not specified
    priority not specified
    rxrings not specified
    txrings not specified
    mtu not specified
    maxbw not specified
    rxfanout not specified
    vsi-typeid not specified
    vsi-vers not specified
    vsi-mgrid not specified
    etsbw-lcl not specified
    cos not specified
    pkey not specified
    linkmode not specified
    evs not specified
    vport not specified
device:
    match: /dev/rdsk/c2t2d0s0
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/rdsk/c2t3d0s0
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/dsk/c2t3d0s0
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/rdsk/c2t2d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/dsk/c2t2d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/rdsk/c2t3d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/dsk/c2t3d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/dsk/c2t5d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/rdsk/c2t5d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/dsk/c2t4d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/rdsk/c2t4d0s5
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/dsk/dbzone/dbnode1/asmvol1
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk02
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk03
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
device:
    match: /dev/zvol/rdsk/dbzone/dbnode1/asmdisk04
    storage not specified
    allow-partition not specified
    allow-raw-io not specified
zonecfg:dbnode1>

We will now remove the following devices from the non-global zone; these are the devices that belonged to the disk group "DATA":

zonecfg:dbnode1> remove device match=/dev/rdsk/c2t4d0s5
zonecfg:dbnode1> remove device match=/dev/dsk/c2t4d0s5
zonecfg:dbnode1> remove device match=/dev/rdsk/c2t5d0s5
zonecfg:dbnode1> remove device match=/dev/dsk/c2t5d0s5
zonecfg:dbnode1> commit;
zonecfg:dbnode1>

After removing the devices from the non-global zone, they will no longer be visible at the OS level or under the ASM instance. The devices will still be visible in the global zone, however, and we need to use the regular OS utilities to remove them from the global zone.

Conclusion: In this part of the article we have seen different test cases for removing an ASM disk group and its devices from a non-global zone in Solaris 11.2.
We demonstrated the removal of two types of ASM devices: ZFS volumes and raw devices. ZFS volumes seem to be much safer than regular raw devices under an ASM instance, as ZFS does not allow us to destroy a volume (even with the force -f option) while the ASM instance is active; it only allows the destroy once the GI services are stopped. In contrast, we can format or modify any raw device from the global zone, which directly affects the ASM disk group. So we must be very careful if we are using raw devices under an ASM instance. It is very difficult to keep track of which raw device is provisioned to which non-global zone, so it is highly recommended to maintain a catalog with all the information about raw devices belonging to non-global zones. This may not be difficult in environments where only one or two non-global zones are running, but imagine a system with 100 non-global zones where each NGZ has 10 - 20 raw devices; in such a situation device management will be a big challenge for DBAs and system administrators. On the other hand, ZFS volumes are safer to use but are not recommended by Oracle, since ZFS is itself a volume manager with its own file system; Oracle does not recommend using any other volume manager underneath ASM, as it may create I/O-related performance issues.
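On the cataloging point above, a small script can serve as a starting inventory. The following is a minimal sketch (assuming Solaris 11 and root access in the global zone; zone names are whatever zoneadm reports on your system) that lists every device resource matched into each configured non-global zone:

#!/bin/sh
# Sketch: inventory the devices provisioned to each non-global zone.
# Run as root in the global zone; output follows the "zonecfg info device" format.
for z in $(zoneadm list -c | grep -v '^global$'); do
    echo "=== zone: $z ==="
    zonecfg -z "$z" info device
done

Feeding the output of such a script into a central catalog (even a simple table keyed on device path and zone name) would make it much easier to answer "which zone owns this raw device?" before formatting or destroying anything from the global zone.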

Wiki Page: Oracle12c

This section contains articles related to consolidation using the 12c container database or different virtualization technologies.

Wiki Page: Oracle 12c (12.1.0.2): Implement flexible schema structures with JSON

Introduction

A primary characteristic of an RDBMS is that it supports only schema-centric designs, which means an application must define and deploy its schema layout (structure) in advance before it can store its data in the database. However, this schema-centric approach has its own disadvantages, primarily when dealing with rapid (agile) application deployments. One of the reasons behind the success of NoSQL databases is that most of these databases support a flexible data structure, where we need not define the schema structures in advance. It looks like Oracle is now bridging that gap with the introduction of native support for JSON documents.

Starting with Oracle 12c release 12.1.0.2, the JSON data structure is natively supported by the Oracle database. Oracle has built a rich set of functions to natively support JSON data structures in an Oracle database. This gives us the ability to implement a (partially) schema-less structure within an Oracle database. In the upcoming sections, we will explore this new Oracle offering and how it can be utilized to meet customer needs for flexible schema structures.

What is JSON

JSON (JavaScript Object Notation) is a language-independent data format (although it was originally derived from the JavaScript language) and is a very popular format for interchanging documents. A JSON document is basically comprised of Key-Value pairs, where the Key provides a description of the Value (data) associated with it, and each Value can in turn contain nested Key-Value pairs, and so on. Here is a simple format and example of a JSON document.

Format (syntax) of a JSON document

---//
---// Key-Value syntax of JSON data structure //---
---//
{
  key : Value,
  Key : {
    Key : Value,
    Key : {
      Key : Value,
      ..
      ..
    }
  },
  ...
  ...
  Key : Value
}

Here is a sample JSON document containing information about a user

---//
---// A sample document in JSON format //---
---//
{
  "User ID" : 101,
  "Name" : "Alex",
  "Contact" : {
    "email" : "alex@example.com",
    "mobile" : "612-897-8393"
  },
  "Address" : {
    "House No" : 19,
    "Street" : "xyz",
    "Country" : "India",
    "Postal Code" : 560017
  },
  "Status" : "Active"
}

For more details about the JSON data format, you can refer to the following pages:

http://www.json.org/
https://en.wikipedia.org/wiki/JSON
http://www.w3schools.com/json/

Oracle leverages this language-independent format to support schema-less data structures within the Oracle database. Oracle is now able to identify whether the stored data is of JSON type and provides a rich set of functions to operate on these JSON data structures.

How JSON is stored in Oracle database

Oracle did not introduce any new data type to store JSON data. We can store JSON in CLOB, BLOB or VARCHAR2 table columns. However, Oracle has introduced a new check to determine whether the data stored within those columns is JSON or not. Oracle does that with the help of a new constraint implemented through the IS JSON clause. Here is the syntax for implementing the constraint

---//
---// Syntax for defining IS JSON constraint //---
---//
CREATE TABLE table_name
(
  ...
  column_name DATA-TYPE(CLOB/VARCHAR2/BLOB),
  ...
  CONSTRAINT constraint_name CHECK (column_name IS JSON [FORMAT JSON] [(STRICT)])
  ...
)

If we are storing JSON in BLOB columns, we need to use the additional clause FORMAT JSON while enforcing the IS JSON constraint. The default Oracle implementation is to use lax syntax for JSON. If we want Oracle to check for strict JSON syntax while storing JSON data in the column, we can use the optional STRICT clause of the IS JSON constraint.
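To see the difference the STRICT clause makes, here is a minimal, self-contained sketch; the json_demo table and its constraint name are hypothetical, invented just for this illustration. Lax syntax tolerates things like unquoted key names, while STRICT rejects them with an ordinary check-constraint violation, and IS JSON can also be used as a standalone condition in queries:

---//
---// Hypothetical sketch: lax vs. strict JSON checking //---
---//
CREATE TABLE json_demo
(
  doc VARCHAR2(4000),
  CONSTRAINT doc_is_json CHECK (doc IS JSON (STRICT))
);

INSERT INTO json_demo VALUES ('{"a":1}');   -- valid strict JSON: accepted
INSERT INTO json_demo VALUES ('{a:1}');     -- lax-only (unquoted key): fails with ORA-02290

-- IS JSON is also usable as a row filter in ordinary queries:
SELECT doc FROM json_demo WHERE doc IS JSON;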
Example (Store JSON in Oracle database)

Let's create a table in the Oracle database to store JSON data. In the following example, I am creating a table with the name products to store product information, which will be coming in JSON format. I am also using the STRICT clause to have Oracle strictly check the JSON syntax before loading the data into the table column.

---//
---// create table to store JSON documents //---
---//
SQL> CREATE TABLE products
  2  (
  3  prod_id NUMBER NOT NULL PRIMARY KEY,
  4  product_info CLOB,
  5  CONSTRAINT product_info_json_chk CHECK (product_info IS JSON (STRICT))
  6  );

Table created.

Once we create the table to hold JSON data, we can load JSON documents into the respective columns like any other BLOB, CLOB or VARCHAR2 data, as shown below.

---//
---// inserting product info in JSON format //---
---//
SQL> insert into products
  2  values
  3  (
  4  1,
  5  ---// start of JSON document //---
  6  '{
  7  "name" : "Phone Service Basic Plan",
  8  "type" : "service",
  9  "monthly_price" : 40,
 10  "limits" :
 11  {
 12  "voice" :
 13  {
 14  "units" : "minutes",
 15  "quantity" : 200,
 16  "over_rate" : 0.05
 17  },
 18  "data" :
 19  {
 20  "units" : "gigabytes",
 21  "quantity" : 20,
 22  "over_rate" : 1
 23  },
 24  "sms" :
 25  {
 26  "units" : "texts sent",
 27  "quantity" : 100,
 28  "over_rate" : 0.001
 29  }
 30  },
 31  "term_years" : 2
 32  }'
 33  ---// end of JSON document //---
 34  );

1 row created.

SQL> commit;

Commit complete.

In the above example, we have loaded one product's information in JSON format. We have not used a predefined table structure to load the product information; rather, we have just defined a table column to contain JSON data and used that column to load all the relevant product information in JSON format. With this kind of implementation, we can store product information having different properties without the need to match predefined product properties. In that sense, we have a flexible table structure within our Oracle database. In the following example, I am loading a JSON document into the same table, however with a different set of properties, as shown below.

---//
---// inserting another product info with different attributes //---
---//
SQL> insert into products
  2  values (
  3  2,
  4  ---// start of JSON data //---
  5  '{
  6  "name" : "Cable TV Basic Service Package",
  7  "type" : "tv",
  8  "monthly_price" : 50,
  9  "term_years" : 2,
 10  "cancel_penalty" : 25,
 11  "sales_tax" : true,
 12  "additional_tariffs" : [
 13  {
 14  "kind" : "federal tariff",
 15  "amount" : { "percent_of_service" : 0.06 }
 16  },
 17  {
 18  "kind" : "misc tariff",
 19  "amount" : 2.25
 20  }
 21  ]
 22  }'
 23  ---// end of JSON data //---
 24  );

1 row created.

SQL> commit;

Commit complete.

We can query the [ALL/DBA/USER/CDB]_JSON_COLUMNS views to determine which tables contain JSON data in a database, as shown below.

---//
---// finding tables containing JSON data //---
---//
SQL> select * from DBA_JSON_COLUMNS;

OWNER      TABLE_NAME      COLUMN_NAME     FORMAT    DATA_TYPE
---------- --------------- --------------- --------- -------------
MYAPP      PRODUCTS        PRODUCT_INFO    TEXT      CLOB

Now that we have loaded the data in JSON format, the next question is how we would query data from this flexible JSON structure. We have a single column containing all the information, so there should be some mechanism available which lets us query part or all of the data from the JSON document. We will explore that area in the upcoming sections.

How to query data from JSON documents

Having implemented a flexible data structure with JSON, the next obvious requirement is the ability to query the loaded data.
There are a number of ways to query part or all of the data from a JSON document in an Oracle database. Oracle provides a variety of JSON functions like JSON_VALUE, JSON_QUERY and JSON_TABLE which can be used to query data from a JSON document. We can also alternatively use DOT NOTATION to retrieve data from JSON documents. Oracle provides SQL access to JSON documents through JSON path expressions. A JSON path expression is somewhat analogous to XQuery or XPath expressions for XML data. We pass the JSON path expression and some JSON data to SQL functions or conditions; the path expression is matched against the data, and the matching data is processed by the SQL function or condition. For more details about JSON path expressions, please refer to the documentation here

Query JSON data using JSON_VALUE function

Oracle provides a predefined function called JSON_VALUE which can be used to query individual (scalar) elements of a JSON document. In the following example, I am using the JSON_VALUE function to query the data related to voice limits for a specific product. The JSON_VALUE function takes a minimum of two arguments, the JSON column name from the table and a JSON path expression pointing to the individual element in the JSON document, and returns the JSON element matched by the JSON path expression.

---//
---// query JSON document elements using JSON_VALUE function //---
---//
SQL> select prod_id,
  2  json_value(product_info format json, '$.name') Name,
  3  json_value(product_info, '$.type') Type,
  4  json_value(product_info, '$.monthly_price') monthly_price,
  5  json_value(product_info, '$.limits.voice.quantity') ||' '||
  6  json_value(product_info, '$.limits.voice.units') as "Voice Limit",
  7  json_value(product_info, '$.limits.voice.over_rate') as "Over Rate"
  8  from products p
  9  where json_value(product_info, '$.name') like '%Basic%';

PROD_ID NAME                           TYPE       MONTHLY_PRICE   Voice Limit     Over Rate
---------- ------------------------------ ---------- --------------- --------------- ----------
1 Phone Service Basic Plan       service    40              200 minutes     0.05
2 Cable TV Basic Service Package tv         50

As we can observe, the JSON_VALUE function can also be used in the predicate section to limit the selection of JSON documents to those whose elements match the criteria. The JSON_VALUE function can return only scalar elements, which means we can't use it to return JSON elements containing arrays or nested values. For instance, if we use the JSON_VALUE function to return all the elements related to voice limits for a specific product, it will not return any values, as shown below.

---//
---// query non-scalar elements using JSON_VALUE function //---
---//
SQL> select json_value(product_info, '$.name') Name,
  2  json_value(product_info, '$.limits.voice') as "Voice Limits"
  3  from products p
  4  where json_value(product_info, '$.type') = 'service';

NAME                           Voice Limits
------------------------------ ---------------
Phone Service Basic Plan

The default error handling for the JSON_VALUE function is NULL ON ERROR, which is why querying non-scalar values returns NULL instead of reporting any error. If we want Oracle to report errors, we can include the optional ERROR ON ERROR clause while using the JSON_VALUE function, as shown below.
---//
---// error handling for non-scalar elements in JSON_VALUE function //---
---//
SQL> select json_value(product_info, '$.name') Name,
  2  json_value(product_info, '$.limits.voice' error on error) as "Voice Limits"
  3  from products p
  4  where json_value(product_info, '$.type') = 'service';
from products p
*
ERROR at line 3:
ORA-40456: JSON_VALUE evaluated to non-scalar value

Note: If we are using BLOB columns to store JSON documents, we must explicitly use the "FORMAT JSON" clause whenever we query data using JSON functions, to declare that the data we are dealing with is of JSON type. The complete syntax for the JSON_VALUE function can be found here

Query JSON data using JSON_QUERY function

In the previous section, we explored the JSON_VALUE function for querying scalar JSON data elements from a JSON document. Oracle also provides a predefined function that can be used to query non-scalar (arrays or nested) JSON data from a JSON document: the JSON_QUERY function, which queries non-scalar elements (JSON fragments) from a JSON document. In the following example, I am using the JSON_QUERY function to return all the data elements related to voice limits for a specific product.

---//
---// query non-scalar JSON data elements using JSON_QUERY function //---
---//
SQL> select json_value(product_info, '$.name') Name,
  2  json_query(product_info, '$.limits.voice' with wrapper) as "Voice Limits"
  3  from products p
  4  where json_value(product_info, '$.type') = 'service';

NAME                           Voice Limits
------------------------------ -------------------------------------------------------
Phone Service Basic Plan       [{"units":"minutes","quantity":200,"over_rate":0.05}]

The JSON_QUERY function takes a minimum of two arguments, namely the column name containing the JSON document and the JSON path expression, and returns the JSON fragment (one or more elements) matched by the path expression. We can use the optional WITH WRAPPER argument to print the result surrounded by square brackets ([]), as shown in the above example. When we query JSON fragments, they are formatted in ASCII mode by default, which doesn't provide good readability when we query nested data. Oracle provides a PRETTY clause which can be used with the JSON_QUERY function to return JSON fragments in a pretty-printed format (with new lines and indents), as shown below.

---//
---// using pretty format with JSON_QUERY function //----
---//
SQL> select json_value(product_info, '$.name') Name,
  2  json_query(product_info, '$.additional_tariffs' PRETTY) as "Additional Tariff"
  3  from products p
  4  where json_value(product_info, '$.type') = 'tv';

NAME                           Additional Tariff
------------------------------ ----------------------------------------------------------------------
Cable TV Basic Service Package [
  {
    "kind" : "federal tariff",
    "amount" : { "percent_of_service" : 0.06 }
  },
  {
    "kind" : "misc tariff",
    "amount" : 2.25
  }
]

The complete syntax for the JSON_QUERY function can be found here

Query JSON data using JSON_TABLE function

We can use the JSON_TABLE function to project JSON data in a relational format. The JSON_TABLE function is particularly useful for creating relational views of JSON data. However, it can also be used to retrieve individual elements or fragments of a JSON document. The JSON_TABLE function is used as a row source, and hence it appears in the SQL FROM clause rather than in the SELECT clause. In the following example, I am using the JSON_TABLE function to create a relational view of JSON data from the products table.
---//
---// using JSON_TABLE to generate a relational view of JSON data //---
---//
SQL> create or replace view products_view
  2  as
  3  select p.*
  4  from products,
  5  json_table (
  6  product_info, '$'
  7  columns (
  8  name varchar2(32 char) path '$.name',
  9  type varchar2(32 char) path '$.type',
 10  monthly_price number path '$.monthly_price',
 11  voice_limit number path '$.limits.voice.quantity',
 12  voice_units varchar2(32 char) path '$.limits.voice.units',
 13  voice_over_rate number path '$.limits.voice.over_rate',
 14  data_limit number path '$.limits.data.quantity',
 15  data_units varchar2(32 char) path '$.limits.data.units',
 16  data_over_rate number path '$.limits.data.over_rate',
 17  sms_limit number path '$.limits.sms.quantity',
 18  sms_units varchar2(32 char) path '$.limits.sms.units',
 19  sms_over_rate number path '$.limits.sms.over_rate',
 20  term_years number path '$.term_years'
 21  )
 22  )
 23  as p
 24  where p.type='service';

View created.

---//
---// query from the relational view //---
---//
SQL> select NAME,TYPE,MONTHLY_PRICE,VOICE_LIMIT,VOICE_UNITS,VOICE_OVER_RATE,TERM_YEARS
  2  from products_view;

NAME                             TYPE       MONTHLY_PRICE VOICE_LIMIT VOICE_UNIT VOICE_OVER_RATE TERM_YEARS
-------------------------------- ---------- ------------- ----------- ---------- --------------- ----------
Phone Service Basic Plan         service    40            200         minutes    .05             2

As mentioned earlier, JSON_TABLE can also be used as an alternative to JSON_QUERY/JSON_VALUE to query individual elements or fragments of JSON data. In the following example, I am using JSON_TABLE to query all the data related to voice limits for a specific product.

---//
---// query non-scalar JSON data elements using JSON_TABLE function //---
---//
SQL> select p.*
  2  from products,
  3  json_table (
  4  product_info, '$'
  5  columns (
  6  name varchar2(32 char) path '$.name',
  7  type varchar2(32 char) path '$.type',
  8  "Voice Limits" varchar2(200 char) format json path '$.limits.voice'
  9  )
 10  )
 11  as p
 12  where p.type='service';

NAME                           TYPE       LIMITS
------------------------------ ---------- -------------------------------------------------------
Phone Service Basic Plan       service    {"units":"minutes","quantity":200,"over_rate":0.05}

The complete syntax of the JSON_TABLE function can be found here

Query JSON data using DOT NOTATION

In the previous sections, we explored the different predefined Oracle functions available to query JSON data in an Oracle database. Apart from these predefined functions, we can also use DOT NOTATION to query scalar and non-scalar elements of a JSON document. In the following example, I am using DOT NOTATION to query each of the JSON data elements related to voice limits for a specific product.

---//
---// query JSON document elements using DOT NOTATION //---
---//
SQL> col "Quantity" for a10
SQL> col "Units" for a10
SQL> col "Over Rate" for a10
SQL> select p.product_info.name,
  2  p.product_info.limits.voice.quantity as "Quantity",
  3  p.product_info.limits.voice.units as "Units",
  4  p.product_info.limits.voice.over_rate as "Over Rate"
  5  from products p
  6  where prod_id=1;

NAME                           Quantity   Units      Over Rate
------------------------------ ---------- ---------- ----------
Phone Service Basic Plan       200        minutes    0.05

Here is another example of DOT NOTATION, where I am querying all the limits (voice/data/sms) for a specific product.
---//
---// query JSON data elements using DOT NOTATION //---
---//
SQL> select p.prod_id,
  2  p.product_info.name,
  3  p.product_info.type,
  4  p.product_info.limits.voice.quantity ||' '|| p.product_info.limits.voice.units as "Voice Limit",
  5  p.product_info.limits.data.quantity ||' '|| p.product_info.limits.data.units as "Data Limit",
  6  p.product_info.limits.sms.quantity ||' '|| p.product_info.limits.sms.units as "SMS Limit",
  7  p.product_info.term_years
  8  from products p
  9  where p.product_info.type='service' ;

PROD_ID NAME                           TYPE       Voice Limit     Data Limit      SMS Limit       TERM_YEARS
---------- ------------------------------ ---------- --------------- --------------- --------------- ----------
1 Phone Service Basic Plan       service    200 minutes     20 gigabytes    100 texts sent  2

As we can observe, DOT NOTATION can also be used in the predicate section to limit the selection of JSON data. In our case, we have used p.product_info.type='service' to limit the selection of JSON data to only those documents which have a product type of 'service'. Here is another example, where I am using DOT NOTATION to query all the nested (non-scalar) elements rather than querying each JSON data element individually.

---//
---// query nested JSON elements with DOT NOTATION //---
---//
SQL> select p.prod_id,
  2  p.product_info.name,
  3  p.product_info.limits
  4  from products p
  5  where prod_id=1;

PROD_ID NAME                           LIMITS
---------- ------------------------------ ----------------------------------------------------------------------
1 Phone Service Basic Plan       {"voice":{"units":"minutes","quantity":300,"over_rate":0.05},"data":{"units":"gigabytes","quantity":30,"over_rate":1},"sms":{"units":"texts sent","quantity":100,"over_rate":0.001}}

DOT NOTATION provides an easy method for querying JSON data elements (scalar and non-scalar) without the need to use the predefined JSON functions. However, Oracle internally transforms the DOT NOTATION queries to map to the respective predefined JSON functions, as shown below.

---//
---// DOT NOTATION query transformations by optimizer //---
---//
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "P"."PROD_ID" "PROD_ID",
JSON_QUERY("P"."PRODUCT_INFO" FORMAT JSON , '$.name' RETURNING VARCHAR2(4000) ASIS WITHOUT ARRAY WRAPPER NULL ON ERROR) "NAME",
JSON_QUERY("P"."PRODUCT_INFO" FORMAT JSON , '$.limits' RETURNING VARCHAR2(4000) ASIS WITHOUT ARRAY WRAPPER NULL ON ERROR) "LIMITS"
FROM "MYAPP"."PRODUCTS" "P" WHERE "P"."PROD_ID"=1
kkoqbc: optimizing query block SEL$1 (#0)
: call(in-use=4168, alloc=16344), compile(in-use=72400, alloc=73928), execution(in-use=3016, alloc=4032)
kkoqbc-subheap (create addr=0x7f21b2b7fb78)
**************** QUERY BLOCK TEXT ****************
select p.prod_id,
p.product_info.name,
p.product_info.limits
from products p
where prod_id=1

How to modify JSON documents

Unfortunately, Oracle doesn't yet support direct modification of JSON data elements. I tried to update a JSON element using DOT NOTATION, and the following is the error I received upon executing the command.

---//
---// JSON data element modification is not supported //---
---//
SQL> update products p
  2  set p.product_info.term_years=3
  3  where p.product_info.type='service';
update products p
*
ERROR at line 1:
ORA-03001: unimplemented feature

As we can see from the error, direct modification of JSON data elements is not yet implemented in Oracle. Looking at the error message, one can guess that Oracle may implement this feature in an upcoming release/patch.
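One thing that does work today, of course, is replacing the entire document through an ordinary UPDATE of the underlying column. A minimal sketch, reusing the products table from the examples above (the abbreviated replacement document is invented for this illustration), at the cost of having to supply the complete new document:

---//
---// replacing the whole JSON document with a plain UPDATE //---
---//
UPDATE products
SET    product_info = '{"name":"Phone Service Basic Plan","type":"service","monthly_price":40,"term_years":3}'
WHERE  prod_id = 1;

COMMIT;

The new value still has to pass the IS JSON (STRICT) check constraint, so this approach keeps the column's JSON guarantee intact.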
We can, however, apply an alternative trick to achieve element-level modification. We can use the SQL function REPLACE to search for the JSON element within a JSON document and then replace that element with the new value, as shown below.

---//
---// syntax to update JSON document with replace function //---
---//
update json_table_name
set json_document_column = replace( json_document_column, json_element, new_value )
where search_predicate

Where:
json_table_name      - name of the table containing the JSON document
json_document_column - name of the table column holding the JSON document
json_element         - JSON data element within the JSON document that needs to be updated
new_value            - updated/new value to be applied to the JSON data element
search_predicate     - filter condition to limit the JSON document search

In the following example, I am updating the JSON data element named "TERM_YEARS" stored in the "PRODUCTS" table under the JSON column "PRODUCT_INFO"

---//
---// update JSON document with replace function //---
---//
SQL> update products p
  2  set p.product_info=replace(p.product_info,p.product_info.term_years,3)
  3  where p.product_info.type='service';

1 row updated.

SQL> commit;

Commit complete.

---//
---// Validating JSON data element is updated //---
---//
SQL> select prod_id,
  2  json_value(product_info format json, '$.name') Name,
  3  json_value(product_info, '$.type') Type,
  4  json_value(product_info, '$.term_years') TERM_YEARS
  5  from products p
  6  where p.product_info.type='service';

PROD_ID NAME                           TYPE       TERM_YEARS
---------- ------------------------------ ---------- ----------
1 Phone Service Basic Plan       service    3

As we can see, we were able to update the JSON data element (PRODUCT_INFO.TERM_YEARS) with the help of the REPLACE function. We are also allowed to delete JSON documents using a JSON search, as shown below. In the following example, I am deleting a JSON product document by searching the documents for a specific product type (where json_value(product_info, '$.type') = 'tv').

---//
---// JSON document deletion using JSON search //---
---//
SQL> delete from products
  2  where json_value(product_info, '$.type') = 'tv';

1 row deleted.

SQL> commit;

Commit complete.

---//
---// Validate JSON documents are deleted //---
---//
SQL> select * from products
  2  where json_value(product_info, '$.type') = 'tv';

no rows selected

Conclusion

In this article, we have explored Oracle's ability to provide a flexible schema structure by means of native support for JSON documents. Oracle is undoubtedly bridging the gap between the traditional relational database and the increasingly popular NoSQL databases. This new feature will definitely turn out to be a useful addition to the Oracle database's functionality.

Reference
https://docs.oracle.com/database/121/ADXDB/json.htm

Wiki Page: Oracle 12c Flex Cluster Installation using Windows DNS/DHCP Server - Part I

Introduction:

Oracle Flex ASM and Flex Cluster were introduced with Oracle 12c Grid Infrastructure, and this is one of the major significant changes. To install and configure a Flex Cluster it is mandatory to have a GNS VIP configured with a DHCP server for delegation of the virtual IP and SCAN IP addresses of all cluster nodes. In a normal standard cluster configuration we have to assign node-specific virtual IPs in each cluster node's "/etc/hosts" file, the SCAN IP addresses are configured in the DNS server, and those SCAN IP addresses are resolved by the cluster nodes' DNS client service. In a GNS configuration things work a little differently: it is not required to configure the virtual IPs in each cluster node's "/etc/hosts" file, and we should not even configure the SCAN IPs in the DNS domain forward lookup zone. The only configuration that needs to be added to the DNS forward lookup zone is the GNS VIP. GNS requires a DHCP server to be configured, and that DHCP server will hold a reserved IP lease; the VIPs and SCAN IPs will be allocated to all cluster nodes through the DHCP server via the GNS VIP.

This article illustrates how we can configure a 12c Flex Cluster and Flex ASM on Oracle Enterprise Linux using Windows 2008 R2 as the DNS/DHCP server for GNS sub-domain delegation. In this article we are not going to see the detailed Grid Infrastructure pre-requisites configuration; the main goal is to explain the working of GNS using a Windows DNS/DHCP server for the Oracle 12c Flex Cluster installation. In this demonstration we will be using a total of 6 nodes for the Flex Cluster deployment, classified as below:

S.No  Node Name  Public IP     Private IP  SW Version  Node Description
1     DANODE1    192.168.2.1   -           2008        DNS/DHCP Server
2     flexrac1   192.168.2.81  10.10.2.81  12.1.0.1    Cluster Hub-Node
3     flexrac2   192.168.2.82  10.10.2.82  12.1.0.1    Cluster Hub-Node
4     flexrac3   192.168.2.83  10.10.2.83  12.1.0.1    Cluster Hub-Node
5     flexrac4   192.168.2.84  10.10.2.84  12.1.0.1    Cluster Leaf-Node
6     flexrac5   192.168.2.85  10.10.2.85  12.1.0.1    Cluster Leaf-Node

Deployment topology diagram:

High level steps for deployment:
Configure the DNS server
Add the GNS VIP and cluster nodes to the domain service
Create a new sub-domain delegation
Configure the DHCP server
Configure the /etc/hosts file
Configure the oracleasm library
Flex Cluster installation
Verify the cluster

1 - Configure the DNS Server:

The DNS server will be responsible for resolving incoming client connections. Clients will connect to the database using SCAN IPs; these SCAN IPs will be generated by the Windows DHCP server and resolved through the configured DNS sub-domain delegation. If a DNS server is already configured then we can jump to the second step.

- "dcpromo" is the command used for configuring the DNS server in a Windows environment
- Follow the screens for the Active Directory Domain Services installation
- Select "Create a new domain in a new forest". If a domain already exists then we can select the existing forest.
- Here we should provide the fully qualified name for the domain. This will be the root domain, and all name resolution for the public/VIP/SCAN-IP/GNS-VIP addresses will be performed using this domain name
- This will set the domain functional level, which will only allow domain controllers from Windows 2008 and later to be added.

The domain server configuration completed successfully.

2 - Add GNS VIP and cluster nodes to domain service:

In a standard cluster all public IPs, virtual IPs and SCAN IPs should be registered in the DNS server forward lookup zone.
In a Flex Cluster only the public IPs and the GNS VIP should be added to the DNS server forward lookup zone.

- This is the GNS VIP, and this IP should be resolvable via the nslookup command on all cluster nodes.
- Similarly, we should add all cluster public IPs to the domain. In this demonstration the domain name is "dbamaze.com"

3 - Create a new DNS sub-domain delegation:

Navigate to DNS Manager >> Forward Lookup Zones >> domain name, then right-click to create a new DNS sub-domain delegation

- Here it is mandatory to select the name configured earlier for the GNS VIP. In this demonstration "flexrac" is the virtual host name of the GNS VIP. Click on "Resolve" and it should resolve the provided IP address and FQDN

The sub-domain creation is complete. Now we can see the sub-domain "flexrac" under the root domain "dbamaze.com"

- Verify that the cluster nodes are able to communicate with the DNS server

[root@flexnode1 ~]# ping danode1.dbamaze.com
PING danode1.dbamaze.com (192.168.2.10) 56(84) bytes of data.
64 bytes from 192.168.2.10: icmp_seq=1 ttl=128 time=0.453 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=128 time=0.553 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=128 time=0.506 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=128 time=0.924 ms

- Verify name resolution of the GNS name "flexrac" and its VIP
- Perform this step on all cluster nodes to verify connectivity.

4 - Configure the DHCP Server:

The configuration of the DNS root domain and the GNS sub-domain has been completed. The next step is to configure the DHCP server using the GNS sub-domain delegation. This DHCP server will assign the cluster nodes' virtual IPs and SCAN IPs.

Navigate to Windows Start >> Administrative Tools >> Server Manager
- Click on "Add Roles"
- From the Server Roles menu select "DHCP Server"
- By default it will select the root domain IP address.
- Here provide the parent domain name "danode1.dbamaze.com" and provide the GNS VIP address.
- Specify the range of IP addresses that should be reserved in the DHCP lease. This range should be sized based on the number of cluster nodes.
- Select "Disable DHCPv6"
- Provide the admin password
- DHCP Server installation summary.
- DHCP server address pool

The installation and configuration of the DHCP server completed successfully. Now we are ready to proceed with the installation of the Flex Cluster.

5 - Configure the "/etc/hosts" file:

The "/etc/hosts" file does not contain any entries for virtual IP addresses. All virtual IP addresses, including the SCAN IPs, will be managed through the GNS virtual hostname. Private IP addresses should be configured on all cluster nodes if using the "ASM & Private" network option; if not using this option, private IP addresses need only be configured on Hub nodes and are not required on Leaf nodes.

Sample "/etc/hosts" file from flexnode1:

[root@flexnode1 bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
############PUBLIC##############################
192.168.2.81 flexnode1.dbamaze.com flexnode1 localhost
192.168.2.82 flexnode2.dbamaze.com flexnode2
192.168.2.83 flexnode3.dbamaze.com flexnode3
192.168.2.84 flexnode4.dbamaze.com flexnode4
192.168.2.85 flexnode5.dbamaze.com flexnode5
###############PRIVATE##########################
10.10.2.81 flexnode1-priv.dbamaze.com flexnode1-priv
10.10.2.82 flexnode2-priv.dbamaze.com flexnode2-priv
10.10.2.83 flexnode3-priv.dbamaze.com flexnode3-priv
10.10.2.84 flexnode4-priv.dbamaze.com flexnode4-priv
10.10.2.85 flexnode5-priv.dbamaze.com flexnode5-priv
#################VIP###########################
##NO NEED FOR VIP AS ITS A GNS INSTALLATION#####
[root@flexnode1 bin]#

6 - Configure the oracleasm library

We have to ensure that all required ASM RPM packages are installed on all participating cluster nodes. In a Flex Cluster configuration the ASM shared storage disks should be configured only on Hub nodes; it is not required to configure them on Leaf nodes.

[root@flexnode1 ~]# oracleasm listdisks    <<---- Hub Node
DISK1
DISK2
DISK3
[root@flexnode2 bin]# oracleasm listdisks  <<---- Hub Node
DISK1
DISK2
DISK3
[root@flexnode3 bin]# oracleasm listdisks  <<---- Hub Node
DISK1
DISK2
DISK3
[root@flexnode4 ~]# oracleasm listdisks    <<---- Leaf Node
[root@flexnode4 ~]#
[root@flexnode5 ~]# oracleasm listdisks    <<---- Leaf Node
[root@flexnode5 ~]#

Conclusion

This part of the article covered the configuration of the DNS/DHCP server on Windows for GNS sub-domain delegation. We have to make sure the IP addresses configured in the DHCP lease are not being used by any other server; if any of these IPs is already in use on the network, the "root.sh" script will fail. The GNS sub-domain will allocate virtual IPs in a consecutive fashion. Once the configuration of the DNS/DHCP server is completed we are ready to begin the installation of the 12c Flex Cluster. In the next part of this article we will see the installation of the Flex Cluster.
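Before kicking off the installer in Part II, it is worth double-checking the delegation from one of the cluster nodes. A minimal sketch, using the demo names from this article (adjust the domain, GNS name and VIP for your environment); the cluvfy GNS component check at the end assumes the 12c installation media has already been staged:

# Run from any cluster node:
nslookup danode1.dbamaze.com        # root domain / DNS server
nslookup flexrac.dbamaze.com        # GNS virtual host name (expects 192.168.2.20)
ping -c 3 flexrac.dbamaze.com

# Optional pre-install check of the GNS setup with cluvfy:
./runcluvfy.sh comp gns -precrsinst -domain flexrac.dbamaze.com -vip 192.168.2.20 -verbose

If the nslookup of the GNS name fails on any node, the sub-domain delegation should be fixed before proceeding, since the installer's root.sh phase depends on it.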

Wiki Page: Oracle 12c Flex Cluster Installation using Windows DNS/DHCP Server - Part II

Introduction:

This is the second part of the article in this series. Please refer to the URL below to understand the steps that need to be completed before the Flex Cluster installation. http://www.toadworld.com/platforms/oracle/w/wiki/11508.oracle-12c-flex-cluster-installation-using-widows-dnsdhcp-server-part1

In the first part of this article we saw the configuration of the DNS and DHCP servers with GNS sub-domain delegation. Now the setup is ready to kick off the installation. As mentioned in Part I, the configuration of the DNS/DHCP server with GNS sub-domain delegation is very important, and incorrect configuration of these services will lead to a failed installation. We need to make sure that all required steps are completed, that all cluster nodes are able to resolve the DNS server root domain and the GNS sub-domain, and that they are able to communicate with each other.

Installation of Flex Cluster

Before starting the installation we need to make sure that all 12c Grid Infrastructure pre-requisites are configured. If there are no issues reported in the cluvfy output then we are ready for the installation.

- Provide the name of the cluster which we want to configure: "flexrac-cluster"
- Provide the SCAN name: "scan-flexrac"
- Port for the SCAN listener: "1521"

For the installation of a Flex Cluster it is mandatory to enable GNS, and to enable GNS we should provide the GNS VIP and the name of the GNS sub-domain. Here we have to provide the inputs based on the settings configured in the earlier steps.

- GNS VIP Address - 192.168.2.20
- GNS Sub Domain - flexrac.dbamaze.com

- Add all participating cluster nodes based on their role (Hub/Leaf). In this demonstration I've listed only one Hub and one Leaf node; similarly, we need to add all the remaining cluster nodes.
- List of all Hub and Leaf nodes for 12c Grid Infrastructure. The virtual hostnames will be allocated automatically by the GNS service for Hub nodes. Virtual IPs and virtual hostnames are not applicable for Leaf nodes.
- Set up SSH to enable passwordless login on the remote nodes.
- Select the type for the network interfaces:
  eth0 - 192.168.2.0 - Public
  eth1 - 10.10.2.0 - Private
- Configure the disks for the OCR and VD for 12c Grid Infrastructure
- In this demonstration IPMI is not used, but if required it can be configured at this step.
- Use the same directory location on all cluster nodes. (It should be created in advance with appropriate privileges.)
- There is an option to execute the root.sh script from the installer itself. This option is not used in this installation.
- Execute the "orainstRoot.sh" & "root.sh" scripts in the same sequence as mentioned.
- Execution of the scripts on "flexnode1":

[root@flexnode1 ~]# /u01/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/oracle/oraInventory to dbagrid.
The execution of the script is complete.
[root@flexnode1 ~]#

[root@flexnode1 ~]# /u01/grid/12.1.0/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= oragrid
ORACLE_HOME= /u01/grid/12.1.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/12.1.0/crs/install/crsconfig_params
2015/04/01 00:32:13 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
2015/04/01 00:33:11 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'flexnode1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'flexnode1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'flexnode1'
CRS-2676: Start of 'ora.mdnsd' on 'flexnode1' succeeded
CRS-2676: Start of 'ora.evmd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'flexnode1'
CRS-2676: Start of 'ora.gpnpd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'flexnode1'
CRS-2672: Attempting to start 'ora.gipcd' on 'flexnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'flexnode1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'flexnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'flexnode1'
CRS-2676: Start of 'ora.diskmon' on 'flexnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'flexnode1' succeeded

ASM created and started successfully.

Disk Group GRID created successfully.

CRS-2672: Attempting to start 'ora.crf' on 'flexnode1'
CRS-2672: Attempting to start 'ora.storage' on 'flexnode1'
CRS-2676: Start of 'ora.storage' on 'flexnode1' succeeded
CRS-2676: Start of 'ora.crf' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'flexnode1'
CRS-2676: Start of 'ora.crsd' on 'flexnode1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 32670456172a4f5dbf39b2617a9fb10c.
Successfully replaced voting disk group with +GRID.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 32670456172a4f5dbf39b2617a9fb10c (/dev/oracleasm/disks/DISK1) [GRID]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'flexnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'flexnode1'
CRS-2677: Stop of 'ora.crsd' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'flexnode1'
CRS-2677: Stop of 'ora.storage' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'flexnode1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'flexnode1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'flexnode1'
CRS-2677: Stop of 'ora.crf' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'flexnode1'
CRS-2677: Stop of 'ora.cssd' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'flexnode1'
CRS-2677: Stop of 'ora.gipcd' on 'flexnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'flexnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'flexnode1'
CRS-2672: Attempting to start 'ora.evmd' on 'flexnode1'
CRS-2676: Start of 'ora.mdnsd' on 'flexnode1' succeeded
CRS-2676: Start of 'ora.evmd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'flexnode1'
CRS-2676: Start of 'ora.gpnpd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'flexnode1'
CRS-2676: Start of 'ora.gipcd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'flexnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'flexnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'flexnode1'
CRS-2676: Start of 'ora.diskmon' on 'flexnode1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'flexnode1'
CRS-2676: Start of 'ora.cssd' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'flexnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'flexnode1'
CRS-2676: Start of 'ora.ctssd' on 'flexnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'flexnode1'
CRS-2676: Start of 'ora.asm' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'flexnode1'
CRS-2676: Start of 'ora.storage' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'flexnode1'
CRS-2676: Start of 'ora.crf' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'flexnode1'
CRS-2676: Start of 'ora.crsd' on 'flexnode1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: flexnode1
CRS-6016: Resource auto-start has completed for server flexnode1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/04/01 00:38:43 CLSRSC-343: Successfully started Oracle clusterware stack
/proc/net/ipv6_route: No such file or directory
CRS-2672: Attempting to start 'ora.net1.network' on 'flexnode1'
CRS-2676: Start of 'ora.net1.network' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'flexnode1'
CRS-2676: Start of 'ora.gns.vip' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'flexnode1'
CRS-2676: Start of 'ora.gns' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'flexnode1'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'flexnode1'
CRS-2676: Start of 'ora.asm' on 'flexnode1' succeeded
CRS-2672: Attempting to start 'ora.GRID.dg' on 'flexnode1'
CRS-2676: Start of 'ora.GRID.dg' on 'flexnode1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'flexnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'flexnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'flexnode1'
CRS-2673: Attempting to stop 'ora.cvu' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.GRID.dg' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.gns' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.flexnode1.vip' on 'flexnode1'
CRS-2677: Stop of 'ora.cvu' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'flexnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'flexnode1'
CRS-2677: Stop of 'ora.scan3.vip' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.flexnode1.vip' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.GRID.dg' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'flexnode1'
CRS-2677: Stop of 'ora.asm' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'flexnode1'
CRS-2677: Stop of 'ora.scan2.vip' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.gns' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'flexnode1'
CRS-2677: Stop of 'ora.gns.vip' on 'flexnode1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'flexnode1'
CRS-2677: Stop of 'ora.ons' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'flexnode1'
CRS-2677: Stop of 'ora.net1.network' on 'flexnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'flexnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'flexnode1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'flexnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'flexnode1'
CRS-2677: Stop of 'ora.storage' on 'flexnode1' succeeded CRS-2673: Attempting to stop 'ora.asm' on 'flexnode1' CRS-2677: Stop of 'ora.drivers.acfs' on 'flexnode1' succeeded CRS-2677: Stop of 'ora.mdnsd' on 'flexnode1' succeeded CRS-2677: Stop of 'ora.gpnpd' on 'flexnode1' succeeded CRS-2677: Stop of 'ora.ctssd' on 'flexnode1' succeeded CRS-2677: Stop of 'ora.asm' on 'flexnode1' succeeded CRS-2673: Attempting to stop 'ora.evmd' on 'flexnode1' CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'flexnode1' CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'flexnode1' succeeded CRS-2677: Stop of 'ora.evmd' on 'flexnode1' succeeded CRS-2673: Attempting to stop 'ora.cssd' on 'flexnode1' CRS-2677: Stop of 'ora.cssd' on 'flexnode1' succeeded CRS-2673: Attempting to stop 'ora.crf' on 'flexnode1' CRS-2677: Stop of 'ora.crf' on 'flexnode1' succeeded CRS-2673: Attempting to stop 'ora.gipcd' on 'flexnode1' CRS-2677: Stop of 'ora.gipcd' on 'flexnode1' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'flexnode1' has completed CRS-4133: Oracle High Availability Services has been stopped. CRS-4123: Starting Oracle High Availability Services-managed resources CRS-2672: Attempting to start 'ora.evmd' on 'flexnode1' CRS-2672: Attempting to start 'ora.mdnsd' on 'flexnode1' CRS-2676: Start of 'ora.mdnsd' on 'flexnode1' succeeded CRS-2676: Start of 'ora.evmd' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'flexnode1' CRS-2676: Start of 'ora.gpnpd' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.gipcd' on 'flexnode1' CRS-2676: Start of 'ora.gipcd' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'flexnode1' CRS-2676: Start of 'ora.cssdmonitor' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'flexnode1' CRS-2672: Attempting to start 'ora.diskmon' on 'flexnode1' CRS-2676: Start of 'ora.diskmon' on 'flexnode1' succeeded CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'flexnode1' CRS-2676: Start of 'ora.cssd' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'flexnode1' CRS-2672: Attempting to start 'ora.ctssd' on 'flexnode1' CRS-2676: Start of 'ora.ctssd' on 'flexnode1' succeeded CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'flexnode1' CRS-2676: Start of 'ora.asm' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.storage' on 'flexnode1' CRS-2676: Start of 'ora.storage' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.crf' on 'flexnode1' CRS-2676: Start of 'ora.crf' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.crsd' on 'flexnode1' CRS-2676: Start of 'ora.crsd' on 'flexnode1' succeeded CRS-6023: Starting Oracle Cluster Ready Services-managed resources CRS-6017: Processing resource auto-start for servers: flexnode1 CRS-2672: Attempting to start 'ora.cvu' on 'flexnode1' CRS-2672: Attempting to start 'ora.ons' on 'flexnode1' CRS-2672: Attempting to start 'ora.oc4j' on 'flexnode1' CRS-2676: Start of 'ora.cvu' on 'flexnode1' succeeded CRS-2676: Start of 'ora.ons' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.scan1.vip' on 'flexnode1' CRS-2672: Attempting to start 'ora.scan2.vip' on 'flexnode1' CRS-2672: Attempting to start 'ora.scan3.vip' on 'flexnode1' CRS-2672: Attempting to start 'ora.flexnode1.vip' on 'flexnode1' CRS-2676: Start of 'ora.scan1.vip' on 'flexnode1' succeeded CRS-2672: Attempting to 
start 'ora.LISTENER_SCAN1.lsnr' on 'flexnode1' CRS-2676: Start of 'ora.scan3.vip' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'flexnode1' CRS-2676: Start of 'ora.scan2.vip' on 'flexnode1' succeeded CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'flexnode1' CRS-2676: Start of 'ora.flexnode1.vip' on 'flexnode1' succeeded CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'flexnode1' succeeded CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'flexnode1' succeeded CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'flexnode1' succeeded CRS-2676: Start of 'ora.oc4j' on 'flexnode1' succeeded CRS-6016: Resource auto-start has completed for server flexnode1 CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources CRS-4123: Oracle High Availability Services has been started. 2015/04/01 00:45:48 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
- The "root.sh" script completed successfully on "flexnode1"; now we can verify that the IP addresses leased from DHCP have been assigned and are reachable.
[root@flexnode1 ~]# ping 192.168.2.21
PING 192.168.2.21 (192.168.2.21) 56(84) bytes of data. 64 bytes from 192.168.2.21: icmp_seq=1 ttl=64 time=0.028 ms 64 bytes from 192.168.2.21: icmp_seq=2 ttl=64 time=0.029 ms --- 192.168.2.21 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.028/0.028/0.029/0.005 ms
[root@flexnode1 ~]# ping 192.168.2.22
PING 192.168.2.22 (192.168.2.22) 56(84) bytes of data. 64 bytes from 192.168.2.22: icmp_seq=1 ttl=64 time=0.026 ms 64 bytes from 192.168.2.22: icmp_seq=2 ttl=64 time=0.024 ms --- 192.168.2.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.024/0.025/0.026/0.001 ms
[root@flexnode1 ~]# ping 192.168.2.23
PING 192.168.2.23 (192.168.2.23) 56(84) bytes of data. 64 bytes from 192.168.2.23: icmp_seq=1 ttl=64 time=0.026 ms 64 bytes from 192.168.2.23: icmp_seq=2 ttl=64 time=0.025 ms 64 bytes from 192.168.2.23: icmp_seq=3 ttl=64 time=0.024 ms --- 192.168.2.23 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.024/0.025/0.026/0.000 ms
- Check the DHCP lease; it will list the newly allocated IP addresses.
- The DHCP server allocated the new IPs for the new virtual hosts using GNS sub-domain delegation. Once we execute root.sh on the first cluster node, DHCP leases four IPs: one for the cluster host's virtual IP address and three for the SCAN IPs. In the same way, the DHCP lease assigns virtual IP addresses to the respective cluster nodes as "root.sh" is executed on them.
- Similarly, execute "root.sh" on the remaining cluster nodes.
- Check the DHCP lease after successful completion of the "root.sh" script on all cluster nodes. VIPs are allocated only for the SCAN listeners and the cluster hub nodes.
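To double-check that the GNS sub-domain delegation is actually serving these addresses, the SCAN name can be resolved through DNS. A minimal sketch (the names come from this setup; treat the result as illustrative, since the returned addresses depend on your DHCP leases):
---// hedged check: resolve the SCAN through the GNS sub-domain //---
[root@flexnode1 ~]# nslookup scan-flexrac.flexrac.dbamaze.com
The query should be delegated by the DNS server to GNS and return the three SCAN VIPs leased from DHCP; if it fails, revisit the sub-domain delegation configured in part 1.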
In total there are six leased IP addresses (three for the SCAN and three for the hub node VIPs).
- Check the status of the Grid Infrastructure:
[root@flexnode1 bin]# ./crsctl check cluster -all
**************************************************************
flexnode1: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
**************************************************************
flexnode2: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
**************************************************************
flexnode3: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
**************************************************************
flexnode4: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
**************************************************************
flexnode5: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
**************************************************************
[root@flexnode1 bin]#
- Check the node roles:
[root@flexnode1 bin]# ./crsctl get node role status -all
Node 'flexnode1' active role is 'hub'
Node 'flexnode2' active role is 'hub'
Node 'flexnode3' active role is 'hub'
Node 'flexnode4' active role is 'leaf'
Node 'flexnode5' active role is 'leaf'
[root@flexnode1 bin]#
Conclusion: The Oracle 12c Flex Cluster option is useful in large-scale cluster deployments. In a Flex Cluster installation the DBA is not required to configure virtual IP addresses for the cluster nodes, nor to configure the SCAN IP addresses in DNS: all the virtual IP addresses are allocated by the GNS service (Grid Plug and Play, GPnP). The addition of nodes is easier in a Flex Cluster environment, and leaf nodes can be converted to hub nodes very easily (a sketch follows below), which is another good benefit of the Flex Cluster from a scalability standpoint. Leaf nodes never run an active ASM instance; they use a proxy ASM connection in case of node failures to maintain application high availability.
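As a sketch of that leaf-to-hub conversion (run as root on the leaf node; "flexnode4" and the grid home path are taken from this demo, and the role change only takes effect after the Clusterware stack on that node is restarted):
---// hedged sketch: convert leaf node flexnode4 to a hub node //---
[root@flexnode4 ~]# /u01/grid/12.1.0/bin/crsctl set node role hub
[root@flexnode4 ~]# /u01/grid/12.1.0/bin/crsctl stop crs
[root@flexnode4 ~]# /u01/grid/12.1.0/bin/crsctl start crs
[root@flexnode4 ~]# /u01/grid/12.1.0/bin/crsctl get node role config
Because GNS manages the VIPs, the node should pick up its hub-role VIP from DHCP automatically; no manual VIP configuration is needed.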

Wiki Page: Oracle 12c RAC: Quick guide to GIMR administration

Introduction: In my last article, we explored the architecture of the GIMR in 12c. This article describes the various options available to manage and maintain the Grid Infrastructure Management Repository (GIMR). Oracle provides a command-line utility called OCLUMON (Oracle Cluster Monitor), which is part of the CHM (Cluster Health Monitor) component and can be used to perform miscellaneous administrative tasks such as changing the debug levels of logs, changing the repository size/retention, and querying the repository path. Apart from the OCLUMON utility, we have a set of SRVCTL commands which can be used to perform various administrative tasks on the management repository resources. In the upcoming sections, we are going to explore both the OCLUMON and SRVCTL utilities for administering the GIMR repository and its resources.
How to find the repository version
Cluster Health Monitor (CHM) is the primary component which collects Clusterware diagnostic data and persists that data in the repository database (MGMTDB). Oracle provides a utility called OCLUMON which can be used to manage the CHM components as well as the associated diagnostic repository. We can use the following command to find the version of the OCLUMON utility, which in turn tells us the version of CHM and its repository.
---// command to find OCLUMON version //---
$GRID_HOME/bin/oclumon version
Example:
---// checking CHM version //---
myracserver1 {/home/oracle}: oclumon version
Cluster Health Monitor (OS), Version 12.1.0.2.0 - Production Copyright 2007, 2014 Oracle. All rights reserved.
How to find the repository location
CHM persists the diagnostic data in the management repository database (MGMTDB), which consists of a set of datafiles. We can use the following OCLUMON command to locate the database file (datafile) in the MGMTDB database that is associated with the GIMR repository.
---// command to find repository path //---
$GRID_HOME/bin/oclumon manage -get reppath
Example:
---// locating GIMR repository path //---
myracserver2 {/home/oracle}: oclumon manage -get reppath
CHM Repository Path = /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf
myracserver2 {/home/oracle}:
From this output we can also verify that the file actually belongs to the pluggable database (PDB) created during the MGMTDB database creation.
---// validating repository path against MGMTDB //---
SQL> select con_id,name,open_mode
  2  from v$pdbs
  3  where con_id=
  4  (
  5  select con_id
  6  from v$datafile
  7  where name='/data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf'
  8  );
CON_ID NAME OPEN_MODE ---------- ------------------------------ ---------- 3 MY_RAC_CLUSTER READ WRITE
How to find the repository size/retention
The diagnostic data in the GIMR repository database is retained based on the size/retention defined for the repository. Once the size/retention threshold is reached, the diagnostic data is overwritten. We can use the following OCLUMON command to find the current size of the GIMR repository.
---// command to find repository size/retention //---
$GRID_HOME/bin/oclumon manage -get repsize
Example:
---// finding repository retention/size //---
myracserver2 {/home/oracle}: oclumon manage -get repsize
CHM Repository Size = 136320 seconds
myracserver2 {/home/oracle}:
Here is the catch: OCLUMON never shows the size of the repository in storage units (KB/MB/GB); rather, it displays the size of the repository as a duration (in seconds). This duration indicates the retention time of the repository data. OCLUMON queries the size of the repository, determines how long it can retain data at the current repository size, and displays that information to the user.
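Much of the SQL in this article is executed inside the repository database itself. As a minimal sketch of connecting to it directly (the grid home path and the default "-MGMTDB" SID are assumptions based on this environment's /etc/oratab; the leading hyphen in the SID needs quoting in the shell, and the container name is the cluster-named PDB):
---// hedged sketch: connecting to the MGMTDB repository database //---
myracserver2 {/home/oracle}: export ORACLE_HOME=/app/grid/12.1.0.2
myracserver2 {/home/oracle}: export ORACLE_SID='-MGMTDB'
myracserver2 {/home/oracle}: $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> alter session set container=MY_RAC_CLUSTER;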
To know the actual size of the repository, we can query the database directly, as shown below.
---// query MGMTDB database to find repository size //---
SQL> alter session set container=MY_RAC_CLUSTER;
Session altered.
SQL> show con_name
CON_NAME ------------------------------ MY_RAC_CLUSTER
SQL> select TABLESPACE_NAME,FILE_NAME,BYTES/1024/1024 Size_MB,MAXBYTES/1024/1024 Max_MB,AUTOEXTENSIBLE
  2  from dba_data_files
  3  where file_name='/data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf';
TABLESPACE_NAME FILE_NAME SIZE_MB MAX_MB AUT ---------------- ------------------------------------------------------------------------- ---------- ---------- --- SYSMGMTDATA /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf 2048 0 NO
Note: Replace the container name with your cluster name and the file_name with the output of reppath. We can see our repository is 2 GB in size and the datafile associated with the repository is not autoextensible.
Observation: Oracle by default creates the repository with a 2 GB size (136320 seconds of retention) for a 2-node cluster, regardless of space availability on the underlying file system.
How to change the repository size
We may want to retain the diagnostic data for a specific number of days. In that case, we can increase (change) the repository size to accommodate more diagnostic data using the following OCLUMON command.
---// command to change repository size //---
$GRID_HOME/bin/oclumon manage -repos changerepossize <size in MB>
Example:
---// changing repository size //---
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 2200
The Cluster Health Monitor repository was successfully resized.The new retention is 146400 seconds.
myracserver2 {/home/oracle}:
This command acts in two steps: it first resizes the repository to the specified size (in MB) and then recalculates the retention of the repository based on the new repository size. As we can see here, since we increased the size of the repository from 2048 MB (the default) to 2200 MB, Oracle recalculated the retention against the new size and increased it from 136320 seconds (the default) to 146400 seconds. We can also validate the retention following a resize operation.
---// validating new repository size/retention //---
myracserver2 {/home/oracle}: oclumon manage -get repsize
CHM Repository Size = 146400 seconds
myracserver2 {/home/oracle}:
Internals of the repository resize operation
What did Oracle do to the MGMTDB database during the resize operation? Here is what it did.
---// impact of size change in the repository database //---
SQL> select TABLESPACE_NAME,FILE_NAME,BYTES/1024/1024 Size_MB,MAXBYTES/1024/1024 Max_MB,AUTOEXTENSIBLE
  2  from dba_data_files
  3  where file_name='/data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf';
TABLESPACE_NAME FILE_NAME SIZE_MB MAX_MB AUT ---------------- ------------------------------------------------------------------------ ---------- ---------- --- SYSMGMTDATA /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf 2200 0 NO
It has resized the datafile in the database internally. We can also verify this by viewing the MGMTDB database alert log file.
How to change the repository retention
Technically, there is no command available to change the retention of the data stored in the repository. However, there is an alternative way to do it.
We can use the OCLUMON utility to check whether a desired retention can be set for the repository, using the following command.
---// command to check if a specific retention can be set //---
$GRID_HOME/bin/oclumon manage -repos checkretentiontime <retention in seconds>
Example:
---// checking if retention 260000 secs can be set //---
myracserver2 {/home/oracle}: oclumon manage -repos checkretentiontime 260000
The Cluster Health Monitor repository is too small for the desired retention. Please first resize the repository to 3908 MB
I know, you have figured it out! I wanted to change the retention of the repository to 260000 seconds, and used the command "oclumon manage -repos checkretentiontime 260000" to see whether that retention could be set. Oracle came back and asked me to increase the size of the repository to 3908 MB in order to be able to set that retention. Here is the simple interpretation: changing the repository retention period is a two-phase process.
- Use checkretentiontime to find how much more space needs to be added to the repository to satisfy the desired retention.
- Use changerepossize to change the size of the repository in order to meet the desired retention.
If the desired retention is less than the current retention, checkretentiontime will show output like the following.
myracserver2 {/home/oracle}: oclumon manage -repos checkretentiontime 136320
The Cluster Health Monitor repository can support the desired retention for 2 hosts
How to purge repository data
There is no need to manually purge the repository, as this is automatically taken care of by the cluster logger service (ologgerd) based on the repository size and retention setup. However, if desired, we can simulate a purge of the repository by decreasing the repository size using the OCLUMON changerepossize command, as shown below.
---// trick to manually purge repository data //---
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 100
Warning: Entire data in Cluster Health Monitor repository will be deleted.Do you want to continue(Yes/No) ? No
Operation aborted on user request
What we did here is attempt to decrease the size of the GIMR repository, which would in turn delete all the data stored in it. Once the data is purged, we can revert the repository size to the required value.
How to locate the cluster logger service
We know that the cluster logger service (ologgerd) of the Cluster Health Monitor (CHM) component is the service responsible for persisting the diagnostic data collected by the system monitor service (osysmond) in the repository (MGMTDB). There is one cluster logger service (ologgerd) running per 32 nodes in a cluster. We can use the following OCLUMON commands to query where the cluster logger services (ologgerd) are running.
---// commands to locate cluster logger services //---
$GRID_HOME/bin/oclumon manage -get alllogger -details (lists all logger services available in the cluster)
$GRID_HOME/bin/oclumon manage -get mylogger (lists the logger service for the current cluster node)
Example:
---// listing all logger services in the cluster //---
myracserver2 {/home/oracle}: oclumon manage -get alllogger -details
Logger = myracserver2 Nodes = myracserver1,myracserver2
In this particular example, I have only one cluster logger service (ologgerd), running on node myracserver2, and it is logging diagnostic data for nodes myracserver1 and myracserver2.
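Once you know which node the logger runs on, you can also pull the collected metrics back out of the repository with OCLUMON. A minimal sketch (node name from this environment; the five-minute window is an arbitrary choice):
---// hedged example: dump the last five minutes of CHM data for a node //---
myracserver2 {/home/oracle}: oclumon dumpnodeview -n myracserver1 -last "00:05:00"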
How to change the logging level
We know that Cluster Health Monitor (CHM) monitors real-time operating system and Clusterware metrics and logs them in the GIMR repository database. By default, the CHM logging level is set to 1, which collects basic diagnostic data. At times we may need to change the CHM logging level to collect extended diagnostic data. That can be done using the following OCLUMON command.
---// command to change CHM logging levels //---
$GRID_HOME/bin/oclumon debug [log daemon module:log_level]
The supported daemons and their respective modules and log levels are listed below.
DAEMON MODULE LOG LEVEL
osysmond CRFMOND, CRFM, allcomp 0, 1, 2, 3
ologgerd CRFLOGD, CRFLDREP, CRFM, allcomp 0, 1, 2, 3
client OCLUMON, CRFM, allcomp 0, 1, 2, 3
all allcomp 0, 1, 2, 3
Example: The following command sets the logging level of the cluster logger service (ologgerd) to level 3.
---// changing CHM loggerd logging to level 3 //---
myracserver2 {/home/oracle}: oclumon debug log ologgerd CRFLOGD:3
Manage repository resources with SRVCTL commands
With the introduction of the GIMR, we have two additional resources, ora.mgmtdb and ora.MGMTLSNR, added to the Clusterware stack. Oracle provides a dedicated set of SRVCTL commands to monitor and manage these two new Clusterware resources. The following are the new SRVCTL commands specific to the GIMR resources (MGMTDB and MGMTLSNR).
---// list of srvctl commands available to operate on GIMR resources //---
myracserver2 {/home/oracle}: srvctl -h | grep -i mgmt | sort | awk -F ":" '{print $2}'
srvctl add mgmtdb [-domain ]
srvctl add mgmtlsnr [-endpoints "[TCP
srvctl config mgmtdb [-verbose] [-all]
srvctl config mgmtlsnr [-all]
srvctl disable mgmtdb [-node ]
srvctl disable mgmtlsnr [-node ]
srvctl enable mgmtdb [-node ]
srvctl enable mgmtlsnr [-node ]
srvctl getenv mgmtdb [-envs "[,...]"]
srvctl getenv mgmtlsnr [ -envs "[,...]"]
srvctl modify mgmtdb [-pwfile ] [-spfile ]
srvctl modify mgmtlsnr -endpoints "[TCP
srvctl relocate mgmtdb [-node ]
srvctl remove mgmtdb [-force] [-noprompt] [-verbose]
srvctl remove mgmtlsnr [-force]
srvctl setenv mgmtdb {-envs "=[,...]" | -env ""}
srvctl setenv mgmtlsnr { -envs "=[,...]" | -env "="}
srvctl start mgmtdb [-startoption ] [-node ]
srvctl start mgmtlsnr [-node ]
srvctl status mgmtdb [-verbose]
srvctl status mgmtlsnr [-verbose]
srvctl stop mgmtdb [-stopoption ] [-force]
srvctl stop mgmtlsnr [-node ] [-force]
srvctl unsetenv mgmtdb -envs "[,..]"
srvctl unsetenv mgmtlsnr -envs "[,...]"
srvctl update mgmtdb -startoption
myracserver2 {/home/oracle}:
Let's go through a few examples to familiarize ourselves with this new set of commands. We can use the SRVCTL STATUS command to find the current status of the repository database and listener, as shown below.
---// checking MGMTDB status //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver2
---// checking MGMTLSNR status //---
myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver2
We can use the SRVCTL CONFIG commands to find out the current configuration of the repository database and listener, as shown below.
---// finding configuration of MGMTDB //---
myracserver2 {/home/oracle}: srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: /data/clusterfiles/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: my_rac_cluster
PDB service: my_rac_cluster
Cluster name: my-rac-cluster
Database instance: -MGMTDB
---// finding configuration of MGMTLSNR //---
myracserver2 {/home/oracle}: srvctl config MGMTLSNR
Name: MGMTLSNR
Type: Management Listener
Owner: oracle
Home:
End points: TCP:1521
Management listener is enabled.
Management listener is individually enabled on nodes:
Management listener is individually disabled on nodes:
Note: It is not recommended to modify the default configuration of MGMTDB. However, we may choose to modify the default configuration of MGMTLSNR to change the listener port (it listens on port 1521 by default), as shown below.
---// change listener port for MGMTLSNR //---
myracserver2 {/home/oracle}: srvctl modify MGMTLSNR -endpoints "TCP:1540"
---// validate new MGMTLSNR configuration //---
myracserver2 {/home/oracle}: srvctl config MGMTLSNR
Name: MGMTLSNR
Type: Management Listener
Owner: oracle
Home:
End points: TCP:1540
Management listener is enabled.
Management listener is individually enabled on nodes:
Management listener is individually disabled on nodes:
Similarly, we can use the other commands: SRVCTL MODIFY to change MGMTDB and MGMTLSNR properties, SRVCTL SETENV to set a specific environment for MGMTDB and MGMTLSNR, SRVCTL DISABLE to disable the MGMTDB and MGMTLSNR resources, SRVCTL REMOVE to remove MGMTDB and MGMTLSNR from the Clusterware stack, and so on.
How to perform a manual failover (relocation) of the repository resources
The management repository resources (ora.mgmtdb and ora.MGMTLSNR) are entirely managed by the Clusterware stack, which takes care of failing the repository database resources over to another available node when the hosting node fails. However, we can also manually fail these resources over to another cluster node when desired. We can make use of the SRVCTL RELOCATE MGMTDB command to relocate the repository database resources from one cluster node to another, as shown below.
---// command to relocate repository resources //---
srvctl relocate mgmtdb -node <target_node>
Example:
---// we have a two node cluster with nodes myracserver1 and myracserver2 //---
myracserver2 {/home/oracle}: olsnodes
myracserver1
myracserver2
---// repository database resources are running on myracserver2 //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver2
myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver2
---// relocating repository database resources to myracserver1 //---
myracserver2 {/home/oracle}: srvctl relocate mgmtdb -node myracserver1
---// validate the repository resources are relocated //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver1
myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver1
Relocating the repository database MGMTDB also results in automatic relocation of the repository database listener, as seen in the previous example. This type of manual relocation is very useful during planned maintenance of the hosting cluster node.
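After a relocation, you can also cross-check the hosting node from the CHM side; as described in the companion article on the GIMR architecture, OCLUMON reports the node hosting the repository. A quick sketch (the output should now name the new hosting node):
---// hedged check: confirm the node hosting the repository //---
myracserver1 {/home/oracle}: oclumon manage -get master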
Conclusion
In this article, we explored the various options available to administer and manage the Grid Infrastructure Management Repository, and we saw a few tricks that can be used to alter the repository attributes/characteristics based on specific requirements. Oracle provides a rich set of commands to monitor and manage the repository and its associated Clusterware components.

Wiki Page: Oracle 12c RAC: Introduction to Grid Infrastructure Management Repository (GIMR)

What is the Grid Infrastructure Management Repository (GIMR)?
The Oracle Grid Infrastructure Management Repository is a container (store) that is used to preserve diagnostic information collected by the Cluster Health Monitor (i.e. CHM/OS or ora.crf), as well as to store other information related to Oracle Database QoS Management, Rapid Home Provisioning, etc. However, it is primarily used to maintain diagnostic data collected by the Cluster Health Monitor (CHM), which detects and analyzes operating system (OS) and Clusterware (GI) resource-related failures and degradation.
Brief about the Cluster Health Monitor (CHM)
The Cluster Health Monitor is an Oracle Clusterware (GI) component which monitors and analyzes the Clusterware as well as operating system resources and collects information related to any failure or degradation of those resources. CHM runs as a Clusterware resource and is identified by the name ora.crf. The status of the CHM resource can be queried using the following command.
---// syntax to check status of cluster health monitor //---
$GRID_HOME/bin/crsctl status res ora.crf -init
Example:
---// checking status of CHM //---
myracserver2 {/home/oracle}: crsctl status resource ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on myracserver2
CHM makes use of two services to collect the diagnostic data, as described below.
System Monitor Service (osysmond): The system monitor service (osysmond) is a real-time monitoring and operating system metric collection service that runs on each cluster node. The collected metrics are then forwarded to the cluster logger service (ologgerd), which stores the data in the Grid Infrastructure Management Repository (GIMR) database.
Cluster Logger Service (ologgerd): In a cluster, there is one cluster logger service (ologgerd) per 32 nodes. Additional logger services are spawned for every additional 32 nodes. As mentioned earlier, the cluster logger service (ologgerd) is responsible for persisting the data collected by the system monitor service (osysmond) in the repository database. If the logger service fails and is not able to come up after a fixed number of retries, Oracle Clusterware will relocate and start the service on a different node.
Example: In the following two-node cluster (myracserver1 and myracserver2), we have the system monitor service (osysmond) running on both myracserver1 and myracserver2, whereas the cluster logger service (ologgerd) is running only on myracserver2 (since we can have only one logger service per 32 cluster nodes).
---// we have a two node cluster //---
myracserver2 {/home/oracle}: olsnodes
myracserver1
myracserver2
---// system monitor service running on first node //---
myracserver1 {/home/oracle}: ps -ef | grep osysmond
oracle 24321 31609 0 03:23 pts/0 00:00:00 grep osysmond
root 2529 1 0 Aug27 ? 00:07:48 /app/grid/12.1.0.2/bin/osysmond.bin
myracserver1 {/home/oracle}:
---// system monitor service running on second node //---
myracserver2 {/home/oracle}: ps -ef | grep osysmond
oracle 24321 31609 0 03:25 pts/0 00:00:00 grep osysmond
root 2526 1 0 Aug27 ? 00:07:20 /app/grid/12.1.0.2/bin/osysmond.bin
myracserver2 {/home/oracle}:
---// cluster logger service running on second node //---
myracserver2 {/home/oracle}: ps -ef | grep ologgerd
oracle 25874 31609 0 03:27 pts/0 00:00:00 grep ologgerd
root 30748 1 1 Aug27 ? 00:12:31 /app/grid/12.1.0.2/bin/ologgerd -M -d /app/grid/12.1.0.2/crf/db/myracserver2
myracserver2 {/home/oracle}:
---// cluster logger service not running on first node //---
myracserver1 {/home/oracle}: ps -ef | grep ologgerd
oracle 3519 1948 0 03:27 pts/1 00:00:00 grep ologgerd
myracserver1 {/home/oracle}:
Evolution of the diagnostic repository with 12c
Prior to Oracle Database 12c, the Clusterware diagnostic data was managed in a Berkeley DB (BDB), and the related Berkeley database files were stored by default under the $GRID_HOME/crf/db location.
Example:
---// Clusterware diagnostic repository in 11g //---
racserver1_11g {/home/oracle}: oclumon manage -get reppath
CHM Repository Path = /app/grid/11.2.0.4/crf/db/racserver1_11g
Done
Oracle took a step further with Oracle Database 12c and replaced the Berkeley DB with a single-instance Oracle 12c container database (having a single pluggable database) called the management database (MGMTDB), with its own dedicated listener (MGMTLSNR). This database is completely managed by the Clusterware (GI) and runs as a single-instance database regardless of the number of cluster nodes. Additionally, since MGMTDB is a single-instance database managed by the Clusterware (GI), if the hosting node goes down, the database is automatically failed over to another node by the Clusterware (GI).
GIMR in Oracle Database 12.1.0.1
While installing the Clusterware (GI) software in Oracle Database 12.1.0.1, it was optional to install the Grid Infrastructure Management Repository database (MGMTDB). If it was not installed, Oracle Clusterware (GI) features such as the Cluster Health Monitor (CHM), which depend on it, would be disabled.
GIMR in Oracle Database 12.1.0.2
Oracle has now made it mandatory to install the Grid Infrastructure Management Repository database (MGMTDB) as part of the Clusterware (GI) installation, starting with Oracle Clusterware version 12.1.0.2. We no longer have the option to opt out of installing MGMTDB during the Clusterware (GI) installation.
Overall framework of the GIMR
The following diagram depicts a brief architecture/framework of the Grid Infrastructure Management Repository (GIMR) along with the related components. Considering an "N"-node cluster, we have the GIMR (MGMTDB/MGMTLSNR) running only on a single node, with one cluster logger service (ologgerd) running per 32 nodes and one system monitor service (osysmond) running on every node. Apart from these integrated components, we have the optional RHP (Rapid Home Provisioning) clients, which may communicate with the GIMR (MGMTDB/MGMTLSNR) to persist/query metadata related to Oracle Rapid Home Provisioning. We also have the Trace File Analyzer (tfactl), which can communicate with the GIMR (MGMTDB/MGMTLSNR) to query the diagnostic data stored (persisted by the cluster logger service) in the repository. When the node hosting the GIMR repository fails, all the repository resources (MGMTDB/MGMTLSNR) automatically fail over to another available cluster node, as depicted in the following diagram.
Note: Although the diagram shows the repository (MGMTDB/MGMTLSNR) and the cluster logger service (ologgerd) relocating to the same node upon failure (for representation purposes), the cluster logger service (ologgerd) relocation is independent of the repository (MGMTDB/MGMTLSNR), and it can relocate to any available cluster node.
GIMR space requirement
The average growth size of the repository is approximately 650-750 MB. The space requirement depends entirely on the retention desired for the repository. For example, at the default retention of 3 days, a 4-node cluster would lead to an approximate size of 5.9-6.8 GB. Where the cluster has more than 4 nodes, an additional 500 MB is required for each additional cluster node.
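As a worked example of that sizing rule (purely an illustration, taking the upper bound of the base estimate at the default 3-day retention): a 6-node cluster would need roughly 6.8 GB + (6 - 4) x 500 MB, i.e. about 7.8 GB, for the repository.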
Here are a few test cases that were performed against a two-node cluster to find out the size requirements.
RETENTION (IN DAYS) SPACE REQUIRED (IN MB)
3 days 3896 MB
7 days 9091 MB
10 days 12986 MB
30 days 38958 MB
GIMR database (MGMTDB) location
Starting with Oracle Database 12c, the GIMR database (MGMTDB) is by default created within the same file system/ASM disk group as the OCR or voting disks. During the installation of the Clusterware (GI) binaries, the OUI fetches the locations (ASM disk group/file system) of the OCR and voting disks and utilizes the first location to create the datafiles for the MGMTDB database. For example, if we have the following locations for the OCR or voting disks:
---// voting disk location //---
myracserver2 {/home/oracle}: crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 38aaf08ea3c74ffabfd258876dd6f97c (/data/clusterfiles/copy1/VOTE-disk01) []
2. ONLINE 97d73bdbe42c4fa4bfa3c3cb7d741583 (/data/clusterfiles/copy2/VOTE-disk02) []
3. ONLINE 00b6b258e0724f6cbf1dc6a03d15fd87 (/data/clusterfiles/copy3/VOTE-disk03) []
Located 3 voting disk(s).
Oracle Universal Installer (OUI) will choose the first location, i.e. (/data/clusterfiles/copy1), to create the datafiles for the repository database MGMTDB. This can be a problem if we have limited space available on the underlying file system and we want a higher retention for the diagnostic data in the repository. It also has the potential to impact OCR/voting disk availability. We can, however, relocate the MGMTDB database to a different storage location manually as per MOS Note 1589394.1, or using the MDBUtil tool as per MOS Note 2065175.1.
GIMR Clusterware (GI) components
With the introduction of the repository database MGMTDB, we now have two additional components included in the Clusterware stack: ora.mgmtdb (the repository database resource) and ora.MGMTLSNR (the repository database listener), as shown below.
---// GIMR Clusterware resources //---
myracserver2 {/home/oracle}: crsstat | grep -i mgmt
ora.MGMTLSNR mgmtlsnr ONLINE ONLINE on myracserver2 192.168.230.15 10.205.87.231
ora.mgmtdb mgmtdb ONLINE ONLINE on myracserver2 Open
Unlike a generic Clusterware database or listener resource, these two resources have their own set of Clusterware commands, as listed below.
---// list of srvctl commands available to operate on GIMR resources //---
myracserver2 {/home/oracle}: srvctl -h | grep -i mgmt | sort | awk -F ":" '{print $2}'
srvctl add mgmtdb [-domain ]
srvctl add mgmtlsnr [-endpoints "[TCP
srvctl config mgmtdb [-verbose] [-all]
srvctl config mgmtlsnr [-all]
srvctl disable mgmtdb [-node ]
srvctl disable mgmtlsnr [-node ]
srvctl enable mgmtdb [-node ]
srvctl enable mgmtlsnr [-node ]
srvctl getenv mgmtdb [-envs "[,...]"]
srvctl getenv mgmtlsnr [ -envs "[,...]"]
srvctl modify mgmtdb [-pwfile ] [-spfile ]
srvctl modify mgmtlsnr -endpoints "[TCP
srvctl relocate mgmtdb [-node ]
srvctl remove mgmtdb [-force] [-noprompt] [-verbose]
srvctl remove mgmtlsnr [-force]
srvctl setenv mgmtdb {-envs "=[,...]" | -env ""}
srvctl setenv mgmtlsnr { -envs "=[,...]" | -env "="}
srvctl start mgmtdb [-startoption ] [-node ]
srvctl start mgmtlsnr [-node ]
srvctl status mgmtdb [-verbose]
srvctl status mgmtlsnr [-verbose]
srvctl stop mgmtdb [-stopoption ] [-force]
srvctl stop mgmtlsnr [-node ] [-force]
srvctl unsetenv mgmtdb -envs "[,..]"
srvctl unsetenv mgmtlsnr -envs "[,...]"
srvctl update mgmtdb -startoption
myracserver2 {/home/oracle}:
Locating the GIMR database (MGMTDB)
The repository database (MGMTDB) always runs as a single-node instance. We can locate the node hosting MGMTDB in any of the following ways.
Using SRVCTL commands
To locate the MGMTDB database: srvctl status mgmtdb
To locate the MGMTLSNR listener: srvctl status mgmtlsnr
Example:
---// use srvctl to find MGMTDB //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver2
---// use srvctl to find MGMTLSNR //---
myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver2
Using CRSCTL commands
To locate the MGMTDB database: $GRID_HOME/bin/crsctl status resource ora.mgmtdb
To locate the MGMTLSNR listener: $GRID_HOME/bin/crsctl status resource ora.MGMTLSNR
Example:
---// use crsctl to find MGMTDB //---
myracserver2 {/home/oracle}: crsctl status resource ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on myracserver2
---// use crsctl to find MGMTLSNR //---
myracserver2 {/home/oracle}: crsctl status resource ora.MGMTLSNR
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on myracserver2
Using the OCLUMON utility
To locate the node hosting the repository: $GRID_HOME/bin/oclumon manage -get master
Example:
---// use oclumon utility to locate node hosting GIMR //---
myracserver2 {/home/oracle}: oclumon manage -get master
Master = myracserver2
myracserver2 {/home/oracle}:
On the hosting node, we can identify the processes associated with these repository database resources as follows.
---// locating MGMTDB on the master node //---
myracserver2 {/home/oracle}: ps -ef | grep pmon | grep MGMT
oracle 2891 1 0 06:35 ? 00:00:01 mdb_pmon_-MGMTDB
---// locating MGMTLSNR on the master node //---
myracserver2 {/home/oracle}: ps -ef | grep tns | grep MGMT
oracle 17666 1 0 05:23 ? 00:00:00 /app/grid/12.1.0.2/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
myracserver2 {/home/oracle}:
The repository database (MGMTDB) is by default associated with the SID "-MGMTDB", and an equivalent entry can be located in /etc/oratab, as shown below.
---// oratab entry for GIMR database MGMTDB //---
myracserver2 {/home/oracle}: grep -i mgmt /etc/oratab
-MGMTDB:/app/grid/12.1.0.2:N
myracserver2 {/home/oracle}:
Exploring the GIMR database (MGMTDB)
As mentioned earlier in the introductory section, the repository database MGMTDB is created during the Clusterware installation process. This database is a single-instance container database (CDB) and has only one pluggable database (PDB) associated with it, apart from the seed database. The pluggable database is the actual repository holding all the diagnostic information. The container (CDB) database is named _MGMTDB, whereas the pluggable (PDB) database is named after the cluster name (with any hyphen "-" replaced by an underscore "_" in the cluster name).
Example:
---// GIMR container database MGMTDB information //---
SQL> select name,db_unique_name,host_name,cdb from v$database,v$instance;
NAME DB_UNIQUE_NAME HOST_NAME CDB --------- ------------------------------ -------------------- --- _MGMTDB _mgmtdb myracserver2 YES
---// pluggable database holding the actual data //---
SQL> select CON_ID,DBID,NAME,OPEN_MODE from v$containers;
CON_ID DBID NAME OPEN_MODE
---------- ---------- ------------------------------ ----------
1 1091149818 CDB$ROOT READ WRITE --> root container CDB
2 1260861561 PDB$SEED READ ONLY --> seed database
3 521100791 MY_RAC_CLUSTER READ WRITE --> actual repository
---// actual repository is named after the cluster name //---
myracserver2 {/home/oracle}: olsnodes -c
my-rac-cluster
Note: Where the cluster name has a hyphen (-) in it, the hyphen is replaced by an underscore (_) when naming the pluggable (PDB) database.
The management repository database (MGMTDB.[pdb_name]) comprises the following tablespaces.
TABLESPACE_NAME FILE_NAME SIZE_MB MAX_MB AUT
SYSAUX /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysaux__2318922894015_.dbf 150 32767.9844 YES
SYSGRIDHOMEDATA /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysgridh__2318922910141_.dbf 100 32767.9844 YES
SYSMGMTDATA /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__2318922860778_.dbf 2048 0 NO
SYSMGMTDATADB /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__2318922925741_.dbf 100 0 NO
SYSTEM /data/clusterfiles/_MGMTDB/datafile/o1_mf_system__2318922876379_.dbf 160 32767.9844 YES
USERS /data/clusterfiles/_MGMTDB/datafile/o1_mf_users__2318922940656_.dbf 5 32767.9844 YES
Where:
SYSMGMTDATA - This is the primary tablespace and the repository which is used to store the diagnostic data collected by the Cluster Health Monitor (CHM) tool.
SYSMGMTDATADB - There are not many details available about this tablespace, and by default it doesn't contain any objects. However, I assume it has something to do with the Change Assistant.
SYSGRIDHOMEDATA - This tablespace is used to store data related to Rapid Home Provisioning (in a cloud database context). By default, it doesn't contain any objects.
Note: In this example, the repository datafiles are not in the default location (the OCR/voting disk file system); I had relocated the repository to a different storage location.
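To see how much of the repository space CHM has actually consumed, a simple query against the standard dictionary views will do. A hedged sketch (tablespace names as shown above; run it inside the cluster-named PDB):
---// hedged example: space consumed in the repository tablespaces //---
SQL> select tablespace_name, round(sum(bytes)/1024/1024) used_mb
  2  from dba_segments
  3  where tablespace_name in ('SYSMGMTDATA','SYSGRIDHOMEDATA','SYSMGMTDATADB')
  4  group by tablespace_name;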
These tablespaces are mapped to the following list of users.
---// database users owning repository objects/data //---
SQL> select username,account_status,default_tablespace from dba_users
  2  where default_tablespace in ('SYSGRIDHOMEDATA','SYSMGMTDATA','SYSMGMTDATADB');
USERNAME ACCOUNT_STATUS DEFAULT_TABLESPACE
-------------- -------------------------------- ------------------------------
GHSUSER EXPIRED & LOCKED SYSGRIDHOMEDATA
CHM OPEN SYSMGMTDATA --> User mapped to the Cluster Health Monitor (CHM)
CHA EXPIRED & LOCKED SYSMGMTDATADB
By default, only the CHM database account is unlocked; it is used by the Cluster Health Monitor (CHM) to store the Clusterware diagnostic data in the database. The GHSUSER account is used for Rapid Home Provisioning (in a cloud database context) and comes into the picture only when Rapid Home Provisioning is used. The CHA account is related to the Cluster Health Advisor (CHA), an improved version of the Cluster Health Monitor (CHM) expected in an upcoming release of Oracle Clusterware.
Conclusion
The Clusterware diagnostic repository has evolved a lot with Oracle Database 12c. Having a dedicated Oracle database for the repository brings more clarity in terms of how the diagnostic data is stored by Oracle, and it opens up multiple ways to query that diagnostic data. Oracle is likely to leverage the GIMR to store a variety of diagnostic and management data in upcoming releases. With organizations rapidly moving to Oracle cloud infrastructure, this repository database is also going to be used extensively for storing the provisioning metadata used by cloud deployments.