
An interesting Exadata DB migration scenario (XTTS with Incr backup + ORA-00059)

SUMMARY

In the previous article, "The need of refreshing/copying an Oracle 12c Password File (stored in ASM) while configuring Active Data Guard of a RAC database", I discussed how we resolved an interesting scenario encountered during our ongoing Exadata database migration engagement. In this article, I am going to share yet another interesting scenario and its resolution. Although these scenarios appear minor, the resolution can definitely save you from wasting a lot of time and redoing the whole procedure; in our case, it saved us from redoing a full week's work.

PROBLEM

As part of our ongoing Exadata database migration engagement at one of our clients, we face upfront challenges such as minimizing the required downtime for some business-critical databases, which pushes us to try out-of-the-box ideas to meet the business demands. Let me describe an interesting, and I must say pretty tricky, situation we encountered a few days back.

During the course of a 30 TB cross-platform database migration, we started off with all the prerequisites and completed the full backup, conversion, and roll-forward phases. When we reached the database metadata import, we encountered the error below:

Import: Release 12.1.0.2.0 - Production on Sun Jul 10 15:06:56 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options
Startup took 2 seconds
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Source time zone is -07:00 and target time zone is +03:00.
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=auimpxtts.dat
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-00059: maximum number of DB_FILES exceeded

When we hit the problem, it was kind of a heart-stopping moment (just kidding), because it had taken around 7 days to reach this phase. Yes, you heard correctly, 7 days: we are talking about a 30 TB database over a pretty slow network. And because it is a critical production system, it was very difficult to get another downtime window to run the final steps and complete the XTTS procedure.

In a typical situation, simply increasing the value of the db_files initialization parameter and restarting the database would have been perfectly adequate. But the conditions we were in were different.

WORKAROUND

Although ORA-00059 is not a fatal error, our scenario was special: any wrong move could easily have ruined the past 7 days of work and preparation. The first thing we did was increase the db_files value to match the source database, restart the instance, and perform the metadata import once again.
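For reference, that first step looks roughly as follows; a minimal sketch, assuming an spfile-managed RAC instance (the value 1500 and the database name MYDB are illustrative, not our actual settings):

-- Check the current limit on the target
SELECT value FROM v$parameter WHERE name = 'db_files';

-- Raise it to (at least) the source database's value; db_files is a
-- static parameter, so the change can only be made with SCOPE=SPFILE
ALTER SYSTEM SET db_files = 1500 SCOPE=SPFILE SID='*';

-- Restart the database so the new limit takes effect, e.g. on RAC:
--   srvctl stop database -d MYDB
--   srvctl start database -d MYDB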
Unfortunately, the attempt failed again, this time with a different error:

Import: Release 12.1.0.2.0 - Production on Sun Jul 10 15:19:52 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options
Startup took 4 seconds
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Source time zone is -07:00 and target time zone is +03:00.
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=auimpxtts.dat
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29349: tablespace 'APPS_TS_ARCHIVE' already exists
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at Sun Jul 10 15:20:27 2016 elapsed 0 00:00:31

Evidently, the first aborted run had already plugged in this tablespace before it hit the DB_FILES limit. Because the import is driven by the automated XTTS-with-incremental-backup procedure, we could not customize or modify it so that it would resume from the point of failure. And since the metadata import had not completed successfully, we were not completely sure which option to go for; two ideas came to mind:

1. Rename the tablespace and restart the metadata import.
2. Drop the tablespace and restart the metadata import.

We were not completely sure which option would work for us, and at the same time we needed to make sure we did not lose our 7 days of work. After a few brainstorming sessions within the team and some searching on the internet, we decided to rehearse the steps first, using a small tablespace on a different database.

When we tried the rename option (ALTER TABLESPACE ... RENAME TO), the metadata import failed with the following error:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options
Startup took 0 seconds
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01": /******** AS SYSDBA parfile=impttstest.dat
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-01537: cannot add file '+DATA/DB1/DATAFILE/tts_test.1214.916851897' - file already part of database

In hindsight this was obvious: after the rename, the datafiles still belong to the (renamed) tablespace, so they cannot be added to another tablespace.

We then tried the DROP TABLESPACE option, thinking it would drop only the tablespace metadata. However, without any additional clauses, our DROP TABLESPACE statement removed the datafiles as well: for Oracle-managed files, deleting the datafiles is the default behavior. We were somewhat disappointed, but in no mood to lose hope. There had to be a way to drop just the metadata and leave the physical datafiles on disk, and the full DROP TABLESPACE syntax confirmed it: the KEEP DATAFILES clause. We dropped the tablespace with the KEEP DATAFILES option, as sketched below, and performed the metadata import once more.
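Here is a minimal sketch of the rehearsal on the test database; the tablespace name TTS_TEST and the parfile name come from our test logs above, and the exact impdp command line shown is illustrative:

-- Drop only the tablespace metadata; KEEP DATAFILES tells Oracle to
-- leave the underlying (Oracle-managed) datafiles in ASM untouched.
-- Without it, OMF datafiles are deleted when the tablespace is dropped.
DROP TABLESPACE tts_test INCLUDING CONTENTS KEEP DATAFILES;

-- With the datafiles still in place, the transportable metadata
-- import can simply be re-run, e.g.:
--   impdp "/ as sysdba" parfile=impttstest.dat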
This time, we were happy to see that our solution worked:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options
Startup took 1 seconds
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01": /******** AS SYSDBA parfile=impttstest.dat
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Completed 1 PLUGTS_BLK objects in 0 seconds
Processing object type TRANSPORTABLE_EXPORT/TABLE
Completed 1 TABLE objects in 1 seconds
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Completed 1 TABLE_STATISTICS objects in 0 seconds
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Completed 1 MARKER objects in 9 seconds
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Completed 1 PLUGTS_BLK objects in 0 seconds
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Sun Jul 10 17:17:53 2016 elapsed 0 00:00:11

As mentioned earlier, the problem itself is realistically simple, but the circumstances we were in were very unusual, and it was not easy to find a solution that matched our requirements. Our objective was to save all of the 7 days' work we had put in. In the end, we were all glad that we ran a small test first rather than applying the option directly on the target database. Once we applied the solution on the target database, the metadata import was restarted and succeeded without any issues.

Take away

Any typical cross-platform database migration (XTTS) requires creating a shell database on the target system. As a best practice, to ensure nothing from the source database configuration is left out (especially the initialization parameters), generate a database template of the source database and create the shell database from that template on the target system. This should ensure the shell database is configured exactly like the source database.

Although we followed this very practice, we still ended up with the db_files issue explained in this article. In the postmortem we did afterwards, we learned that, despite the template coming from the source, DBCA somehow does not pick up the right value for the db_files parameter. Surprisingly, in the database creation scripts, which we had opted to generate while creating the shell database with DBCA, and in the init.ora file, db_files was configured with a value of 1000; yet the database DBCA actually created was limited to 200 files only. Apparently this could be a bug, and perhaps it requires an escalation with Oracle Support. For now, it is better to cross-verify the initialization parameter values between the source and target databases before starting the XTTS procedure; a quick check is sketched at the end of this post.

CONCLUSION

The error is plain vanilla, but the scenario was very tricky. The objective was to save the hard work we had done and to succeed with the options we had thought through. I hope this interesting scenario helps someone, somewhere on the planet.
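As promised in the takeaway above, here is a minimal sanity check; run it on both the source and the target before starting XTTS and compare the output (the parameter list is only a suggestion, extend it as needed):

-- Run on BOTH source and target, then compare the results
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_files', 'compatible', 'db_block_size')
 ORDER BY name;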
