
Saturday, 26 December 2015

Highest Time Taken to Complete Full and Incremental Backups

select rownum as rank
       ,name
       , DECODE(backup_type,'D','Full','I','Incremental'
           ,'L','Archive Log',backup_type) backup_type
       ,round(max_secs/60) mins
FROM ( select name , backup_type , max(elapsed_seconds)  max_secs
     from rc_backup_set bs, rc_database d
     where bs.db_key = d.db_key
     group by name, backup_type
     order by max(elapsed_seconds) desc
     )
WHERE rownum <= 10
order by backup_type;

Note: This query reports the highest time taken to complete a full or incremental backup, not the average time. The maximum time can be skewed by conditions on any particular day's backup, so always consider the average time as well.
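Since the note recommends looking at averages, a companion query against the same RC_BACKUP_SET and RC_DATABASE catalog views (a sketch, not from the original post) might look like:

```sql
-- Average elapsed time per backup type, per database (minutes)
SELECT name,
       DECODE(backup_type, 'D', 'Full', 'I', 'Incremental',
              'L', 'Archive Log', backup_type) backup_type,
       ROUND(AVG(elapsed_seconds)/60) avg_mins
FROM   rc_backup_set bs, rc_database d
WHERE  bs.db_key = d.db_key
GROUP  BY name, backup_type
ORDER  BY name, backup_type;
```

Comparing a given run against this average makes it easier to spot a genuinely abnormal backup.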

Sunday, 8 March 2015

How to change Database Default Tablespace for users

1. Create a new tablespace KAFA_ITGFIN_DBF for schema KAFAITGFIN:
create tablespace KAFA_ITGFIN_DBF datafile
'D:\ORACLE\ORA92\MUJAZORC\KAFA_ITGFIN_DBF.DBF' size 5120m autoextend on;
2. Take an export of schema KAFAFIN to a separate drive.
EXP SYSTEM/SYSMAN@mujazorc OWNER=KAFAFIN FILE=F:\Dump\kafafin_16022015.DMP GRANTS=Y BUFFER=10000 COMPRESS=Y ROWS=Y  LOG= F:\Dump\kafafin_16022015.log
3. Now create a new schema, or if the schema already exists, drop and recreate it.
DROP USER KAFAITGFIN CASCADE;

CREATE USER KAFAITGFIN
  IDENTIFIED BY KAFAITGFIN
  DEFAULT TABLESPACE KAFA_ITGFIN_DBF
  TEMPORARY TABLESPACE TEMP
  PROFILE DEFAULT
  ACCOUNT UNLOCK;
  -- 1 Role for KAFAITGFIN
  GRANT DBA TO KAFAITGFIN WITH ADMIN OPTION;
  ALTER USER KAFAITGFIN DEFAULT ROLE ALL;
  -- 1 System Privilege for KAFAITGFIN
  GRANT UNLIMITED TABLESPACE TO KAFAITGFIN WITH ADMIN OPTION;
  -- 1 Tablespace Quota for KAFAITGFIN
  ALTER USER KAFAITGFIN QUOTA UNLIMITED ON KAFA_ITGFIN_DBF;

alter user KAFAITGFIN quota 0 on KAFA_FIN_DBF quota unlimited on KAFA_ITGFIN_DBF;
alter user KAFAITGFIN quota 0 on MUJ_FIN_DBF quota unlimited on KAFA_ITGFIN_DBF;
Note: In the example above, tables belonging to the same user are spread across two different tablespaces.

When changing the default tablespace for a user within the same database, don't forget to grant unlimited quota on the new tablespace and revoke the unlimited quota on the old one. This may not be required when moving a user to a different database.

REVOKE UNLIMITED TABLESPACE FROM KAFAITGFIN;

4. Now try to import the dump you have taken. Make sure the dump has been copied properly to the drive.

IMP SYSTEM/sysman@MUJAZORC FILE=F:\Dump\kafafin_16022015.DMP FROMUSER=KAFAFIN TOUSER=AWAED LOG=F:\Dump\kafafin_16022015.DMP.log
Don't forget to grant unlimited quota on the AWAED tablespace back to the user:
alter user awaed DEFAULT tablespace awaed QUOTA UNLIMITED ON awaed;

How to Move Tables into a new tablespace of another database:
This applies when a tablespace was wrongly assigned and you want to move the affected user's tables to a different tablespace. First identify the tables you want to move:
Select owner, table_name, tablespace_name from dba_tables where owner = 'ORAFIN';
OWNER    TABLE_NAME     TABLESPACE_NAME
ORAFIN   ACCFILE        MUJ_FIN_DBF
ORAFIN   ACCOUNT_TRN    MUJ_FIN_DBF
ORAFIN   ACCOUNT_TRN1   MUJ_FIN_DBF
ORAFIN   APPLICATION    MUJ_FIN_DBF
ORAFIN   ASSETS1        MUJ_FIN_DBF
ORAFIN   ASSETS1_ALL    MUJ_FIN_DBF
ORAFIN   ASSETS_ALL     MUJ_FIN_DBF
ORAFIN   ASSETS_ALL_HI  MUJ_FIN_DBF
ORAFIN   AUT_DISC       MUJ_FIN_DBF
ORAFIN   AUT_LABEL      MUJ_FIN_DBF

Now you can use the ALTER TABLE command to move those tables to the new tablespace. If a moved table has indexes, you need to rebuild them; otherwise nothing further is needed.

Alter table <table_name> move tablespace KAFA_ITGFIN_DBF;
Alter index <index_name> rebuild tablespace new_tablespace_name;

Generally, normal indexes must be moved explicitly, LOB indexes are moved implicitly, and related constraints are moved to the new tablespace automatically.

Finally, verify that the new schema has the new default tablespace, so that objects created later land in the new tablespace.
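To check the schema's default tablespace (a simple sketch against DBA_USERS, using the user name from the example above):

```sql
-- Verify the user's default tablespace
SELECT username, default_tablespace
FROM   dba_users
WHERE  username = 'KAFAITGFIN';

-- If needed, point the user at the new tablespace
ALTER USER KAFAITGFIN DEFAULT TABLESPACE KAFA_ITGFIN_DBF;
```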

Wednesday, 4 March 2015

Monitoring RMAN Backup with QBR tools

The Quest Backup Reporter is a Windows client application. It can connect to many databases at a time and reports directly from the Oracle data dictionary views and the RMAN catalog. It makes no modification to the databases or the RMAN configuration; it is purely a read-only reporting tool.
Quest Backup Reporter offers the DBA a dashboard view of all Oracle backups on one desktop. This makes it relatively easy for a DBA managing several database backups to check every backup status at once (on a single screen) and to ensure the backups are valid and recoverable. QBR supports Oracle 9i onwards and Windows Server 2003/XP/Vista/Win7/Windows Server 2008 (32- or 64-bit).
QBR requires .NET Framework 2.0-3.5 for installation and does not require an Oracle client for operation: Download Quest Backup Reporter


You can register a target database using the menu Data source –> Register New Data source and access the information through either the control file or the RMAN catalog.
Once the targets are registered, you can check the times of successful and failed backups as well as the real-time status of a running backup.

You can define policies if backups need to meet certain criteria, such as the retention period for each backup type (full, level 0 and other incremental backups, archive log backups); within each type you can also define the days of the week and a maximum run time.
Once a policy has been applied to a target, you can click on the Violations tab to see the types and categories of violations.
The Backup tab gives an overview of the database backups as well as details of a particular backup. If you have already added a recovery catalog, you can see additional details of the registered databases.
Another very useful feature of QBR is tracking the growth of the backup size against its duration, with timelines for each backup type.
Note: The more databases you register in QBR, the longer it takes to load them. Because QBR continuously queries the registered database catalogs, this can also slow down the catalog database and cause other issues.

Sunday, 1 March 2015

How to Monitor RMAN Backup through SQL Query

To monitor RMAN you can use OEM or other tools such as "Quest Backup Reporter for Oracle".
You can also use the following views in Oracle 9i to check RMAN backup status:
v$rman_configuration
v$backup_set
v$backup_piece
v$backup_spfile
v$session_longops
Script to check RMAN Configuration:
SELECT  name "PARAMETERS", value "RMAN CONFIGURATION"
FROM  v$rman_configuration
ORDER BY name;
PARAMETERS                     RMAN CONFIGURATION
BACKUP OPTIMIZATION            ON
CHANNEL                        DEVICE TYPE DISK FORMAT 'H:\ORABACK\%U'
CONTROLFILE AUTOBACKUP         ON
CONTROLFILE AUTOBACKUP FORMAT  FOR DEVICE TYPE DISK TO 'H:\ORABACK\%F'
DEFAULT DEVICE TYPE            TO DISK
RETENTION POLICY               TO RECOVERY WINDOW OF 30 DAYS
Script to List RMAN Backup Piece:
SELECT bs.recid,
DECODE(   bp.status, 'A', 'Available', 'D', 'Deleted', 'X', 'Expired') status , bp.handle
handle, TO_CHAR(bp.start_time, 'dd/mm/yyyy HH24:MI:SS')start_time
, TO_CHAR(bp.completion_time, 'dd/mm/yyyy HH24:MI:SS') completion_time, bp.elapsed_seconds  "ELAPSED"
FROM
    v$backup_set bs JOIN v$backup_piece bp USING (set_stamp,set_count)
WHERE
    bp.status IN ('A', 'X') AND bp.completion_time > sysdate-1
ORDER BY bp.completion_time desc, bs.recid, bp.piece#;
Script to List RMAN Backup Set:
SELECT  bs.recid, DECODE(backup_type, 'L', 'Archived Logs', 'D', 'Datafile Full', 'I', 'Incremental') backup_type,
device_type "type", DECODE(bs.controlfile_included, 'NO', null, bs.controlfile_included) controlfile,
sp.spfile_included spfile, bs.incremental_level L,TO_CHAR(bs.start_time, 'dd/mm/yyyy HH24:MI:SS') start_time
  , TO_CHAR(bs.completion_time, 'dd/mm/yyyy HH24:MI:SS')  completion_time, bs.elapsed_seconds "ELAPSED", bp.tag, bs.block_size "BLOCK"
  FROM  v$backup_set  bs, (select distinct set_stamp, set_count, tag, device_type from v$backup_piece where status in ('A', 'X'))  bp,
   (select distinct set_stamp, set_count, 'YES' spfile_included from v$backup_spfile) sp
WHERE completion_time > sysdate -1
  AND bs.set_stamp = bp.set_stamp
  AND bs.set_count = bp.set_count
  AND bs.set_stamp = sp.set_stamp (+)
  AND bs.set_count = sp.set_count (+)
ORDER BY  completion_time desc, bs.recid;
Script to List Completed RMAN Jobs:
select SID, serial#, opname, START_TIME,TOTALWORK, sofar, (sofar/totalwork) * 100 done,
sysdate + TIME_REMAINING/3600/24 end_at, elapsed_seconds elapsed, username
from v$session_longops
Where totalwork = sofar
AND opname NOT LIKE '%aggregate%'
AND start_time > sysdate -1
AND opname like 'RMAN%';
                                                             
Note: If you need to query the running status of an RMAN backup, change the condition to totalwork > sofar in the WHERE clause.
Script to Monitor RMAN Job in Progress:
SELECT sid, serial#, sofar, totalwork, opname, round(sofar/totalwork*100,2) AS pct_complete
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%' AND totalwork != 0
AND sofar <> totalwork;
Script to list RMAN spfile & Controlfile Backup:
SELECT bs.recid, sp.spfile_included spfile
, TO_CHAR(bs.completion_time, 'dd/mm/yyyy HH24:MI:SS') completion_time
, DECODE(status, 'A', 'Available', 'D', 'Deleted', 'X', 'Expired') status, handle
FROM v$backup_set  bs, v$backup_piece  bp, (select distinct set_stamp, set_count, 'YES' spfile_included
 from v$backup_spfile) sp
WHERE bs.set_stamp = bp.set_stamp
  AND bs.completion_time > sysdate -1
  AND bs.set_count = bp.set_count
  AND bp.status IN ('A', 'X')
  AND bs.set_stamp = sp.set_stamp
  AND bs.set_count = sp.set_count
ORDER BY  bs.completion_time desc, bs.recid, piece#;
Apart from the above views, you can also use the views below in Oracle 9i to find the size of an RMAN backup:
v$backup_datafile
v$backup_redolog
Script to find size of Datafile and Archivelog Backup:
select trunc(completion_time) "BACKUP_DATE", sum(blocks*block_size)/1024/1024 "DBF_BACKUP",
(SELECT sum(blocks*block_size)/1024/1024  from v$backup_redolog
WHERE first_time > sysdate-1) "ARC_BACKUP"
from v$backup_datafile
WHERE completion_time > sysdate - 1
group by trunc(completion_time)
order by 1 DESC;
You can also use V$SESSION_LONGOPS to estimate when a job will complete for an RMAN backup or restore operation, because the total amount of work, the time the job began, and the amount of work left are known values.
SELECT sid, serial#, opname, ROUND(sofar/totalwork*100,2) AS pct_complete,
TO_CHAR(start_time,'dd-mon-yy hh24:mi') start_time, (sysdate-start_time)*24 hours_running,
((sysdate-start_time)*24*60)/(sofar/totalwork)-(sysdate-start_time)
*24*60 minutes_left, TO_CHAR((sysdate-start_time)/(sofar/totalwork)+start_time,'dd-mon-yy hh24:mi') est_comp_time
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork;                                              
From Oracle 10g onwards you can use these views to examine RMAN backups and operations:
v$rman_status
v$rman_output
v$rman_backup_job_details
Script to Report RMAN full, incremental and archivelog backups:
select SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;
Script to Report RMAN full and incremental backups without archivelog backups:
select SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
where input_type='DB INCR'
order by session_key;
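For the V$RMAN_STATUS view listed above, a quick status check (a sketch, not from the original post) could be:

```sql
-- Recent RMAN operations and their outcome
SELECT operation, status,
       TO_CHAR(start_time, 'dd/mm/yyyy HH24:MI:SS') start_time,
       TO_CHAR(end_time,   'dd/mm/yyyy HH24:MI:SS') end_time
FROM   v$rman_status
WHERE  start_time > SYSDATE - 1
ORDER  BY start_time DESC;
```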

Saturday, 14 February 2015

Tuning PGA_AGGREGATE_TARGET

Oracle 9i introduced the PGA_AGGREGATE_TARGET parameter to replace the multiple per-session parameters of earlier versions, such as SORT_AREA_SIZE and HASH_AREA_SIZE.
The PGA is a private memory region that contains the data and control information for a server process. Oracle Database reads and writes information in the PGA on behalf of the server process. The RAM allocated through PGA_AGGREGATE_TARGET is used by Oracle connections to maintain connection-specific information (e.g., cursor states) and to sort SQL result sets.
PGA_AGGREGATE_TARGET allows Oracle 9i to perform far faster than earlier versions because the memory is allocated only for the duration of the work that needs it and is immediately freed for use by other connected Oracle tasks.
Initial or Default Setting of PGA_AGGREGATE_TARGET
By default the Oracle database sets PGA_AGGREGATE_TARGET to 20% of the SGA size. However, this default may be too low for your database. In that case, run a representative workload on the instance, monitor the PGA performance statistics, and then, if required, tune PGA_AGGREGATE_TARGET using the Oracle PGA advice views.
For example, if the system has 4 GB of total memory and the OS needs about 20% of it (0.8 GB), then 3.2 GB remains available for the SGA and PGA combined.
For an OLTP system, PGA_AGGREGATE_TARGET = 20% of the remaining memory, i.e. 3.2 * 0.2 = 0.64 GB (about 655 MB). For a DSS system, PGA_AGGREGATE_TARGET = 70% of the remaining memory, i.e. 3.2 * 0.7 = 2.24 GB (about 2294 MB).
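As a quick sanity check of the arithmetic above (the figures are the ones from the text, not a sizing rule of their own):

```sql
-- 4 GB total, 20% reserved for the OS, then 20% (OLTP) or 70% (DSS) of the remainder
SELECT ROUND((4 - 4 * 0.2) * 0.2, 2) AS oltp_pga_gb,  -- 0.64
       ROUND((4 - 4 * 0.2) * 0.7, 2) AS dss_pga_gb    -- 2.24
FROM dual;
```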
How to Tune PGA_AGG_TARGET
If the amount of data processed fits within an optimal work area (a work area is optimal when it is large enough to accommodate the input data), the whole operation is performed in memory and the cache hit percentage is close to 100%.
If the amount of data is larger than the optimal size, the input is divided into smaller pieces; some pieces are processed in memory while the rest spill to the temporary tablespace, so an extra pass is performed over all or some of the input data. The corresponding work area size is called "one-pass". When the available work area is even smaller than the one-pass size, multiple passes over the input data are required, dramatically increasing response time and reducing the cache hit percentage.
Generally in an OLTP system the input data is small, so most operations run in optimal mode, whereas in a DSS system the input data is very large and therefore needs more PGA for healthy performance.
Before tuning the PGA you need to monitor the performance of automatic PGA memory. Several dynamic performance views are available for this: they show the total RAM utilization for every memory region in the database, and the statistics in V$SYSSTAT together with the V$PGASTAT and V$PGA_TARGET_ADVICE views help the DBA determine whether PGA_AGGREGATE_TARGET is set correctly and what its best setting would be.
Oracle allows a single process to use up to 5% of PGA_AGGREGATE_TARGET, and parallel operations to consume up to 30% of the PGA RAM pool.
SQL> Select * from v$SYSSTAT;
SQL> select  name, cnt, decode(total, 0, 0, round(cnt*100/total)) percentage
       from (select name, value cnt, (sum(value) over()) total
        from v$sysstat where name like 'workarea exec%'
       );
Profile                                    Count Percentage
----------------------------------- ------------ ----------
workarea executions - optimal              2,258        100
workarea executions - onepass                  0          0
workarea executions - multipass                0          0

Select name, value/1024/1024 VALUE_MB
from   V$PGASTAT
where name in ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse')
union
select name, value
from   V$PGASTAT
where name in ('over allocation count');

NAME                           VALUE_MB
------------------------------ ----------------
aggregate PGA target parameter 471
over allocation count          0
total PGA allocated            33.8583984375
total PGA inuse                26.32421875

Optimal executions are performed entirely within the allocated memory areas; this is the most favorable type of execution. Sometimes an operation is too big for its allocated memory area, and part of it spills to disk. If only one pass over the disk was needed, the execution is counted in the one-pass statistic; if more than one pass was needed, it is counted in the multi-pass statistic. Ideally all executions should be optimal, with the one-pass and multi-pass statistics at zero.
Multi-pass executions indicate a RAM shortage, and you should always allocate enough RAM so that at least 95% of connected tasks can acquire their memory optimally. The DBA therefore needs to increase this parameter whenever the multi-pass count is greater than zero, and can consider reducing it when optimal executions stay at 100 percent.
The following query provides overall PGA usage statistics for all Oracle 9i connections:
SQL> select name, value from v$pgastat;
NAME                                                   VALUE    
------------------------------------------------------ ----------
aggregate PGA target parameter                        284164096
aggregate PGA auto target                             235938816
global memory bound                                   14208000
total PGA inuse                                       25638912
total PGA allocated                                   35466240
maximum PGA allocated                                 184498176
total freeable PGA memory                             0
PGA memory freed back to OS                           0
total PGA used for auto workareas                     3637248
maximum PGA used for auto workareas                   15818752
total PGA used for manual workareas                   0
maximum PGA used for manual workareas                 0
over allocation count                                 0
bytes processed                                       18302224384
extra bytes read/written                              4149905408
cache hit percentage                                  81.51
In the above display from v$pgastat we see the following statistics.
·         Aggregate PGA auto target - The total amount of memory available for Oracle 9i connections, derived from the INIT.ORA parameter PGA_AGGREGATE_TARGET.
·         Global memory bound - The maximum size of a single work area. Oracle recommends increasing PGA_AGGREGATE_TARGET whenever this statistic drops below 1 MB.
·         Total PGA allocated - The current amount of PGA memory allocated across the database. You should see this value approach PGA_AGGREGATE_TARGET as usage increases.
·         Over allocation count - If greater than 0, PGA_AGGREGATE_TARGET is too small to meet even the minimum PGA memory needs and must be increased.
·         Extra bytes read/written - Ideally small; if it is large, you should increase PGA_AGGREGATE_TARGET.
·         Cache hit percentage - A value of 100% means all work areas executed since instance startup used an optimal amount of PGA memory. When the system is not running optimally, one or more extra passes are performed over the input data, which reduces the cache hit percentage.
1> If available memory >= optimal memory for 100% PGA cache hit %, set PGA_AGGREGATE_TARGET = optimal memory as in PGA_TARGET_ADVICE.
2> If available memory < optimal memory for 100% PGA cache hit %, set PGA_AGGREGATE_TARGET = available memory and settle for a lower PGA cache hit %.
·         Total PGA used for auto workareas - Monitors RAM consumption for all connections running in automatic memory mode. Remember, not all internal processes use the automatic memory feature; for example, Java and PL/SQL allocate RAM that is not counted in this statistic. Hence, subtracting this value from the total PGA allocated shows the memory consumed by Java and PL/SQL.
Estimated PGA memory for optimal/one-pass
This statistic estimates how much memory would be required to satisfy all connected tasks' RAM demands in optimal mode. Remember, when Oracle 9i experiences a memory shortage, it falls back to one-pass or multi-pass operation. This statistic is critical for monitoring RAM consumption, and most Oracle DBAs will increase PGA_AGGREGATE_TARGET to this value.
SQL> SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
     ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
     ESTD_OVERALLOC_COUNT
     FROM   v$pga_target_advice;
 TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
---------- -------------- --------------------
        34             95                    0
        68            100                    0
       136            100                    0
       203            100                    0
       271            100                    0
       325            100                    0
       379            100                    0
       434            100                    0
       488            100                    0
       542            100                    0
       813            100                    0
Set PGA_AGGREGATE_TARGET to the lowest value that avoids any over-allocation, i.e. the smallest target with an over-allocation count of 0 and a 100% cache hit percentage (942 MB in the run used for this example).
SQL> SELECT LOW_OPTIMAL_SIZE/1024 low_kb, (HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
     estd_total_executions estd_tot_exe,
     estd_optimal_executions estd_opt_cnt,
     estd_onepass_executions estd_onepass_cnt,
     estd_multipasses_executions estd_mpass_cnt
     FROM v$pga_target_advice_histogram
     WHERE pga_target_factor = 2
     AND estd_total_executions != 0
     ORDER BY 1;
    LOW_KB    HIGH_KB ESTD_TOT_EXE ESTD_OPT_CNT ESTD_ONEPASS_CNT ESTD_MPASS_CNT
---------- ---------- ------------ ------------ ---------------- --------------
         8         16         1721         1721                0              0
        16         32           61           61                0              0
        64        128            6            6                0              0
       128        256           22           22                0              0
       512       1024           19           19                0              0
      1024       2048           61           61                0              0
      2048       4096            2            2                0              0
You can use the content of the above views to determine how key PGA statistics would be affected if you changed the value of PGA_AGGREGATE_TARGET.
To change the PGA_AGGREGATE_TARGET value:
Alter system set pga_aggregate_target=987758592;
After increasing the PGA, check the statistics again:
select * from v$pgastat;
Some more dynamic views and columns for checking PGA performance:
v$process (PGA_USED_MEM, PGA_ALLOC_MEM, PGA_FREEABLE_MEM and PGA_MAX_MEM columns)
V$SQL_WORKAREA_HISTOGRAM;
V$SQL_WORKAREA_ACTIVE;
V$SQL_WORKAREA;
V$PGA_TARGET_ADVICE;
V$PGA_TARGET_ADVICE_HISTOGRAM
V$SYSSTAT
V$SESSTAT

Sunday, 2 November 2014

How to Find Database Optimal and Needed Undo Size

Since Oracle 9i, rollback segments have been renamed undo segments; traditionally, undo was stored in rollback segments until a COMMIT or ROLLBACK was issued.
Automatic undo management lets the DBA decide how long undo information should be retained after commit. The larger your undo tablespace, the more undo you can hold for long-running DML operations (preventing "snapshot too old" errors on long-running queries).
You can choose to allocate a specific size for the UNDO tablespace and then set the optimal UNDO_RETENTION according to that size. This is especially useful when disk space is limited and you do not want to allocate more space than the required undo size.

OPTIMAL UNDO RETENTION = ACTUAL UNDO SIZE / (DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC)

Find Actual Undo Size:
SELECT SUM(a.bytes) "UNDO_SIZE"
  FROM v$datafile a, v$tablespace b, dba_tablespaces c
 WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
   AND b.name = c.tablespace_name AND a.ts# = b.ts#;

UNDO_SIZE
-----------
  7948206080

Find Undo Blocks per Second:
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
      "UNDO_BLOCK_PER_SEC"
  FROM v$undostat;
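Putting the formula and the two queries together, the optimal retention (in seconds) can be computed in one statement, for example:

```sql
-- Optimal UNDO_RETENTION = actual undo size / (block size * undo blocks per second)
SELECT d.undo_size / (TO_NUMBER(p.value) * e.undo_block_per_sec) "OPTIMAL_UNDO_RETENTION"
FROM (
       SELECT SUM(a.bytes) undo_size
       FROM   v$datafile a, v$tablespace b, dba_tablespaces c
       WHERE  c.contents = 'UNDO' AND c.status = 'ONLINE'
         AND  b.name = c.tablespace_name AND a.ts# = b.ts#
     ) d,
     v$parameter p,
     (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
       FROM   v$undostat
     ) e
WHERE p.name = 'db_block_size';
```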

Thursday, 30 October 2014

ORA-00392 While duplicating the database

We received the ORA-00392 error while trying to open a database after restoring and recovering it as part of duplicating the database. When opening with RESETLOGS, the following error appeared:
SQL> ALTER DATABASE OPEN resetlogs 
ORA-00392: log 2 OF thread 1 IS being cleared, operation NOT allowed

SQL>ALTER DATABASE clear logfile GROUP 2;
SQL>ALTER DATABASE OPEN resetlogs;

Since the paths on the source and destination differ, Oracle tries to re-create the file under the old path, which does not exist on the target server. The solution is to clear the busy online log and later rename the files to their current location. But when I tried clearing the log, I got a different error:

SQL>ALTER DATABASE clear logfile GROUP 2;
ALTER DATABASE clear logfile GROUP 2
ORA-00344: unable TO re-create online log 'D:\ORACLE\ORADATA\LOG02B.DBF'
ORA-27040: file CREATE error, unable TO CREATE file
OSD-04002: unable TO OPEN file
O/S-Error: (OS 3) The system cannot find the path specified.
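Once the path problem is identified, the log files can be re-pointed to a location that exists on the target server before clearing them. A sketch (the E:\ path below is illustrative, not from the original environment):

```sql
-- Re-point the online log to a valid path on the target server (database in MOUNT mode)
ALTER DATABASE RENAME FILE 'D:\ORACLE\ORADATA\LOG02B.DBF'
                        TO 'E:\ORACLE\ORADATA\LOG02B.DBF';  -- assumed target path

-- Then clear the group and open with RESETLOGS
ALTER DATABASE CLEAR LOGFILE GROUP 2;
ALTER DATABASE OPEN RESETLOGS;
```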

Tuesday, 28 October 2014

Tablespace Management using Toad

You can view all of your tablespace information using the Toad tablespace screen from Database –> Administer –> Tablespaces.
Each tab shows different detailed information across all the tablespaces in the database. If you open this screen from the Toad DBA module, you also see the "Space History" and "IO History" tabs along with Files, Free Space, Objects and Fragmentation.
With these tabs you can perform space monitoring and planning: they let you check space and IO usage graphically over time, and from this the DBA can also estimate object sizes.
From the “Alter Tablespace” tab below you can:
§         Modify the tablespace and datafile size.
§         Add new datafile to the tablespace.

Wednesday, 22 October 2014

Alter System ‘SWITCH LOGFILE’ vs ‘ARCHIVE LOG CURRENT’

Both commands force a log switch, but they do it in different ways. ARCHIVE LOG CURRENT waits for the writing to complete, which can take several minutes for large redo log files, whereas SWITCH LOGFILE is fast because it does not wait for the archiver process to finish writing the online redo log.
The ARCHIVE LOG CURRENT command is safer because it waits for the OS to acknowledge that the redo log has been successfully written, which makes it best practice for production RMAN backup scripts.
ARCHIVE LOG CURRENT also lets you specify the thread to archive (if you do not pass the thread parameter, Oracle archives all full online redo logs), whereas SWITCH LOGFILE archives only the current thread.
Note: In a RAC environment, ARCHIVE LOG CURRENT switches the log on all nodes (instances), whereas SWITCH LOGFILE only switches the log file on the instance where you issue the command.
SWITCH LOGFILE is not the correct method after a hot backup
In order to be sure that all changes generated during a hot backup are available for recovery, Oracle recommends archiving the current online log after the last tablespace in a hot backup is taken out of backup mode. A similar recommendation applies after a Recovery Manager online database backup.
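The contrast above can be summarized with the two statements themselves:

```sql
-- Safer: forces a switch and waits until the archiver finishes writing
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- Faster: switches immediately, without waiting for the archiver
ALTER SYSTEM SWITCH LOGFILE;
```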

Sunday, 19 October 2014

Analyzing Materialized Views for Fast Refresh

This article describes how materialized views can be analyzed and optimized to ensure they can be FAST REFRESHed; in this case the refresh time was reduced from more than 5 hours to less than 1 hour.
The tools used to perform the FAST REFRESH analysis are:
         DBMS_MVIEW
         MV_CAPABILITIES_TABLE
The Oracle-provided DBMS_MVIEW.EXPLAIN_MVIEW procedure is used to analyze the FAST REFRESH capabilities of each existing materialized view and to write the results to MV_CAPABILITIES_TABLE.
MV_CAPABILITIES_TABLE is created in the owner's schema by running the following Oracle 9i script:
SQL> @$ORACLE_HOME/rdbms/admin/utlxmv.sql

Steps:
        Created the necessary materialized view logs for each of the master tables making sure to include all columns referenced in any of the materialized view aggregations along with the ROWID, SEQUENCE, and INCLUDING NEW VALUES clauses based on the Oracle 9i documentation FAST REFRESH requirements.
        Created materialized views including the REFRESH FORCE ON DEMAND option, in order to prepare for the detailed analysis of the respective FAST REFRESH
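A typical analysis run (a sketch; the materialized view name MY_MVIEW is a placeholder) looks like:

```sql
-- Populate MV_CAPABILITIES_TABLE for a given materialized view
EXEC DBMS_MVIEW.EXPLAIN_MVIEW('MY_MVIEW');

-- Check which refresh capabilities are possible, and why not when they are not
SELECT capability_name, possible, related_text, msgtxt
FROM   mv_capabilities_table
WHERE  mvname = 'MY_MVIEW'
ORDER  BY seq;
```

Rows where POSSIBLE = 'N' for the REFRESH_FAST capabilities include a message explaining exactly which requirement (log column, clause, or aggregate) is missing.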