
Thursday, 24 October 2019

Working on Different Incident Tickets for Oracle Database

Working on Agent/Listener/DB Down Incident Ticket
Login to the server and DB where the actual issue is. You can find the server name and DB name from the ticket.
ps -ef | grep emagent --- Check the agent availability. If it is running, fine; otherwise you need to start the agent.
echo $AGENT_HOME     --- Find the agent location; in general the agent location is alongside the ORACLE_HOME location.
emctl start agent    --- Run this command from the agent location.
ps -ef | grep tns    --- Check the listener availability. You can also use the LSNRCTL> status command to check the listener. If it is running, fine; otherwise you need to start the listener.
LSNRCTL> start       --- Run this command from the LSNRCTL prompt.
tnsping dbname       --- Check that the DB service is reachable. Log in to the database and check the instance status and database open_mode. If the instance is OPEN and the database is in READ WRITE mode, fine; otherwise you need to start up the database.
uptime               --- Run this from the oracle user to check the server uptime.
last reboot          --- Run this from the oracle user to check the server reboot time.
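The instance-status and open-mode checks described above can be done with this minimal sketch:

```sql
-- Check instance status and database open mode before deciding to start anything
select instance_name, status from v$instance;
select name, open_mode from v$database;
-- Only if the instance is down, from the correct environment:
-- SQL> startup
```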

Working on Metrics Level High Incident Ticket
Login to the server and DB where the actual issue is. Check the CURRENT_UTILIZATION of processes; if it is less than the threshold value, fine (calculate CURRENT_UTILIZATION as a percentage of LIMIT_VALUE); otherwise you need to increase the limit_value.
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('sessions', 'processes');
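The percentage calculation described above can be done directly in SQL. Note that LIMIT_VALUE is a string and can be 'UNLIMITED', so this sketch filters that case out:

```sql
select resource_name, current_utilization, limit_value,
       round(100 * current_utilization / to_number(limit_value), 2) pct_used
from   v$resource_limit
where  resource_name in ('sessions', 'processes')
and    limit_value <> 'UNLIMITED';
```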

Note: When the number of sessions increases (activity is high) the metrics level rises suddenly; after the sessions finish their jobs the metrics level goes down again, so perform some more checks before increasing the limit_value.

select s.username, s.status, count(s.program) count
from v$session s, v$process p
where p.addr = s.paddr and s.status = 'INACTIVE'
group by s.username, s.status
order by 3 desc;

select count(s.status) "INACTIVE SESSIONS > 1 HOUR"
from v$session s, v$process p
where p.addr = s.paddr and s.last_call_et > 3600 and s.status = 'INACTIVE';
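If the checks confirm the limit genuinely needs raising, a hedged sketch (the values are examples; PROCESSES requires a restart to take effect):

```sql
alter system set processes = 500 scope = spfile;   -- example value
alter system set sessions  = 772 scope = spfile;   -- typically derived from processes
-- bounce the database for the new limits to take effect
```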

Working on Tablespace full Incident Ticket
Login to the server and database and check the tablespace usage; if usage is above the threshold (> 85%) then add a datafile to the tablespace. Before adding a datafile you must check the filesystem/DG usage, especially if the datafile is autoextend.

select tablespace_name, used_percent from dba_tablespace_usage_metrics where used_percent > 85;

select df.tablespace_name "Tablespace", totalusedspace "Used MB",
(df.totalspace - tu.totalusedspace) "Free MB",df.totalspace "Total MB",
round(100 * ( (df.totalspace - tu.totalusedspace)/ df.totalspace)) "Pct. Free"
from (select tablespace_name,round(sum(bytes) / 1048576) TotalSpace
from dba_data_files group by tablespace_name) df,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name from dba_segments
group by tablespace_name) tu where df.tablespace_name = tu.tablespace_name ;

ALTER TABLESPACE xyz ADD DATAFILE '/u01/oradata/xyz/xyz01.dbf' SIZE 30G;   ### in case of normal file system.
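A variant with autoextend, so the file grows as needed up to a cap (sizes and path are examples):

```sql
ALTER TABLESPACE xyz ADD DATAFILE '/u01/oradata/xyz/xyz02.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 30G;
```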

The below query returns the datafile size and location for a particular tablespace.
COLUMN tablespace_name FORMAT A20
COLUMN file_name FORMAT A40
select substr(tablespace_name,1,15) as Tablespace, substr(file_id,1,5) as File_id, substr(file_name,1,40) as File_Name, 
bytes/1024/1024/1024 as Size_GB from dba_data_files 
where tablespace_name = 'XYZ';

ALTER TABLESPACE xxxxx ADD DATAFILE SIZE 30G;   ### in case of OMF file system.

### in case of Big file tablespace 
You need to resize the tablespace. First check the tablespace size information with the above query, then calculate the resize value.
Resize value = Total_Size + expected size to increase


ALTER TABLESPACE xyz RESIZE <resize_value>; ### resize value must be greater than Total_Size
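For example, if the above query shows the bigfile tablespace is currently 100G and you want 30G more:

```sql
ALTER TABLESPACE xyz RESIZE 130G;   -- 100G current + 30G expected increase
```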

### in case of Container and Pluggable databases
Find the name of the pluggable database; you must switch into that pluggable database before using the ALTER TABLESPACE command.
ALTER TABLESPACE xxxxx ADD DATAFILE SIZE 30G;   ### in case of OMF file system.
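Switching into the pluggable database first, as described above (PDB1 is a placeholder name):

```sql
show pdbs
alter session set container = PDB1;   -- placeholder PDB name
ALTER TABLESPACE xxxxx ADD DATAFILE SIZE 30G;
```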

### in case of Temp tablespace full alert
Check the TEMP tablespace usage details. If the TEMP tablespace usage is above the threshold value, then perform some more checks to confirm who is using the temp segments.

select a.tablespace_name tablespace, d.TEMP_TOTAL_MB,
sum(a.used_blocks * d.block_size) / 1024 / 1024 TEMP_USED_MB,
d.TEMP_TOTAL_MB - sum(a.used_blocks * d.block_size) / 1024 / 1024 TEMP_FREE_MB,
round(100 * ((d.TEMP_TOTAL_MB - sum(a.used_blocks * d.block_size) / 1024 / 1024) / d.TEMP_TOTAL_MB)) "Pct. Free"
from v$sort_segment a,
(select b.name, c.block_size, sum(c.bytes) / 1024 / 1024 TEMP_TOTAL_MB
from v$tablespace b, v$tempfile c where b.ts# = c.ts#
group by b.name, c.block_size) d
where a.tablespace_name = d.name
group by a.tablespace_name, d.TEMP_TOTAL_MB;

###Sort space used by the session
select S.username, S.osuser, P.spid, s.program, SUM(T.blocks)*TBS.block_size/1024/1024 mb_used, T.tablespace, Count(*) sort_ops
from v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
where T.session_addr = S.saddr and S.paddr = P.addr and T.tablespace = TBS.tablespace_name
Group by S.username, S.osuser, P.spid, S.program, TBS.block_size, T.tablespace order by P.spid;

Note: Temp space is used by sessions while they run; once a session finishes its job the temp space is automatically reclaimed.

Archive Area Full or FLASH Drive Full Incident Ticket
Login to the server and validate the FLASH mount point usage. If the usage is below the threshold (85%), fine; otherwise you need to take an archive backup with the DELETE INPUT option, which removes already backed-up archive logs from the archive area.

rman target / catalog catalog/catalog@rman
RMAN> backup archivelog all delete input;
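If the logs are already backed up, an alternative cleanup deletes only archivelogs that have at least one existing backup (a hedged sketch):

```
RMAN> delete noprompt archivelog all backed up 1 times to device type disk;
```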

Login to the server where the actual issue is. Validate the /u01 mount point; if usage is below the threshold (< 85%), fine; otherwise you need to perform some cleanup, preferably moving older logs to another location.

In case of an ASM file system, you need to add an ASM disk to the diskgroup to increase the diskgroup size. Reach out to the storage team (or create a ticket for them) to provide LUNs of the same size, then create ASM disks of the same size. Finally add these disks to the diskgroup.
asmcmd> lsdg   -- To check the diskgroup size information
asmcmd> lsdsk  -- To list the ASM disks. The same check can be performed from the ASM instance prompt.
/etc/init.d/oracleasm listdisks   ### to list all ASM disks present on the server.

SQL> SELECT name, free_mb, total_mb, free_mb/total_mb*100 as percentage FROM v$asm_diskgroup;
SQL> select mount_status,header_status,mode_status,state,total_mb,free_mb,name,path,label from v$asm_disk where group_number = 1 order by name;

SQL> connect / as sysasm

If this were a RAC installation, the disks would only be stamped by one node. The other nodes would just scan for the disks to make sure the new disk is accessible to all nodes.
/etc/init.d/oracleasm scandisks

ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA_0001';  ### to add new disk to diskgroup

ALTER DISKGROUP DATA RESIZE ALL SIZE 100G;  ### to resize the existing disk size.
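After adding or resizing disks, a rebalance operation runs in the background; it can be monitored with:

```sql
select group_number, operation, state, power, est_minutes
from   v$asm_operation;
```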

Working on DB User Unlock Incident
Login to the server and database where the actual issue is. Check the current status (and last lock date) of the user, then unlock the user.
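The status check mentioned above (SCOTT is a placeholder username):

```sql
select username, account_status, lock_date, expiry_date
from   dba_users
where  username = 'SCOTT';
```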

ALTER USER username ACCOUNT unlock;

In case you need to reset the user's password, capture the user's existing password hash first (on older releases it is visible in the PASSWORD column of the DBA_USERS view).

ALTER USER username IDENTIFIED BY VALUES '*****'; ### paste the same hash captured above if you do not want to change the password.
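On 11g and later the PASSWORD column of DBA_USERS is empty; one way to capture the existing hash is via DBMS_METADATA, whose output includes an IDENTIFIED BY VALUES clause that can be replayed (SCOTT is a placeholder):

```sql
set long 10000
select dbms_metadata.get_ddl('USER', 'SCOTT') from dual;
-- The returned CREATE USER ... IDENTIFIED BY VALUES '<hash>' clause
-- can be re-applied with ALTER USER ... IDENTIFIED BY VALUES '<hash>';
```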

Working on Flash recovery area full incident
Login to the server and DB where the actual issue is and check the flash area usage.

select name, ceil( space_limit / 1024 / 1024) size_mb,
ceil( space_used / 1024 / 1024) used_mb,decode( nvl( space_used, 0),0, 0, ceil ( ( space_used / space_limit) * 100) ) pct_used
from v$recovery_file_dest order by name desc;

Perform some more checks to confirm what is using the flash recovery area before increasing the flash recovery area size.

select * from v$flash_recovery_area_usage;
show parameter db_recovery_file_dest_size;
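If the checks confirm the recovery area genuinely needs more room (the size here is an example):

```sql
alter system set db_recovery_file_dest_size = 100G scope = both;
```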


Working on database block corruption incident
Logon to the server and database where the actual issue is and check for block corruption.


SELECT DISTINCT owner, segment_name FROM v$database_block_corruption dbc
JOIN dba_extents e ON dbc.file# = e.file_id
AND dbc.block# BETWEEN e.block_id AND e.block_id + e.blocks - 1;
Note: Block corruption can be repaired automatically from a standby database (automatic block repair); otherwise you need to perform recovery of that particular block.

Recover datafile block <corrupt_block>;
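The full RMAN syntax needs both the file and block number, which come straight from v$database_block_corruption (the numbers here are placeholders):

```
RMAN> recover datafile 5 block 127;
RMAN> recover corruption list;   -- repairs every block listed in v$database_block_corruption
```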

Thursday, 20 December 2018

Step by Step Oracle Database Upgrade/Migration

Required steps for an Oracle 10g to 11g upgrade:
1. Install the Oracle 11g home: we cannot upgrade the existing Oracle home, since 11g is not a patchset. We have to install a separate 11g Oracle home in parallel to the 10g Oracle home.
2. Run the Pre-Upgrade utility: copy utlu111i.sql from the 11g home's rdbms/admin folder to a temporary location accessible to the 10g home, then open the 10g database and run this utility from the SQL prompt to check for compatibility issues. If you find any critical warning messages, fix those issues before running the upgrade process.
3. Correct the version of the existing timezone files: check the version of the existing time zone; if it is less than 4 then you must apply the patch for version 4 timezone files. Check the MetaLink note for more details.
If there is no official patchset for the version you currently have, use the utltzuv2.sql and timezdiff.csv combination to identify the correct patchset.
SQL> select * from v$timezone_file;
Once you identify the correct patchset, apply it normally, then bounce the database and check the version again. Run the utlu111i.sql utility again and check for any remaining critical issues.
4. Correct stale optimizer statistics: to remove this issue, connect as the SYS user and gather statistics.
Finally run the utility again to ensure you don't have any critical issue before running the upgrade.
5. Start the upgrade: from the 11g home, connect to the database as sys as sysdba. You will be connected to an idle instance.
SQL> startup upgrade;
6. Post-upgrade tasks: once the upgrade finishes, it will shut down the database automatically.
Login again as sysdba and start up in normal mode.
SQL> startup;
SQL> @utlu111s.sql   -- display the results of the upgrade
SQL> @catuppst.sql   -- run from the rdbms/admin directory to perform post-upgrade actions
SQL> @utlrp.sql      -- recompile invalid objects

Upgrading Oracle Database from 11g to 12c
Run SQL scripts and utilities from command line to upgrade a database to the new Oracle Database release.
Current ORACLE_HOME=/u01/app/oracle/product/
New ORACLE_HOME=/u02/app/oracle/product/12.2/db_1
1. Run the preupgrade tool (using java against preupgrade.jar shipped in the 12c home):
$ java -jar /u02/app/oracle/product/12.2/db_1/rdbms/admin/preupgrade.jar FILE DIR /u01/
2. Run preupgrade_fixup.sql from the 11g home:
SQL> @preupgrade_fixup.sql
A> Run /u02/app/oracle/product/12.2/db_1/rdbms/admin/emremove.sql to remove the EM console.
B> Run the suggested PL/SQL block to refresh the materialized views.
Increase the number of processes and purge the recycle bin:
SQL> alter system set processes = 300 scope = spfile;
3. Start the database in upgrade mode from the 12c home:
export ORACLE_HOME=/u02/app/oracle/product/12.2/db_1
$ sqlplus / as sysdba
Connected to an idle instance
SQL> startup nomount;
SQL> alter database mount;
SQL> alter database open upgrade;
4.    Run the upgrade itself (the parallel upgrade utility, catctl.pl with catupgrd.sql).
5.    Run "postupgrade_fixups.sql"
To fix the time zone version (e.g. from 14 to 26) we need to download the DBMS_DST patch from MetaLink and apply it to the new Oracle home.
SQL> select version from v$timezone_file;
6.    Execute utlu122s.sql:
7.    Execute catuppst.sql:
8.    Execute utlrp.sql script to compile  invalid objects.
9.    Set the COMPATIBLE parameter value to 12.2.0 and start the listener from the 12.2 home.
Migrating a Database Across Platforms
The export/import utilities are the traditional method to migrate a database from one platform to another.
1.    Query the source database to find the tablespace names, then take a full export of the source database.
SQL> select tablespace_name from dba_tablespaces;
       $ exp system/manager FULL=y FILE=myfullexp.dmp
2.    Move the dump file to the target server and, before importing the dump file, create the required tablespaces.
$ imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp

Tuesday, 17 July 2018

DB File Sequential Read Wait/ DB File Scattered Read

A sequential read operation reads data into contiguous memory. Single-block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads. The actual object being waited on can be determined from the P1, P2, P3 info in v$session_wait.
Select * from   v$session_event
where  event = 'db file sequential read'
order by time_waited;

Select segment_name, partition_name, segment_type, tablespace_name
from   dba_extents a, v$session_wait b
where  b.p2 between a.block_id and (a.block_id + a.blocks - 1)
and    a.file_id  = b.p1
and    b.event    = 'db file sequential read';

Select a.sid, a.serial#, a.username, a.osuser, b.sql_text
from   v$session a, v$sqltext b
where  a.sql_hash_value = b.hash_value
and    a.sql_address    = b.address
and    a.sid in (select sid from v$session_wait
                 where event = 'db file sequential read')
order by a.sid, b.hash_value, b.piece;
Note: P1 = file#, P2 = block#, P3 = blocks (should be 1)
Generally the entire database shows some of this wait event, since I/O for index scans is normal. But if the average seconds waited is greater than 0, you must tune index I/O.
To reduce this wait event, follow the points below:
1.      Tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time.
2.      Distribute the indexes across different file systems to reduce contention for I/O, and tune the physical devices by spreading the data across different disks.
3.      Use of faster disks reduces unnecessary I/O wait; increasing db_block_buffers (a larger buffer cache) can sometimes help.
DB File Scattered Read:
This indicates disk contention on full table scans. Rarely, data from full table scans fits into a contiguous buffer area; those waits then show up as sequential reads instead of scattered reads. However, a scattered read will never read just one block (P3 is always >= 2).
The DBA should be concerned with the average I/O time and with sessions that spend much time on this event. The average multiblock I/O should not exceed 1/100 sec.
SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ"
FROM sys.v_$system_event a, sys.v_$system_event b
WHERE a.event = 'db file sequential read' AND b.event = 'db file scattered read';
SEQ READ   SCAT READ
---------- ----------
       .74        1.6

Select * from   v$session_event
where  event = 'db file scattered read'
order by time_waited;

Select a.sid, b.name, a.value
from   v$sesstat a, v$statname b
where  a.statistic# = b.statistic#
and    a.value <> 0 and b.name = 'table scan blocks gotten'
order by 3, 1;
If the average I/O wait time for the db file scattered read event is acceptable, but the event indicates waits in a certain session, then this is an application issue.
In this case the DBA needs to determine which objects are being read the most from the P1 and P2 values, check the relevant SQL statement, explain plan for that SQL, and perform SQL tuning. The motive is to reduce both the logical and physical I/O calls: link to check SQL or Application Tuning.
If an application that has been running fine suddenly starts showing the db file scattered read event, then this could be an index issue. One or more indexes may have been dropped or become unusable.
To determine which indexes have been dropped, the DBA can compare the development, test, and production databases. The ALTER TABLE MOVE command marks all indexes associated with the table as unusable. Certain partitioning operations can also cause indexes to be marked unusable. This includes adding a new partition or coalescing partitions in a hash-partitioned table, dropping a partition from a partitioned table or global partitioned index, modifying partition attributes, and merging, moving, splitting or truncating table partitions. A direct load operation that fails also leaves indexes in an unusable state. This can easily be fixed by rebuilding the indexes.
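The unusable indexes mentioned above can be found and rebuilt with this sketch (OWNER and INDEX_NAME are placeholders):

```sql
select owner, index_name, status
from   dba_indexes
where  status = 'UNUSABLE';

alter index owner.index_name rebuild online;
```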

Monday, 16 July 2018

Oracle 12c new features?

Oracle Database 12c (c for cloud), a multitenant database management system, introduces many important new capabilities in many areas: database consolidation, query optimization, performance tuning, high availability, partitioning, and backup and recovery.
Pluggable Databases:
In Oracle 12c, in a pluggable database environment, we can create a single container database and plug multiple databases into this container. All these databases then share the exact same Oracle server/background processes and memory, unlike previous versions where each database had its own background processes and shared memory. This helps in database consolidation and reduces the overhead of managing multiple disparate databases.
Consolidation is an important business strategy to reduce the cost of infrastructure and operational expense. In many production database servers, a big portion of CPU cycles go unused. By consolidating many databases into fewer database servers, both the hardware and operational staff can be more effectively utilized.
Oracle's new pluggable database feature reduces the risk of consolidation because the DBA can easily plug or unplug an existing database to or from a container database. There is no need to change any code in the application.
It is also easy to unplug a database and convert the pluggable database to a traditional database if required. In addition, you can back up and recover pluggable databases independently of the container database; you can also perform a point-in-time recovery of a pluggable database. Further, Resource Manager can be used to control resources consumed by a pluggable database.
Optimizer features:
Oracle 12c introduces a few useful SQL Optimizer features, and most of these are automatically enabled.
It is not uncommon for Optimizer to choose an inefficient execution plan due to incorrect cardinality estimates, invalid statistics, or even stale statistics. This can have dire results. A SQL statement estimated to run for a few seconds might take hours to execute if the chosen execution plan is not optimal.
  1. Identity columns, which are auto-incremented at insertion time:
SQL> create table emp (emp_id number generated as identity, emp_name varchar2(30));
SQL> create table emp (emp_id number generated as identity (start with 1 increment by 1 cache 20 noorder), emp_name varchar2(30));
  2. Increased size limit for VARCHAR2, NVARCHAR2, and RAW datatypes to 32K (from 4K).
  3. DEFAULT ON NULL (the default value is inserted into the null column).
  4. Session-private statistics for GTTs (table and index statistics are held private for each session).
  5. UNDO for temporary tables can now be managed in TEMP, rather than the regular UNDO tablespace.
  6. Global temporary tables will not generate UNDO; this reduces the contents of regular UNDO, allowing better flashback operation.
  7. In Oracle 12c we are able to make a column invisible:
SQL> create table ss (column-name column-type invisible);
SQL> alter table ss1 modify column-name invisible;
SQL> alter table ss1 modify column-name visible;
Administration enhancements:
  1. In Oracle 12c, no need to shut down the database for changing the archive log mode.
  2. Data Pump now allows turning off redo generation for the import (only) operation.
  3. We can now create duplicate indexes using the same columns, in the same order, as an existing index.
  4. The TRUNCATE command is enhanced with a CASCADE option, which also truncates dependent child records.
  5. Oracle 12c allows using DDL inside SQL statements (PL/SQL inside SQL).
  6. Moving and renaming a datafile is now ONLINE; no need to take the datafile offline.
PL/SQL enhancements:
  1. A role can now be granted to a code unit (PL/SQL Unit Security). Thus one can determine at a very fine grain which roles can access a specific unit of code.
  2. We can declare PL/SQL functions in the WITH clause of a SELECT statement.
  3. MapReduce can be run from PL/SQL directly in the database.
  4. We can use Boolean values in dynamic PL/SQL. Still no Booleans as database types.
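The PL/SQL-function-in-WITH-clause feature mentioned above looks like this minimal sketch (names are placeholders; run in SQL*Plus, terminated with a slash):

```sql
WITH
  FUNCTION double_it(n NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN 2 * n;   -- inline PL/SQL function, 12c and later
  END;
SELECT double_it(21) AS result
FROM   dual
/
```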
ASM and RAC enhancements:
  1. Introduction of Flex ASM: with this feature, database instances can use remote ASM instances. Previously, if ASM failed on a node the entire node became useless, whereas in 12c the ability to get the extent map from a remote ASM instance keeps the node usable.
  2. Introduction of Flex Cluster, with a lightweight cluster stack: leaf nodes and traditional-stack hub nodes (the application layer is the typical example of leaf nodes); leaf nodes don't require any network heartbeat.
  3. Oracle ASM disk scrubbing (checks for logical data corruption and repairs it automatically).
Backup and recovery enhancements:
  1. Recovery from human error: accidental DROP TABLE, TRUNCATE TABLE, or wrong scripts.
  2. RMAN table Point-In-Time Recovery (a combination of Data Pump and RMAN; an auxiliary instance is required).
  3. Recover or copy files from standby databases.
  4. Restore & recover individual tables from an RMAN backup.
  5. Incremental recovery is much faster, with many of the tasks removed. You can automate the use of incremental backups to bring the standby DB in sync.
  6. Possibility to import from older export files.
Partitioning enhancements:
  1. Multiple partition operations in a single DDL.
  2. Online move of a partition (without DBMS_REDEFINITION).
  3. Interval-reference partitioning: we can create a reference partition (to relate several tables with the same partitions) as a sub-partition to the interval type.
  4. CASCADE for TRUNCATE and EXCHANGE partition.
  5. Asynchronous global index maintenance for DROP and TRUNCATE. The command returns instantly, but index cleanup happens later.
Manageability enhancements:
  1. Centralized patching: we can test patches on database copies, rolling patches out centrally once testing is complete.
  2. Compression: automated compression with heat map. Optimization can be run on live databases with no disruption. Data optimization monitors data usage; with a policy, old data is archived and hot data is compressed for faster access. Inactive data can be more aggressively compressed or archived, greatly reducing storage costs.
Data Guard:
1.      Oracle Database 12c introduces a new redo transport method (Fast Sync redo transport) in which the standby acknowledges the primary as soon as the redo is received, without waiting for it to be written to disk on the standby.
2.      A new type of redo destination ("Far Sync Standby", composed only of a control file, standby redo logs, and some disk space for archive logs) receives redo from the primary and forwards it to the standby database. Failover & switchover operations are totally transparent, as the "Far Sync Standby" cannot be used as the target.
3.      Data Guard Broker commands have been extended. The "validate database" command checks whether the database is ready for role transition or not.
4.      Data Guard Broker now supports cascaded standbys.
5.      Global temporary tables can now be used on an Active Data Guard standby database.
New Views/Packages in Oracle 12c Release
dba_pdbs, v$pdbs, cdb_data_files, dbms_pdb, dbms_qopatch
Using DBUA to upgrade an existing database is the simplest and quickest method.
For step by step details follow the below link: Upgrade Oracle Database 11g to 12c

Wednesday, 11 July 2018

RMAN Different errors and their Solution

Backup Fails with Invalid RECID Error: RMAN-20035, RMAN-6038
When you attempt a backup and receive the following error messages:
RMAN-3014: Implicit resync of recovery catalog failed
RMAN-6038: Recovery catalog package detected an error
RMAN-20035: Invalid high RECID error
It indicates the control file and the recovery catalog are not synchronized. RMAN detects that the control file currently in use is older than the control file previously used to resynchronize.
This is due to one of these scenarios: you restored a backup controlfile through a non-Oracle mechanism and then opened the database with the RESETLOGS option, or you made a copy of the control file through an operating system utility and attempted the restore on a new system through RMAN. You did not use the catalog, so RMAN does not get any information regarding this process. The recovery catalog indicates that the highest RECID is 100, but the control file indicates that the highest RECID is 90. The control

Friday, 10 November 2017

Oracle DBA interview Question with Answer (All in One Doc)

1. General DB Maintenance
2. Backup and Recovery
3. Flashback Technology
4. Dataguard
5. Upgrade/Migration/Patches
6. Performance Tuning
7. ASM
8. Linux Operating System
9. PL/SQL
General DB Maintenance Question/Answer:
When we run a trace and tkprof on a query, we see timing information for which three phases?
Parse -> Execute -> Fetch
Which parameter is used in the TNS connect identifier to specify the number of concurrent connection requests?
What does the AFFIRM/NOAFFIRM parameter specify?
AFFIRM specifies that the redo transport service acknowledges after writing to the standby (SYNC), whereas NOAFFIRM specifies acknowledgement before writing to the standby (ASYNC).
After an upgrade, which script is used to recompile invalid objects?
utlrp.sql (which in turn calls utlprp.sql)
Too many cursors present in the library cache caused waits; which parameters need to be increased?
open_cursors, shared_pool_size
When do we use RECOVER DATABASE USING BACKUP CONTROLFILE?
When recovering with a restored (older) controlfile, to synchronize the datafiles to the controlfile.
What is the use of the CONSISTENT=Y and DIRECT=Y parameters in export?
CONSISTENT=Y takes a consistent snapshot of the data while exporting a table. Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL command-processing layer (the evaluating buffer), so it should be faster. The default value is N.
What do the COMPRESS, SHOW, and SQLFILE parameters do during export/import?
If you use COMPRESS during export, the import will put the entire data of each table in a single initial extent.
If you use SHOW=Y during import, it will read the entire dumpfile and confirm backup validity; even if you don't know the FROMUSER of the export, you can use this SHOW=Y option with import to check the FROMUSER.
If you use the SQLFILE parameter (which writes out all the DDL commands Import would have executed) with the import utility, you can find out whether the dumpfile is corrupted, because the utility reads the entire dumpfile and reports the status.
Can we import 11g dumpfile into 10g using datapump? If so, is it also  possible between 10g and 9i?
Yes we can import from 11g to 10g using VERSION option. This is not possible between 10g and 9i as datapump is not there in 9i
What do the KEEP_MASTER and METRICS parameters of Data Pump do?
KEEP_MASTER and METRICS are undocumented parameters of EXPDP/IMPDP. METRICS reports the time it took to process the objects, and KEEP_MASTER prevents the Data Pump master table from being deleted after an export/import job completes.
What happens when we fire a SQL statement in Oracle?
First Oracle checks the syntax and semantics, then searches the library cache for an existing parsed version of the statement. If one is found, it is reused (soft parse); otherwise a new execution plan is created (hard parse). During execution, if the required blocks are already in the database buffer cache they are returned directly; otherwise the server process fetches them from the datafiles into the buffer cache, and the result is finally sent to the client.
What are the differences between latches and locks?
1. Latch requests are not queued in order (whoever grabs the latch first gets it), whereas lock requests are queued and granted in order.
2. Locks can create deadlocks, whereas latches never create deadlocks; they are handled by Oracle internally. Latches protect internal SGA structures, whereas locks work at the transaction level.
3. Latches have only two modes, WAIT or NOWAIT, whereas locks have six different modes. Lock types include DML locks (table and row level, DBA_DML_LOCKS) and DDL locks (schema and structure level, DBA_DDL_LOCKS), with blockers visible in DBA_BLOCKERS, further categorized into many more.
What are the differences between LMTS and DMTS?
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces (the dictionary tables are created in the SYSTEM tablespace), and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.
What is the difference between a regular and an index-organized table?
The traditional or regular table is based on a heap structure, where data is stored in unordered format, whereas an IOT is based on a B-tree structure and data is stored in order with the help of the primary key. An IOT is useful where access is commonly via the primary key in the WHERE clause. If an IOT is queried without the primary key, query performance degrades.
What is table partitioning, and what are its uses and benefits?
Partitioning divides a big table into different named storage sections to improve query performance, as a query accesses only the relevant partition instead of the whole range of the big table. Partitioning is based on a partition key. The three partition types are: Range/Hash/List partitioning.
Apart from tables, an index can also be partitioned using the above partition methods, either LOCAL or GLOBAL.
Range partition:
How to deal online redo log file corruption?
1. Recover when only one redo log file corrupted?
If your database is open and you lose or corrupt a logfile, first try to shut down the database normally, not shutdown abort. If you lose only one member of a multiplexed group, you just need to drop and re-create that member. If you lose the only (or last remaining) file of a redo log group, you need to open the database with the RESETLOGS option; opening with RESETLOGS will re-create your online redo log files.
2. Recover when all the online redo log file corrupted?
When you lose all member of redo log group then the step of maintenance depends on group ‘STATUS’ and database status Archivelog/NoArchivelog.
If the affected redo log group has a status of INACTIVE, then it is no longer required for crash recovery, so issue either CLEAR LOGFILE or re-create the group manually.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- archivelog mode and the group is not yet archived
ALTER DATABASE CLEAR LOGFILE GROUP 3; -- noarchivelog mode or the group is already archived
If the affected redo log group has a status of ACTIVE, then it is still required for crash recovery. Issue the command ALTER SYSTEM CHECKPOINT; if successful, follow the steps for INACTIVE; if it fails, you need to perform incomplete recovery up to the previous log file and open the database with the RESETLOGS option.
If the affected redo log group is CURRENT, then LGWR stops writing; you have to perform incomplete recovery up to the last logfile and open the database with the RESETLOGS option, and if your database is in noarchivelog mode, perform a complete restore from the last cold backup.
Note: When an online redolog has status UNUSED/STALE, it means it has never been fully written; it is a newly created logfile.
What is the function of the shared pool in the SGA?
The shared pool is the most important area of the SGA. It controls almost all sub-areas of the SGA. A shortage of shared pool may result in high library cache reloads and shared pool latch contention errors. The two major components of the shared pool are the library cache and the dictionary cache.
The library cache contains current SQL execution plan information. It also holds PL/SQL procedures and triggers.
The dictionary cache holds environmental information, which includes referential integrity, table definitions, indexing information, and other metadata.
Backup & Recovery Question/Answer:
Can the target database be the catalog database?
No, the recovery catalog cannot be in the target database itself, because during restore and recovery the target database must be in MOUNT stage, and while the database is not open we cannot access the catalog information stored in it.
What is the use of the large pool, and in which cases do you need to set it?
You need to set the large pool if you are using MTS (Multi-Threaded Server) or RMAN backups. The large pool prevents RMAN and MTS from competing with other subsystems for the same memory (it is a dedicated allotment for this work). RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, Oracle allocates backup buffers from local process memory rather than shared memory, and the large pool is not used.
How do you catalog user-managed backups in RMAN, or make use of an obsolete backup?
By using the CATALOG command:
RMAN> CATALOG START WITH '/tmp/KEEP_UNTIL_30APRIL2010';
This searches for all files matching the pattern at that destination and asks for confirmation before cataloging them. Alternatively, you can directly change a backup set's KEEP UNTIL TIME with an RMAN command to make an obsolete backup usable:
RMAN> CHANGE BACKUPSET 3916 KEEP UNTIL TIME "TO_DATE('01-MAY-2010','DD-MON-YYYY')" NOLOGS;
This is important when a backup has become obsolete due to the RMAN retention policy, or when we have already restored prior to that backup.
What is the difference between using a recovery catalog and the control file?
When a new incarnation happens, the old backup information in the control file is lost, whereas it is preserved in the recovery catalog. A recovery catalog can also store scripts, and being central, it can hold information for many databases. This is why you must take a fresh backup after a new incarnation when relying on the control file alone.
What is the benefit of Block Media Recovery and how do you do it?
Without block media recovery, if a single block is corrupted you must take the datafile offline, then restore it from backup and apply archive logs, so the entire datafile is unavailable until the process finishes. With block media recovery the datafile stays online; only the particular block needing recovery is unavailable. You can find details of corrupted blocks in the V$DATABASE_BLOCK_CORRUPTION view as well as in the alert log and trace files.
Connect to the target database with RMAN in MOUNT stage:
RMAN> Recover datafile 8 block 13;
RMAN> Recover CORRUPTION LIST;  -- recover all the corrupted blocks at once
With the Oracle 11g Active Data Guard feature (physical standby), where real-time query is possible, corruption repair can be performed automatically: the primary database searches for good copies of the block on the standby and, if found, repairs the block with no impact on the query that encountered the corruption.
By default RMAN searches for the good block first in a real-time physical standby database, then in the flashback logs, then in full and incremental RMAN backups.
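The corrupted blocks mentioned above can be listed before and after the recovery; a minimal sketch:

```sql
-- Blocks reported corrupt by RMAN backup/validate operations;
-- entries are cleared once the blocks are recovered
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;
```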
What are the advantages of Data Pump over traditional export?
1. Data Pump supports parallelism: it can write multiple dump files instead of a single sequential dump.
2. Data can be exported from a remote database by using a database link.
3. Consistent exports with FLASHBACK_SCN and FLASHBACK_TIME are supported.
4. You can attach to and detach from a job, and monitor jobs remotely.
5. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.
6. An explicit DB version can be specified so that only supported objects are exported.
7. Data can be imported from one DB to another without writing a dump file, using NETWORK_LINK.
8. During impdp we can change the target file name, schema and tablespace using the REMAP_ parameters (REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLESPACE).
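Points 7 and 8 can be sketched as follows; the directory object, database link, user and schema names here are hypothetical examples, not from the original post:

```shell
# Point 7: import directly over a database link, no dump file written
# (dp_dir and src_link are assumed, pre-created objects)
impdp system/password NETWORK_LINK=src_link SCHEMAS=hr \
  DIRECTORY=dp_dir LOGFILE=net_imp.log

# Point 8: remap schema and tablespace during import
impdp system/password DIRECTORY=dp_dir DUMPFILE=hr.dmp \
  REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:users_test
```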
Why is Data Pump faster than traditional export, and what can you do to increase Data Pump performance?
Data Pump works in block mode; exp works in byte mode.
Data Pump performs parallel execution.
Data Pump uses the direct path API and the network link feature.
Data Pump exports/imports/accesses files on the server rather than the client, via a directory object grant.
Data Pump is self-tuning; the old tuning parameters BUFFER and RECORDLENGTH are no longer needed.
Certain initialization parameters should be set, or set high, to increase Data Pump performance and parallelism.
Note: set a reasonable STREAMS_POOL_SIZE based on database size if SGA_MAX_SIZE is not set; if SGA_MAX_SIZE is set, a reasonable amount is picked up automatically.
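The parallelism described above can be sketched as follows; the directory object, file names and sizes are illustrative examples only:

```shell
# Write dump files in parallel; %U is substituted with generated
# file numbers so each worker gets its own file
expdp system/password DIRECTORY=dp_dir DUMPFILE=full_%U.dmp \
  PARALLEL=4 FULL=Y LOGFILE=full_exp.log
```

In SQL*Plus, the streams pool note above would look something like `ALTER SYSTEM SET streams_pool_size = 128M;` (the 128M value is an example, not a recommendation from the post).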
Flashback Question/Answer
Flashback Archive Features in oracle 11g
Flashback archiving provides extended undo-based recovery over a year, or even a lifetime, as determined by the retention period and destination size.
Limitations or restrictions on the Flashback Drop feature?
1. The recycle bin feature works only for non-SYSTEM, locally managed tablespaces.
2. When you drop a table, all objects associated with it go to the recycle bin, and flashback generally reverses this; however, under space pressure an associated index may be purged from the recycle bin before the table. Flashback cannot restore referential constraints or materialized view logs.
3. Tables with fine-grained auditing active are not protected by the recycle bin, and neither are partitioned index-organized tables.
Limitations or restrictions on the Flashback Database feature?
1. Flashback cannot be used to repair corrupt or shrunk datafiles. If you flash the database back over the period when a datafile was dropped, only the datafile entry is recorded into the controlfile.
2. If the controlfile is restored or re-created, you cannot flash back over the point in time when that happened.
3. You cannot flash back over a NOLOGGING operation; attempting to do so results in block corruption after the flashback. It is therefore strongly recommended to take a backup after any NOLOGGING operation.
What are the advantages of Flashback Database over Flashback Table?
1. Flashback Database works through all DDL operations, whereas Flashback Table does not work across structural changes such as adding/dropping a column, adding/dropping constraints, or truncating the table. During a Flashback Table operation a DML exclusive lock is taken on the table, preventing any other operation on it while rows are replaced with the old rows.
2. Flashback Database moves the entire database back in time; constraints are not an issue, whereas they are with Flashback Table.
3. Flashback Table cannot be used on a standby database.
How should I set up the database to improve flashback performance?
Use a fast file system (e.g. ASM) for your flash recovery area, and configure enough disk space for the file system holding it to support your maximum retention target.
If the storage system used to hold the flash recovery area does not have non-volatile RAM, try to configure the file system on top of striped storage volumes with a relatively small stripe size such as 128K. Each write to the flashback logs is then spread across multiple spindles, improving performance.
For large production databases set LOG_BUFFER to at least 8 MB, so the database allocates maximum memory (typically 16 MB) for writing flashback database logs.
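A minimal configuration sketch of the recommendations above; the destination, sizes and retention value are examples only, and in releases before 11gR2 enabling flashback requires MOUNT state:

```sql
-- Set a flash recovery area with enough space for the retention target
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
ALTER SYSTEM SET db_recovery_file_dest = '+FRA';
ALTER SYSTEM SET db_flashback_retention_target = 1440;  -- minutes (1 day)

-- Enable flashback database (database must be in archivelog mode)
ALTER DATABASE FLASHBACK ON;

-- LOG_BUFFER is static, so the change takes effect after restart
ALTER SYSTEM SET log_buffer = 8388608 SCOPE = SPFILE;
```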
Performance Tuning Question/Answer:
If you get complaints that the database is slow, what are your first steps to check the performance issue?
For performance-related issues, a DBA's first step is to check all sessions connected to the database to see exactly what each session is doing, because unexpected activity can lead to object locking that slows the database down.
Database performance is directly related to network load, data volume and the SQL being run.
1. Check which events have been waiting a long time. If you find object locking, killing that session (DML locks only) may resolve the issue.
To check user sessions and wait events, use a join query on the views V$SESSION and V$SESSION_WAIT.
2. After locking, the other major factor affecting database performance is disk I/O contention: when a session retrieves data from datafiles on disk into the buffer cache, it has to wait until the disk sends the data, and this wait time must be minimized.
You can check these wait events as 'db file sequential read' (single-block read, P3=1, usually the result of an index scan) and 'db file scattered read' (multi-block read, P3>=2, usually the result of a full table scan) using a join query on V$SYSTEM_EVENT:
SQL> SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ"
  2    FROM sys.v_$system_event a, sys.v_$system_event b
  3   WHERE a.event = 'db file sequential read' AND b.event = 'db file scattered read';

  SEQ READ  SCAT READ
---------- ----------
       .74        1.6
When you find sessions waiting for I/O to complete, you need to reduce that waiting time to improve DB performance. To do so, perform SQL tuning to reduce the number of blocks retrieved by each SQL statement.
How to perform SQL Tuning?
1. First identify the high-load SQL statements. You can find them in the AWR report's Top 5 SQL section (queries taking high CPU with a low execution ratio). Once you decide to tune a particular statement, run the tuning optimizer, which decides the access method, the join method and the join order for the query.
2. Check whether the particular query is doing a full table scan; if no index is applied, use the proper indexing technique for the table. If an index is already applied but a full table scan still occurs, the table may have the wrong indexing technique; try rebuilding the index. If this does not resolve the issue, continue with the next steps.
3. Enable tracing before running your queries, then check the trace file using the tkprof-generated output. According to the explain plan, check the elapsed time for each query, and tune accordingly.
To see the output of the plan table, first create it by running @$ORACLE_HOME/rdbms/admin/utlxplan.sql, then create a public synonym for it:
SQL> create public synonym plan_table for sys.plan_table;
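Once the plan table exists, generating and displaying an execution plan can be sketched as follows (the query itself reuses the emp_tab example from later in this post):

```sql
-- Populate the plan table for one statement
EXPLAIN PLAN FOR
  SELECT ename FROM emp_tab WHERE ename = 'SMITH';

-- Display the plan in readable form
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```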
4. Run the SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql), providing the SQL_ID as found in the V$SESSION view. You can grant a particular schema the rights to use the SQL Tuning Advisor:
        GRANT ADVISOR TO hr;
        GRANT ADMINISTER SQL TUNING SET TO hr;
SQL Tuning Advisor will check your SQL structure and statistics.
SQL Tuning Advisor suggests indexes that might be very useful.
SQL Tuning Advisor suggests query rewrites.
SQL Tuning Advisor suggests SQL profiles (automatically reported each time).
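The advisor can also be driven directly from PL/SQL via DBMS_SQLTUNE; a hedged sketch, where the sql_id value is a hypothetical placeholder you would take from V$SESSION or AWR:

```sql
DECLARE
  l_task VARCHAR2(128);
BEGIN
  -- Create and run a tuning task for one statement
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abc123xyz0001');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
  DBMS_OUTPUT.PUT_LINE('Task: ' || l_task);
END;
/

-- Review the findings and recommendations (task name from the output above)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_123') FROM dual;
```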
5. In Oracle 11g, the SQL Access Advisor can additionally be used to suggest new indexes and materialized views.
6. Additionally, run the TOP command in Linux to check CPU usage, and run VMSTAT, SAR and PRSTAT to get more information on CPU and memory usage and possible blocking.
7. Optimizer Statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Up-to-date optimizer statistics can greatly improve the performance of SQL statements.
8. A SQL Profile contains object-level (auxiliary) statistics that help the optimizer select the optimal execution plan for a particular SQL statement. By correcting the statistics, it gives the Tuning Advisor a basis for generating the most relevant SQL plan.
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE – accept the corrected plan from SQL*Plus.
DBMS_SQLTUNE.ALTER_SQL_PROFILE – modify/replace an existing profile from SQL*Plus.
DBMS_SQLTUNE.DROP_SQL_PROFILE – drop an existing profile.
Profile Type: REGULAR-PROFILE, PX-PROFILE (with change to parallel exec)

9. SQL Plan Baselines are a new feature in Oracle Database 11g (previous releases used stored outlines and SQL Profiles) that prevents repeatedly used SQL statements from regressing when a newly generated execution plan is less effective than the one originally in the library cache. Whenever the optimizer generates a new plan, it goes into the plan history; only after the plan is evolved (verified) and found better than the previous plan does it move into the baseline. You can also inspect the plan history manually and accept a better plan yourself: the ALTER_SQL_PLAN_BASELINE function of DBMS_SPM can change the status of plans in the history to ACCEPTED, which moves them into the SQL baseline, and the EVOLVE_SQL_PLAN_BASELINE function of DBMS_SPM can be used to see which plans have been evolved. There is also a facility to fix a specific plan so that it will not change automatically, even if a better execution plan becomes available. The baseline view is DBA_SQL_PLAN_BASELINES.
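Inspecting and evolving baselines can be sketched as follows; the sql_handle value is a hypothetical placeholder you would read from the first query:

```sql
-- Inspect captured plans and their acceptance status
SELECT sql_handle, plan_name, enabled, accepted
  FROM dba_sql_plan_baselines;

-- Evolve (verify and possibly accept) unaccepted plans for one statement;
-- the function returns a report of what was evolved
SELECT DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_abc123')
  FROM dual;
```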
10. SQL Performance Analyzer allows you to test and to analyze the effects of changes on the execution performance of SQL contained in a SQL Tuning Set.

Which factors should be considered when creating an index on a table? How do you select a column for an index?
1. Whether to create an index depends on the size of the table and the volume of data. If the table is large and reports retrieve only a few rows (less than about 15% of the total), you should create an index on that table.
2. Primary and unique keys automatically have indexes; concentrate on creating indexes on foreign key columns, where indexing can improve the performance of joins across multiple tables.
3. The best candidate columns for indexing are those whose values are relatively unique (through which you can access the complete table record), columns with a wide range of values (good for a regular index) or a small range of values (good for a bitmap index), or columns that contain many NULLs where queries select only the rows having a value.
CREATE INDEX emp_ename ON emp_tab(ename);
A column is not suitable for indexing if it has many NULLs and you never search for the non-NULL values, or if it is a LONG or LONG RAW column.
CAUTION: a single index entry cannot exceed one-half of the available space in a data block.
More indexes on a table create more overhead: every DML operation on the table must update all of its indexes, so a single transaction has to work on multiple index segments as well as the table simultaneously, which hurts DML performance.
What are Different Types of Index? Is creating index online possible?
1. B-tree (binary tree) index, 2. Bitmap index, 3. Function-based index, 4. Implicit or explicit index, 5. Domain index.
You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index building is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
An index can be considered for rebuilding under any of these circumstances.
First get an idea of the current state of the index by using the ANALYZE INDEX ... VALIDATE STRUCTURE and ANALYZE INDEX ... COMPUTE STATISTICS commands, then check:
* The percentage of deleted rows exceeds 30% of the total rows (depending on table length).
* The HEIGHT is greater than 4, since an index of height 3 can already address millions of rows.
* The number of rows in the index (LF_ROWS) is significantly smaller than LF_BLKS; this can indicate a large number of deletes, suggesting the index should be rebuilt.
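The checks above can be sketched as follows, reusing the emp_ename index created earlier in this post:

```sql
-- Validate the index and populate the session-private INDEX_STATS view
ANALYZE INDEX emp_ename VALIDATE STRUCTURE;

-- Check the rebuild indicators discussed above
SELECT height, lf_rows, lf_blks, del_lf_rows,
       ROUND(del_lf_rows / GREATEST(lf_rows, 1) * 100, 1) AS pct_deleted
  FROM index_stats;

-- Rebuild online if the thresholds are exceeded
ALTER INDEX emp_ename REBUILD ONLINE;
```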
Differentiate the use of a bitmap index and a B-tree index?
Bitmap indexes are preferred in data warehousing environments where cardinality is low, i.e. the column has many repeated or duplicate values. A bitmap index can index NULL values.
B-tree indexes are preferred in OLTP environments where cardinality is high, i.e. the column has many distinct values. A B-tree index cannot index NULL values.
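As a quick illustration of the cardinality rule above (the empno and gender columns are hypothetical examples on the emp_tab table used earlier):

```sql
-- B-tree index: high-cardinality column, typical OLTP access
CREATE INDEX emp_empno_idx ON emp_tab(empno);

-- Bitmap index: low-cardinality column with few distinct values,
-- typical in a data warehouse
CREATE BITMAP INDEX emp_gender_bix ON emp_tab(gender);
```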
If you are getting high "buffer busy waits", how can you find the reason behind them?
"Buffer busy waits" means queries are waiting for blocks to be read into the DB cache, or the block is already in the cache but busy and the session must wait for it. It could be an undo block, data block or segment header wait.
Run the first query below to find the P1, P2 and P3 of a session causing buffer busy waits, then plug those values into the second query.
SQL> SELECT p1 "File #", p2 "Block #", p3 "Reason Code"
       FROM v$session_wait
      WHERE event = 'buffer busy waits';
SQL> SELECT owner, segment_name, segment_type
       FROM dba_extents
      WHERE file_id = &P1 AND &P2 BETWEEN block_id AND block_id + blocks - 1;
What are STATSPACK and AWR reports? Is there any difference? As a DBA, what should you look at in a STATSPACK or AWR report?
STATSPACK and AWR are performance tuning tools. AWR is a new feature from Oracle 10g onwards, whereas STATSPACK reports were common in earlier versions, though you can still use STATSPACK in 10g. The basic difference is that STATSPACK snapshot purging must be scheduled manually, while AWR snapshots are purged automatically by the MMON background process every night. AWR contains the DBA_HIST_ACTIVE_SESS_HISTORY view to store ASH statistics, whereas STATSPACK does not store ASH statistics.
You can run $ORACLE_HOME/rdbms/admin/spauto.sql to schedule STATSPACK snapshots (note that JOB_QUEUE_PROCESSES must be > 0), and run awrrpt to generate an AWR report for a standalone environment or awrgrpt for a RAC environment.
In general as a DBA following list of information you must check in STATSPACK/AWR report.
¦ Top 5 wait events (db file seq read, CPU Time, db file scattered read, log file sync, log buffer space)
¦ Load profile (DB CPU(per sec) < Core configuration and ratio of hard parse must be < parse)
¦ Instance efficiency hit ratios (%Non-Parse CPU nearer to 100%)
¦ Top 5 Time Foreground events (wait class is ‘concurrency’ then problem if User IO, System IO then OK)
¦ Top 5 SQL (check query having low execution and high elapsed time or taking high CPU and low execution)
¦ Instance activity
¦ File I/O and segment statistics
¦ Memory allocation
¦ Buffer waits
¦ Latch waits

1. After getting the AWR report, first cross-check CPU time, DB time and elapsed time. DB time includes both CPU time and the wait time of user calls, whereas elapsed time is the wall-clock time taken to execute the statements.
2. Look at the Load Profile report: here DB CPU (per sec) must be less than the core count in the Host Configuration. If it is not, the system is CPU-bound and needs more CPU (check whether this happens for a fraction of the time or all the time). Also look at Parse and Hard Parse in this report: if the ratio of hard parses to parses is high, look at cursor sharing and bind variable usage at the application level.
3. Look at the Instance Efficiency report: here you have to check '%Non-Parse CPU'; if this value is near 100%, most CPU resources are used for operations other than parsing, which is good for database health.
4. Look at the Top 5 Timed Foreground Events: check the wait class. If it is User I/O or System I/O, that is OK; if it is Concurrency, there is a serious problem. Then look at Time(s) and Avg Wait Time(s): if Time(s) is high but Avg Wait Time(s) is low, you can usually ignore it; if both are high, further investigation is needed (possibly log file switch or checkpoint incomplete).
5. Look at the Time Model Statistics report: a detailed report of system resource consumption, ordered by Time(s) and % of DB Time.
6. Operating System Statistics report.
7. SQL ordered by Elapsed Time: in this report look for queries with a low execution count and high elapsed time, and investigate them; also look for queries using the most CPU time with the fewest executions.
What is the difference between db file sequential read and db file scattered read?
A db file sequential read is associated with index reads, whereas a db file scattered read has to do with full table scans. The sequential read reads blocks into contiguous memory (a single block at a time); the scattered read fetches multiple blocks and scatters them into the buffer cache.

Dataguard Question/Answer
What are Benefits of Data Guard?
Using the Data Guard feature in your environment gives the following benefits:
high availability, data protection, offloading backup operations to the standby, automatic gap detection and resolution on the standby database, and automatic role transitions using the Data Guard broker.
Oracle Data Guard standbys are classified into two types:
1. Physical standby (Redo Apply technology)
2. Logical standby (SQL Apply technology)
A physical standby is created as an exact copy of the primary database (matching schema) and is kept permanently in recoverable mode (mount stage, not open). Transactions on the primary are synchronized using the Redo Apply method, which continually applies redo data received from the primary to the standby. A physical standby can be opened for read-only work only while redo apply is stopped; from 11g onward, with the Active Data Guard option (extra purchase), you can simultaneously open the physical standby for read-only access and apply redo logs received from the primary in the meantime.
A logical standby does not have to match at the schema level; it uses the SQL Apply method to synchronize with the primary. The main advantage of a logical standby over a physical standby is that you can use it for reporting while SQL is being applied.
What different services are available in Oracle Data Guard?
1. Redo Transport Services: transmit redo from primary to standby (SYNC/ASYNC methods). They are responsible for managing redo log gaps caused by network failures, and they detect corrupted archive logs on the standby and automatically replace them from the primary.
2. Log Apply Services: apply the archived redo logs to the standby; the MRP process performs this task.
3. Role Transition Services: control changing the database role between primary and standby, including switchover, switchback and failover.
4. Data Guard Broker: controls the creation and monitoring of a Data Guard configuration through GUI and command line.
What protection modes are available in Oracle Data Guard, and how do you check and change them?
1. Maximum Performance (default): provides the highest level of data protection possible without affecting the performance of the primary database, allowing transactions to commit as soon as all redo data generated by them has been written to the online log.
2. Maximum Protection: ensures that no data loss will occur if the primary database fails. In this mode, the redo data needed to recover a transaction must be written to both the online redo log and at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down rather than continue processing transactions.
3. Maximum Availability: provides the highest level of data protection possible without compromising the availability of the primary database. Transactions do not commit until all redo data needed to recover them has been written to the online redo log and to at least one standby database.
Steps to create a physical standby database?
On the primary site:
1. Enable force logging: ALTER DATABASE FORCE LOGGING;
2. Create standby redo log groups on the primary server:
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/oradata/--/standby_redo01.log') SIZE 100M;
3. Set up the primary database pfile by changing the required parameters:
LOG_ARCHIVE_DEST_n – the primary database must be running in archivelog mode.
LOG_ARCHIVE_CONFIG – enables or disables the redo stream to the standby site.
LOG_FILE_NAME_CONVERT, DB_FILE_NAME_CONVERT – used when you have a different directory structure on the standby; they update the locations of files on the standby database.
STANDBY_FILE_MANAGEMENT – set this to AUTO so that when files are added to or dropped from the primary, the changes are automatically made on the standby.
              DB_UNIQUE_NAME, FAL_SERVER, FAL_CLIENT
4. Create a password file for the primary.
5. Create a controlfile for the standby database on the primary site:
ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'STAN.ctl';
6. Configure the listener and tnsnames on the primary database.
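The primary-side parameter changes in step 3 can be sketched as follows; the DB_UNIQUE_NAME values PRIM and STAN are hypothetical examples:

```sql
-- Declare both databases in the configuration
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(PRIM,STAN)';

-- Ship redo to the standby service
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=STAN ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STAN';

-- Propagate file add/drop automatically
ALTER SYSTEM SET standby_file_management = AUTO;

-- Gap resolution endpoints
ALTER SYSTEM SET fal_server = 'STAN';
ALTER SYSTEM SET fal_client = 'PRIM';
```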
On the standby site:
1. Copy the primary site's pfile and modify it for the standby name and locations.
2. Copy the password file from the primary and rename it.
3. Start the standby database in NOMOUNT using the modified pfile and create an spfile from it.
4. Use the standby controlfile created earlier to mount the database.
5. Enable the DG Broker to activate the primary/standby connection.
6. Finally, start redo log apply.
How do you enable/disable the log apply service for a standby?
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;  -- apply in background
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;  -- apply in real time
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;  -- start SQL Apply for a logical standby database
What are the different ways to manage a long gap on a standby database?
Due to network issues a gap is sometimes created between the primary and standby databases; once the network issue is resolved, the standby automatically starts applying redo logs to fill the gap, but when the gap is too long we can fill it through an RMAN incremental backup, in three ways:
1. Check the actual gap, perform an incremental backup, and use this backup to recover the standby site.
2. Create a standby controlfile on the primary and restore the standby using the newly created controlfile.
3. Register the missing archive logs.
Use the V$ARCHIVED_LOG view to find the gap (archived but not yet applied), then find the CURRENT_SCN, take an RMAN incremental backup from the primary site from that SCN, and apply it on the standby site with the RECOVER DATABASE NOREDO option.
Use the controlfile creation method only when the normal backup method fails to apply. Create a new standby controlfile on the primary site using BACKUP CURRENT CONTROLFILE FOR STANDBY; copy this controlfile to the standby site, start the standby in NOMOUNT using the pfile, restore the standby controlfile with RESTORE STANDBY CONTROLFILE FROM '/location of file'; and start MRP to test.
If the alert log still shows logs being transferred to the standby but not applied, register these logs with the standby database:
ALTER DATABASE REGISTER LOGFILE '/backup/temp/arc10.rc';
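A hedged sketch of the incremental-backup method described above; the SCN value and paths are illustrative only:

```sql
-- On the standby: find the SCN that recovery has reached
SQL> SELECT current_scn FROM v$database;

-- On the primary: take an incremental backup from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
        FORMAT '/backup/stby_%U.bkp';

-- Copy the pieces to the standby, then catalog and apply them
RMAN> CATALOG START WITH '/backup/';
RMAN> RECOVER DATABASE NOREDO;
```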
What is the Active Data Guard feature in Oracle 11g?
Prior to 11g you could not query a physical standby database while redo apply was running. 11g solves this: the Active Data Guard feature allows the physical standby to be open in read-only mode while media recovery continues through the Redo Apply method (you can watch the progress by querying CURRENT_SCN from the V$DATABASE view). Likewise, a logical standby can be open in read/write mode while media recovery proceeds through the SQL Apply method.
How can you find the backlog of a standby?
You can perform a join query on V$ARCHIVED_LOG and V$MANAGED_STANDBY.
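A minimal sketch of checking the backlog from the standby side:

```sql
-- Archive logs received on the standby but not yet applied
SELECT thread#, sequence#, applied
  FROM v$archived_log
 WHERE applied = 'NO'
 ORDER BY sequence#;

-- Apply progress per managed-recovery/transport process
SELECT process, status, thread#, sequence#, block#
  FROM v$managed_standby;
```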
What is the difference between normal Redo Apply and real-time apply?
Normally, once a log switch occurs on the primary, the archiver process transmits the archive to the standby destination, where the Remote File Server (RFS) writes the redo data into an archive log; finally the MRP service applies the archive to the standby database. This is the normal Redo Apply service.
With real-time apply, LGWR (or the archiver) on the primary writes redo directly to the standby; there is no need to wait for the current log to be archived. Once a transaction commits on the primary, the committed change is available on the standby in real time, even without a log switch.
What are the background processes for Data Guard?
On primary:
Log Writer (LGWR): collects redo information and writes the online redo logs. It can also create local archived redo logs and transmit online redo to the standby.
Archiver Process (ARCn): one or more archiver processes make copies of the online redo logs at the standby location.
Fetch Archive Log (FAL) server: services requests for archive logs from FAL clients running on the standby servers.
On standby:
Fetch Archive Log (FAL) client: pulls archive logs from the primary site and automatically initiates a transfer when it detects a gap.
Remote File Server (RFS): receives redo from the primary into the standby redo logs/archive logs.
Archiver (ARCn): archives the standby redo logs to be applied by the managed recovery process.
Managed Recovery Process (MRP): applies archived redo logs to the standby database.
Logical Standby Process (LSP): applies SQL to the standby database.
ASM/RAC Question/Answer
What is the use of ASM, or why is ASM preferred over a filesystem?
ASM provides striping and mirroring. You must put the Oracle CRD (control, redo, data) files and the spfile on ASM; in 12c you can put the Oracle password file in ASM as well. It facilitates online storage changes, and RMAN is the recommended way to back up ASM-based databases.
What are different types of striping in ASM & their differences?
Fine-grained striping uses a smaller stripe size, always writing 128 KB stripes to each disk; coarse-grained striping uses a larger size, writing stripes equal to the ASM allocation unit, which is 1 MB by default.
What is the default memory allocation for ASM? How do you back up ASM metadata?
The default memory allocation for ASM was 1 GB in Oracle 10g, 256 MB in Oracle 11g, and back to 1 GB in 12c.
You can back up the ASM metadata (disk group configuration) using the ASMCMD md_backup command.
How do you find the databases connected to ASM, and list the disk groups?
ASMCMD> lsct                                          -- list connected database clients
SQL> SELECT db_name FROM v$asm_client;
ASMCMD> lsdg                                          -- list disk groups
SQL> SELECT name, allocation_unit_size FROM v$asm_diskgroup;
What are the required parameters for ASM instance creation?
INSTANCE_TYPE = ASM — by default it is RDBMS.
DB_UNIQUE_NAME = +ASM1 — by default it is +ASM, but you need to alter it to run multiple ASM instances.
ASM_POWER_LIMIT = 11 — defines the maximum power for a rebalancing operation on ASM; the default is 1 and it can be increased up to 11. The higher the limit, the more resources are allocated, resulting in faster rebalancing. It is a dynamic parameter, useful when rebalancing data across disks.
ASM_DISKSTRING = '/u01/dev/sda1/c*' — specifies a value that limits the disks considered for discovery. Altering the default value may improve disk group mount time and the speed of adding a disk to a disk group.
ASM_DISKGROUPS = DG_DATA, DG_FRA — list of disk groups mounted at instance startup, where DG_DATA holds all the datafiles and DG_FRA holds the fast recovery area, including online redo logs and controlfiles. Typically the FRA disk group is sized at twice the DATA disk group, as it holds all the backups.
How do you create an spfile for an ASM instance?
Start the instance with the NOMOUNT option and create the spfile from the edited pfile. Once an ASM instance is present, its disk groups can be referenced by the file-creation parameters of a database instance to allow ASM file creation.
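A minimal sketch of the two steps above; the pfile path is an example, and the disk group names reuse DG_DATA/DG_FRA from the parameter list earlier:

```sql
-- In the ASM instance: create the spfile from the edited pfile
CREATE SPFILE FROM PFILE = '/tmp/init+ASM.ora';

-- In a database instance: point file creation at the disk groups
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
ALTER SYSTEM SET db_create_file_dest   = '+DG_DATA';
ALTER SYSTEM SET db_recovery_file_dest = '+DG_FRA';
```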
What are the disk group redundancy levels?
Normal redundancy: two-way mirroring, requiring 2 failure groups (plus, optionally, a quorum failure group to store voting files).
High redundancy: three-way mirroring, requiring three failure groups.
External redundancy: no ASM mirroring, for disks that are already protected by RAID at the OS/storage level.
A fragment of a CREATE DISKGROUP statement specifying failure groups:
  FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1, '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1, '/devices/diskb2' NAME diskb2;
We are going to migrate to new storage. How will we move an ASM database from storage A to storage B?
First prepare the disks at OS level so that both the new and the old storage are accessible to ASM, then simply add the new disks to the ASM disk group and drop the old disks. ASM performs an automatic rebalance whenever the storage changes; there is no need for manual I/O tuning.
ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3
         add disk '/dev/sddb1', '/dev/sddc1', '/dev/sddd1';
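The progress of the resulting rebalance can be watched from the ASM instance with a query along these lines (V$ASM_OPERATION is the standard view; the row disappears once the rebalance completes):

```sql
-- EST_MINUTES estimates the time remaining for the rebalance
SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;
```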
What are the required components of an Oracle RAC installation?
1. Oracle ASM shared disk to store OCR and voting disk files.
2. OCFS2 for Linux Clustered database
3. Certified Network File system (NFS)
4. Public IP: the node's standard TCP/IP address, used for client and administrative traffic.
5. Private IP: to manage RAC clusterware (cache fusion) traffic internally.
6. SCAN IP (Listener): all connections to the Oracle RAC database use the SCAN in their client connection string. With SCAN you do not have to change the client connection string even if the cluster configuration changes (a node is added or removed). A maximum of 3 SCAN listeners run in the cluster.
7. Virtual IP: an alternate IP assigned to each node, used to deliver node-failure notifications without waiting for the actual TCP timeout. Failover can thus happen quickly to another active node, which continues to process user requests.
Steps to configure RAC database:
1. Install the same OS level on each node or system.
2. Create the required groups and the oracle user account.
3. Create the required directory structure for the CRS and DB homes.
4. Configure kernel parameters (sysctl.conf) as per the installation doc and set shell limits for the oracle user account.
5. Edit the /etc/hosts file and specify public/private/virtual IPs for each node.
6. Create the required partitions for the OCR/voting disk and the ASM disk groups.
7. Install the OCFS2 and ASM RPMs and configure them on each node.
8. Install the clusterware binaries, then the Oracle binaries, on the first node.
9. Invoke netca to configure the listener.
10. Finally invoke DBCA to configure ASM to store the database CRD files and create the database.
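Step 5 might look like this for a two-node cluster (all host names and addresses below are made-up examples):

```text
# Illustrative /etc/hosts entries -- addresses and names are examples
192.168.1.101  node1        # public
192.168.1.102  node2        # public
10.0.0.101     node1-priv   # private interconnect
10.0.0.102     node2-priv   # private interconnect
192.168.1.111  node1-vip    # virtual IP
192.168.1.112  node2-vip    # virtual IP
```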
What are the structural changes in Oracle 11gR2?
1. A single Grid Infrastructure home holds both ASM and clusterware binaries (in 10g, the ASM binaries lived with the Oracle binaries).
2. OCR and voting disk can be stored on ASM.
3. SCAN listener.
4. srvctl can manage disk groups, the SCAN listener, Oracle home, ONS, VIP, OC4J.
5. GSD (still present, but disabled by default).
What are oracle RAC Services?
Cache Fusion: cache fusion is a technology that uses high-speed inter-process communication (IPC) to provide cache-to-cache transfer of data blocks between instances in a cluster. This eliminates disk I/O, which is very slow. For example, if instance A needs to access a data block that is owned/locked by another instance B, instance A requests the block from instance B and receives it through IPC; this concept is known as cache fusion.
Global Cache Service (GCS): this is the heart of cache fusion, maintaining data integrity in a RAC environment when more than one instance needs a particular data block. GCS fulfils this task as follows:
When instance A requests a block, GCS tracks that information. If it finds read/write contention (one instance wants to read while the other is busy updating the block), the holding instance B creates a CR (consistent read) image of the block in its own buffer cache and ships this CR image to the requesting instance A via IPC. In the case of write/write contention (both instances want to update the same block), instance B creates a PI (past image) of the block in its own buffer cache, makes the redo entries, and ships the block to the requesting instance A. The DBA_HIST_SEG_STAT view can be used to check the latest objects shipped.
Global Enqueue Service (GES): the GES performs concurrency control (more than one instance accessing the same resource) on dictionary cache locks, library cache locks and transactions. It handles lock types such as transaction locks, library cache locks, dictionary cache locks and table locks.
Global Resource Directory (GRD): to perform any operation on a data block we need to know the current state of that block. The GCS (LMSn + LMD) and GES keep track of the resources, their location and their status (each datafile and each cached block), and this information is recorded in the Global Resource Directory (GRD). Each instance maintains its own part of the GRD; whenever a block transfers out of a local cache, the GRD is updated.
What are the main components of Oracle RAC Clusterware?
OCR (Oracle Cluster Registry): the OCR manages Oracle clusterware information (nodes, CRS, CSS, GSD) and Oracle database configuration information (instance, services, database state).
OLR (Oracle Local Registry): the OLR resides on every node in the cluster and manages the clusterware configuration for that particular node. The purpose of the OLR alongside the OCR is to bootstrap startup on the local node: the OCR lives inside ASM, and ASM files only become available once Grid Infrastructure has started. The OLR makes it possible to locate the voting disk, which also holds the information needed to communicate with the other nodes.
Voting disk: the voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster for a node to be a member. If a node fails or gets separated from the majority, it is forcibly rebooted and, after rebooting, is added back to the surviving nodes of the cluster.
Why is the voting disk placed on a quorum disk, and what is the split-brain syndrome issue in a database cluster?
The voting disk is (optionally) placed on a quorum disk to avoid the possibility of split-brain syndrome. Split-brain syndrome is the situation where cluster nodes lose interconnect communication and partition into groups, each believing it is the surviving cluster, so two instances could end up updating the same block independently; it can happen only when cluster communication (and hence cache fusion) is not working properly. Voting disks are always configured in an odd-numbered series because the loss of more than half of your voting disks causes the entire cluster to fail; with an even number, node eviction could not decide which node to remove on failure. From 11gR2 you can store the OCR and voting disk on ASM, and if necessary you can dynamically add or replace voting disks after you complete the cluster installation process, without stopping the cluster.
ASM Backup:
You can use md_backup to back up ASM disk group metadata, and md_restore to re-create the disk group configuration in case ASM disk group storage is lost.
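As a command sketch (the backup path and disk group name are examples, and as with all scripts on this blog these are untested; check the md_backup/md_restore options for your ASMCMD version):

```text
ASMCMD> md_backup /tmp/dg_data.mdb -G DG_DATA
ASMCMD> md_restore /tmp/dg_data.mdb --full -G DG_DATA
```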
OCR and Votefile Backup:
Oracle Clusterware automatically creates an OCR backup (auto backup managed by crsd) every four hours, retaining at least 3 backups (backup00.ocr, day.ocr, week.ocr in the GRID home), but you can also take an OCR backup manually at any time using:
ocrconfig -manualbackup   --To take a manual backup of the OCR
ocrconfig -showbackup     --To list the available backups.
ocrdump -backupfile 'bak-full-location'   --To validate the backup before any restore.
ocrconfig -backuploc <new_location>       --To change the configured OCR backup location.
dd if='vote disk name' of='backup file name'   --To take a votefile backup
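The dd round trip itself can be exercised on an ordinary file first; a self-contained sketch (the paths below are throwaway examples, not real voting disks):

```shell
# Demonstrate the dd backup pattern on a stand-in file
printf 'vote-disk-contents' > /tmp/votedisk.img           # stand-in for the voting disk device
dd if=/tmp/votedisk.img of=/tmp/votedisk.bak 2>/dev/null  # block-copy backup
cmp -s /tmp/votedisk.img /tmp/votedisk.bak && echo "backup matches"
```

Restoring is the same dd command with the if= and of= arguments swapped.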
To check OCR and Vote disk Location:
crsctl query css votedisk
/etc/oracle/ocr.loc or use ocrcheck
ocrcheck   --To check the OCR corruption status (if any).
crsctl check crs/cluster   --To check CRS status on the local and remote nodes
Moving OCR and Votedisk:
Log in as the root user (the OCR is owned by root); for the voting disk, first stop CRS on all nodes.
ocrconfig -replace ocrmirror/ocr   --Adding/removing the OCR mirror and OCR file.
crsctl add/delete css votedisk     --Adding and removing a voting disk in the cluster.
To list all nodes in your cluster (run as root), or to check public/private/VIP info:
olsnodes -n -p -i
How to restore the OCR in a RAC environment?
1. Stop clusterware on all nodes and restart one node in exclusive mode to perform the restore. The -nocrs option ensures the crsd process and OCR do not start with the other nodes.
# crsctl stop crs   (use crsctl stop crs -f if it does not stop cleanly)
# crsctl start crs -excl -nocrs
Check if crsd is still running, and stop it if so: # crsctl stop resource ora.crsd -init
2. If you want to restore the OCR to an ASM disk group then you must check/activate/repair/create a disk group with the same name and mount it from the local node. If you are not able to mount that disk group locally then drop it and re-create it with the same name. Finally run the restore with the current backup.
# ocrconfig -restore file_name
3. Verify the integrity of the OCR and stop the exclusive-mode CRS:
# ocrcheck
# crsctl stop crs -f
4. Run the ocrconfig -repair -replace command on every other node where you did not run the restore. For example, if you restored on node1 of a 4-node cluster, run it on nodes 2, 3 and 4.
# ocrconfig -repair -replace
5. Finally start CRS on all nodes and verify with the CVU command:
# crsctl start crs
# cluvfy comp ocr -n all -verbose
Note: using ocrconfig -export / ocrconfig -import also enables you to restore the OCR.
Why does Oracle recommend using an OCR auto/manual backup to restore the OCR instead of export/import?
1. An OCR auto/manual backup is a consistent snapshot of the OCR, whereas an export is not.
2. Backups are created while the system is online, but you must shut down all nodes in the clusterware to take a consistent export.
3. You can inspect a backup using the OCRDUMP utility, whereas you cannot inspect the contents of an export.
4. You can list the backups using ocrconfig -showbackup, whereas you must keep track of each export yourself.
How to Restore Votedisks?
1. Shut down CRS on all nodes in the cluster:
crsctl stop crs
2. Locate the current location of the voting disks, then restore each voting disk using dd from a previous good backup taken with the same dd command:
crsctl query css votedisk
dd if=<backup of votedisk> of=<votedisk file>
3. Finally start CRS on all nodes:
crsctl start crs
How to add a node or instance in a RAC environment?
1. From the ORACLE_HOME/oui/bin location on node1, run the script:
$ ./ -silent "CLUSTER_NEW_NODES={node3}"
2. Run the ORACLE_HOME/ script on node3.
3. From an existing node run srvctl config database -d db_name, then create a new mount point:
4. mkdir -p ORACLE_HOME_NEW/"mount point name"
5. Finally run the cluster installer for the new node and update the clusterware inventory.
Alternatively, you can start DBCA and, from the instance-management page, choose Add Instance and follow the next steps.
How to identify the master node in RAC?
# /u1/app/../crsd>grep MASTER crsd.log | tail -1
cssd >grep -i  "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.
What is the difference between crsctl and srvctl?
crsctl manages cluster-related operations, like starting/enabling cluster services, whereas srvctl manages Oracle-related operations, like starting/stopping Oracle instances. From Oracle 11gR2, srvctl can also be used to manage network, VIP, disks, etc.
What are ONS/TAF/FAN/FCF in RAC?
ONS is a part of clusterware and is used to transfer messages between node and application tiers.
Fast Application Notification (FAN) allows the database to notify the client of any changes: node UP/DOWN or database UP/DOWN.
Transparent Application Failover (TAF) is a feature of Oracle Net Services which moves a session to a backup connection whenever a session fails.
FCF (Fast Connection Failover) is a feature of the Oracle client which receives notifications from FAN and processes them accordingly. It cleans up connections when a down event is received and adds new connections when an up event is received from FAN.
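A typical TAF client entry in tnsnames.ora looks something like this (the alias, host and service names are invented examples):

```text
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )
```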
How does OCSSD start if the voting disk and OCR reside on ASM?
Without access to the voting disk there is no CSS, and hence no way to start the clusterware; yet the voting disk is stored in ASM, and in the Oracle startup order CSSD starts before ASM. So how is this possible? In 11gR2 the ASM disk header carries new metadata, kfdhdb.vfstart and kfdhdb.vfend, which tell CSS where to find the voting files on the disk. Reading them does not require the ASM instance to be up; once CSS has the voting files it can join the cluster easily.
Note: Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.
Upgrade/Migration/Patches Question/Answer
What are Database patches and How to apply?
CPU (Critical Patch Update, or one-off patch): security fixes released each quarter. They are cumulative, meaning each one includes the fixes from previous Oracle security alerts. To apply a CPU you must use the opatch utility.
Shut down all instances and listeners associated with the ORACLE_HOME that you are updating.
Set your current directory to the directory where the patch is located and then run the opatch utility.
After applying the patch, start all your services and listeners, start all your databases, log in as sysdba and run the catcpu.sql script.
Finally run utlrp.sql to recompile invalid objects.
To rollback CPU Patch:
Shut down all instances and listeners.
Go to the patch location and run: opatch rollback -id 677666
Start all databases and listeners and run the catcpu_rollback.sql script.
Bounce the database and run the utlrp.sql script.
PSU (Patch Set Update): security fixes plus priority fixes. Once a PSU patch is applied, only PSUs can be applied in the near future, until the database is upgraded to a newer version.
You need two things to apply a PSU patch: the latest version of opatch, and the PSU patch that you want to apply.
1. Check and update the opatch version: run ORACLE_HOME/OPatch/opatch version.
To update to the latest opatch: take a backup of the OPatch directory, remove the current OPatch directory, and unzip the downloaded patch into the OPatch directory. Then check your opatch version again.
2. To Apply PSU patch:
cd 13923374
opatch apply  -- in the case of RAC, the opatch utility will prompt for the OCM (Oracle Configuration Manager) response file. You have to provide the complete path of the OCM response file if you have already created one.
3. Post-apply steps: start the database with sys as sysdba.
SQL> @catbundle.sql psu apply
SQL> quit
opatch lsinventory  --To check which PSU patches are installed.
opatch rollback -id 13923374  --To roll back a patch you have applied.
opatch nrollback -id 13923374,13923384  --To roll back multiple patches you have applied.
SPU (Security Patch Update): an SPU cannot be applied once a PSU has been applied, until the database is upgraded to a new base version.
Patchset (e.g. to applying a patchset usually requires the OUI.
Shut down all database services and listeners, then apply the patchset to the Oracle binaries. Finally start the services and listener, then run the post-patch scripts.
Bundle Patches: for Windows and Exadata; they include both the quarterly security patches and recommended fixes.
You have a collection of nearly 100 patches. How can you apply only one of them?
Use opatch napply with a specific patch id: opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate. This will apply only patch 9 from among the many extracted patches.
What is rolling upgrade?
It is a new ASM feature in Oracle 11g. It enables you to patch ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade, one node can be maintained while the other nodes run different software versions.
What happens when you use STARTUP UPGRADE?
STARTUP UPGRADE enables you to open a database created with an earlier version. It restricts sysdba logons and disables system triggers. After STARTUP UPGRADE only specific views can be queried; no other views can be used until catupgrd.sql has been executed.