Saturday, 23 December 2017

STEPS FOR APPLYING PATCHSET


An Oracle version number such as 11.2.0.1.0 breaks down as:

11 - release version
2  - components
0  - application
1  - patch set
0  - platform

Differences between PSU, CPU and SPU and how to install
========================================================

#Critical Patch Update (CPU) --> The overall release of security fixes each quarter, rather than the cumulative database security patch for the quarter.

#Patch Set Updates (PSU) --> Cumulative patches that include both the security fixes and priority fixes. They are "version upgrades"
that modify the last number of the version (11.2.0.1.1, 11.2.0.1.2, ...).

#Security Patch Update (SPU) --> The new terminology for CPU (from October 2012). SPUs can't be applied once any PSU has been
applied, unless the database is upgraded to a new base version.


==> How to apply a patch with the opatch tool. ***PSU APPLY***
______________________________________________________________

==Back up the existing OPatch folder

1. Go to the ORACLE_HOME location and move the existing OPatch directory to OPatch_old (the old opatch).

2. Unzip the patch set under the ORACLE_HOME folder.

3. Set the PATH variable in .bash_profile:

    export PATH=$PATH:/usr/ccs/bin

4. Bring down the database and the listener.

5. Execute "opatch apply" from the unzipped patch location.
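The steps above can be sketched as a small dry-run script; the ORACLE_HOME default and patch zip path below are placeholders (not from these notes), and the run() helper only prints each command so nothing touches a real Oracle home.

```shell
# Dry-run sketch of the PSU apply steps above.
# ORACLE_HOME default and PATCH_ZIP are hypothetical placeholders.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/11.2.0}
PATCH_ZIP=/tmp/psu_patch.zip

run() { echo "+ $*"; }                                   # print instead of execute

run mv "$ORACLE_HOME/OPatch" "$ORACLE_HOME/OPatch_old"   # 1. keep the old OPatch
run unzip -d "$ORACLE_HOME" "$PATCH_ZIP"                 # 2. unzip under ORACLE_HOME
export PATH=$PATH:/usr/ccs/bin                           # 3. as set in .bash_profile
run lsnrctl stop                                         # 4. stop the listener ...
run sqlplus / as sysdba                                  #    ... and shut down the DB
run "$ORACLE_HOME/OPatch/opatch" apply                   # 5. apply from the patch dir
```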


==> Post Installation instructions
==================================

6. Connect as sysdba using /nolog:

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

7. SQL> @catbundle.sql psu apply          (the PSU is applied to the data dictionary)

SQL> quit

8. Check the inventory from the OS level:

cd $ORACLE_HOME/OPatch
./opatch lsinventory

==================completion of PSU apply=====================================

STEPS TO GENERATE AND ANALYSE AWR REPORT

Steps To Generate AWR Report :



Step 1: Go to $ORACLE_HOME/rdbms/admin

Step 2: Run command ls -lrt awr*

Step 3: Connect to sqlplus

Step 4:  From the sql prompt run awrrpt (or) awrrpti (for specified instance)

    Ex: SQL> @?/rdbms/admin/awrrpt.sql


Step 5: It will now ask for the report format, either 'HTML' or 'TEXT' (choose one).


Step 6: Select number of days you want to go back or just hit enter for listing all completed snapshots.


Step 7: Then specify Begin and End snapshot Ids.

Step 8: Specify a name for the report, or just press Enter to accept the default name.

Step 9: The report gets generated.

Step 10: Exit SQLPLUS.

Step 11: Run ls -ltr to see the new report file created under the current directory.
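The prompts in Steps 5-8 can also be answered up front with SQL*Plus DEFINE variables so the report generates non-interactively. A sketch below only writes such a driver script; the snapshot IDs 100/101 and file names are hypothetical placeholders to replace with real values from the snapshot listing.

```shell
# Write a driver script that pre-answers awrrpt.sql's prompts.
# begin_snap/end_snap are hypothetical snapshot IDs -- pick real ones.
cat > /tmp/gen_awr.sql <<'EOF'
define report_type = 'html'
define num_days    = 1
define begin_snap  = 100
define end_snap    = 101
define report_name = '/tmp/awr_100_101.html'
@?/rdbms/admin/awrrpt.sql
exit
EOF
# On the database host you would then run:
#   sqlplus -s / as sysdba @/tmp/gen_awr.sql
```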




Analyse Statistics from report:


Step 1: Open the file browser on the Linux machine:

    select and click on File System ==> click on the "home" folder ==> click on the "oracle" folder

    ==> right-click on the awrrpt report icon .. open with Web Browser

PROCESS BEHIND QUERY EXECUTION

 IMPORTANT AND DEFINITE INTERVIEW QUESTION

Q. How does the query execution occur?


1.  SQL*plus checks the syntax on client side.

2.  If the syntax is correct the query is stamped as a valid SQL statement,
    encoded into OCI (Oracle Call Interface) packets and sent over the LAN using TCP
    to the server.

3.  Once the packets reach the server the server process will rebuild the
    query and again perform a syntax check on server side.

4.  Then if syntax is correct SP will continue execution of the query.

5.  The SP will go to the library cache. The L.C. will keep the recently
    executed SQL statements along with their execution plan.

6.  In the library cache the server process will search from the MRU (Most
    Recently Used) end to the LRU (Least Recently Used) end for a match for the
    SQL statement. It does this by using a hash algorithm that returns a hash
    value. If the hash value of our query matches that of a query already in the
    L.C., the SP need not generate an execution plan (soft parsing); if no match
    is found the SP has to proceed with generating an execution plan (hard
    parsing).

7.  Parsing is the process undertaken by Oracle to generate an execution plan.

8.  The first step in parsing involves performing a semantic check. This is
    nothing but a check for the existence of the object and its structure in the
    database.

9.  This check is done by SP in the data dictionary cache. Here SP will ask
    for the definition of the object, if already available within the DDC , SP
    will process the check. If not available then SP will retrieve the required
    information from the system tablespace.

10. After this SP will approach the optimizer, who will read the SQL statement
    and generate the execution plan of the query.

11. After generating the e-plans, the SP will pick the best e-plan and go
    to the L.C.

12. SP will keep the e-plan in the L.C. along with the original SQL text.

13. At this point in time the parsing ends and the execution of the SQL
    statement will begin.

14. SP will then go to the database cache and checks whether the data required
    by the query is already available or not in the cache.

15. If available that data can be returned to the client else it brings the
    data from the database files.

16. If sorting or filtering is required by the query then the PGA is utilized
    along with the temporary tablespace for performing the sort run.

17. After sort run the data is returned to the client and SQL*plus client will
    show the data to the users.
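Steps 5-6 (the library-cache lookup and soft vs hard parsing) can be mimicked with a toy hash table. This is only an illustration in bash 4+, not Oracle's actual algorithm; cksum stands in for Oracle's hash function.

```shell
# Toy model of the library cache: hash the SQL text, reuse the plan on a hit.
declare -A library_cache                      # requires bash 4+

lookup() {
  local sql="$1" key
  key=$(printf '%s' "$sql" | cksum | cut -d' ' -f1)   # stand-in for Oracle's hash
  if [ -n "${library_cache[$key]}" ]; then
    echo "soft parse: reuse ${library_cache[$key]}"
  else
    library_cache[$key]="plan_$key"                   # "generate" an execution plan
    echo "hard parse: built plan_$key"
  fi
}

lookup "select * from emp"    # first execution: hard parse
lookup "select * from emp"    # identical text, same hash: soft parse
lookup "select * from EMP"    # different text, different hash: hard parse again
```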

ORACLE DBA COMMANDS


                    ******ORACLE DBA commands*******
                          ~~~~~~~~~~~~~~~~~~~   



To get IP Address of host:
~~~~~~~~~~~~~~~~~~~~~~~~~
$ host machine_name                     where machine_name = hostname
Ex:  host   dev41.dxx.xxx.wxxxx.com
dev41.dxx.xxx.wxxxx.com   has address 10.0.8.138



To see the Current memory and SWAP usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ free



To see the current HUGE pagesize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
grep  Huge /proc/meminfo



To see the OS release version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cat    /etc/redhat-release       (For only  Linux)
cat   /etc/issue                         (General )

How to know Linux machine is 32 or 64 bit?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# uname  -m
If   i386 or i686 then it is 32 bit
if   x86_64 then it is 64bit
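The uname -m mapping above can be wrapped in a small shell check; the architecture list below covers only the common cases.

```shell
# Classify the machine word size from `uname -m`.
arch=$(uname -m)
case "$arch" in
  i386|i486|i586|i686)          bits=32 ;;
  x86_64|aarch64|ppc64|sparc64) bits=64 ;;
  *)                            bits=unknown ;;
esac
echo "$arch => ${bits}-bit"
```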

Check Kernel:
~~~~~~~~~~~~
rpm -qa --queryformat '%{installtime} (%{installtime:date}) %{n}-%{v}-%{r}\n' | sort -n | grep -e kernel-devel -e oracleasm | tail -6
How to know an HP-UX machine is 32 or 64 bit?
Method 1:
# getconf   KERNEL_BITS

Method 2:
# ps -ef |grep RT

Method 3:
# file /stand/vmunix

If 64 bit it will show something like: ELF-64 executable object file

How to know Sun Solaris machine is 32 or 64 bit?
# isainfo -v

Find SGA size:
~~~~~~~~~~~~~~
SELECT name,value/1024/1024 "SGA (MB)" FROM v$sga;
SELECT sum(value)/1024/1024 "TOTAL SGA (MB)" FROM v$sga;
Select POOL, Round(bytes/1024/1024,0) Free_Memory_In_MB From V$sgastat Where Name Like '%free memory%';
select * from v$sgainfo;

To find the no. of CPU’s   in a db:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> show parameter cpu        (or)
$ cat /proc/cpuinfo                             (see the processors)    (or)
$ cat /proc/cpuinfo | grep "processor"|wc -l              (LINUX)  (or)   lscpu
$ ioscan -C processor | grep processor | wc -l          (HP-UX)
$ psrinfo -v|grep "Status of processor"|wc -l           (SOLARIS)
$ lsdev -C|grep Process|wc -l                                 (AIX)

Delete OS files older than ‘X’ days :
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
find /path/to/files* -mtime +5 -exec rm {} \;

Possible error (if there are a huge number of files): /bin/rm: Argument list too long
Sol:   find /path/to/files* -mtime +5 | xargs rm
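A safe way to rehearse the -mtime filter before pointing it at real files: use a scratch directory and GNU touch -d to backdate one file.

```shell
# Rehearse `find -mtime +5` in a throwaway directory.
dir=$(mktemp -d)
touch "$dir/new.log"
touch -d '10 days ago' "$dir/old.log"      # GNU touch: backdate the mtime
find "$dir" -type f -mtime +5              # matches only old.log (>5 days old)
find "$dir" -type f -mtime +5 -exec rm {} \;
ls "$dir"                                  # new.log survives
```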

Total DATABASE info:
~~~~~~~~~~~~~~~~~~~~

select dbid, name, instance_name, host_name, created, log_mode, open_mode from v$database, v$instance;


Archive log generation on on daily basis:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
select trunc(COMPLETION_TIME,'DD') Day, thread#, round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,count(*) Archives_Generated from Gv$archived_log group by trunc(COMPLETION_TIME,'DD'),thread# order by 1;


Find what SQL’s is Currently Running:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

select session.sid, session.username, optimizer_mode, hash_value, address, cpu_time, elapsed_time,      sql_text from v$sqlarea sqlarea, v$session session
 where session.sql_hash_value = sqlarea.hash_value
   and session.sql_address    = sqlarea.address
   and session.username is not null ;

To count all OBJECTS in particular Schema:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL>select object_type,count(*) from dba_objects where owner='SACORP' group by object_type;
OBJECT_TYPE         COUNT(*)
------------------  --------
PACKAGE BODY               4
PROCEDURE                 61
VIEW                       3
TABLE                     24

To LIST all OBJECTS in All schemas in database:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
select owner,object_type,count(*) from dba_objects where owner in (select distinct owner from dba_segments)  group by object_type,owner order by 1;


To count all OBJECTS in particular Tablespace:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
select OWNER,SEGMENT_TYPE,count(*) from DBA_SEGMENTS where TABLESPACE_NAME ='Tablespace_name'  group by OWNER,SEGMENT_TYPE order by 1;

Objects created from past ‘x’ days:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select object_name, object_type from dba_objects where sysdate-created <x;
SQL > select OWNER,OBJECT_TYPE,count(*) from dba_objects where   sysdate-created <x group by OWNER,OBJECT_TYPE;

How to check the last DML  happened in a table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SELECT MAX(ORA_ROWSCN) FROM  yourtable_name;
select scn_to_timestamp(input from above query) from dual;

To check invalid objects:
~~~~~~~~~~~~~~~~~~~~~~~~
SQL> select object_type,count(*) from dba_objects where status='INVALID' group by object_type;

To check USED & FREE space of ARCHIVELOG destination/FRA:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> archive log list
select space_limit/1024/1024/1024 maxGB,space_used/1024/1024/1024 usedGB from v$recovery_file_dest;

SELECT FILE_TYPE "Type",PERCENT_SPACE_USED "% Used",PERCENT_SPACE_RECLAIMABLE "% Reclaim",
NUMBER_OF_FILES "# Files" FROM V$FLASH_RECOVERY_AREA_USAGE;


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

REDO LOG:
Redo Logs info:
set linesize 300
column REDOLOG_FILE_NAME format a50

SELECT     a.GROUP#,    a.THREAD#,  a.SEQUENCE#,  a.ARCHIVED,  a.STATUS,
    b.MEMBER    AS REDOLOG_FILE_NAME,  (a.BYTES/1024/1024) AS SIZE_MB
FROM v$log a  JOIN  v$logfile b  ON  a.Group#=b.Group#  ORDER BY a.GROUP# ASC;


Nohup
~~~~~~~~~~~~~~~~~~~~
$ nohup   run_sql.sh   indexes &         (output is created as indexes.out)
$ nohup sqlplus / as sysdba @move_lob.sql &

To see MAXDATAFILE parameter in control file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL>  select  records_total  from  v$controlfile_record_section   where   type = 'DATAFILE';
SQL>  show parameter db_files;
SQL >  Alter system set db_files=700 scope=spfile;        (db_files is a static parameter; a restart is required)

SQL >  create pfile from spfile;                                          (works only when a default spfile exists)
SQL>  create pfile='/var/backup/corppsdb/abc.ora' from spfile;            (explicit path)

Copying CONTROL FILE
~~~~~~~~~~~~~~~~~~~~
We can copy the control file to some location with the OS cp command, but that copy can't be opened or edited.
So we copy the control file using:
SQL> alter database backup controlfile to trace;      (or)
SQL> alter database backup controlfile to '../../..';

SQL> sho parameter NLS_LENGTH_SEMANTICS;
SQL >   ALTER  SYSTEM  SET  NLS_LENGTH_SEMANTICS=CHAR;
or
SQL >   ALTER  SYSTEM  SET  NLS_LENGTH_SEMANTICS=BYTE;


Create MAX EXTENTS size to unlimited
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> set head off
SQL> set pagesize 50000
SQL> spool newrun.sql

SQL> select 'alter'||' '||object_type||' '||object_name||' '||'move storage (maxextents unlimited);' from dba_objects where object_type in ('TABLE','INDEX') and owner = 'SYSFC';

SQL> spool off
SQL> @newrun.sql


Total no. of USERS connected to database :
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> select username, osuser from v$session where username is not null;
Or
select count(*) from v$session where username is not null and username != 'SYS';

Total connections to DB:
~~~~~~~~~~~~~~~~~~~~~~~

Set linesize 280;
select count(*),INST_ID,username,osuser,status,machine from gv$session group by INST_ID,username,osuser,status,machine order by count(*),INST_ID;

To see the udump,bdump,cdump locations in 10g :
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> show parameter user_dump_dest;               (similarly all)

To see the udump,bdump,cdump locations in 11g:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

$  env |grep diag

To see the last lines in logfile or Trace file :
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ tail -20 <alert log or trace file>                      (to display the last 20 lines)

To see Tablespace and its datafiles:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> select FILE_NAME, TABLESPACE_NAME, STATUS from dba_data_files where TABLESPACE_NAME='CORP_DAT2' ;

To see all data files, Control files and redo logs including TEMP file in one query:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> select name from v$datafile union select name from v$tempfile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;

While writing spool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL> set echo off
SQL> set verify off
SQL> set heading off
SQL> set pagesize 5000
SQL> set feedback off

SQL> select 'alter index'||' '||INDEX_NAME||' '||'storage (maxextents 2147483645);' from user_indexes where MAX_EXTENTS is NULL;
================================================================
INDEXES:

List all indexes on a table:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Select table_name, index_name, column_name FROM user_ind_columns Where table_name='TABLE_NAME' Order by table_name, column_name;

To see Tables, indexes and their Tablespace :
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
select INDEX_NAME, TABLE_NAME, TABLESPACE_NAME from user_indexes;

Moving index to another Tablespace:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

alter index <index_name> rebuild tablespace  <tablespace_name>;

How to find if index is LOCAL or GLOBAL in oracle:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

select index_name, locality from all_part_indexes where table_name='tab_name';

Find no. of objects of a user in each tablespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

select OWNER,SEGMENT_TYPE,TABLESPACE_NAME,count(*) from dba_segments where OWNER='DDS_DISTRIBUTE_MODE' group by OWNER,SEGMENT_TYPE,TABLESPACE_NAME;

To make index extent size to unlimited:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Alter index Ind_name rebuild online storage (MAXEXTENTS  Unlimited);

Script for the same :
Spool runme.sql

Select 'alter index'||' '||INDEX_NAME||' '||'rebuild online storage(MAXEXTENTS  Unlimited);' from user_indexes;
                             or
select 'alter index'||' '||INDEX_NAME||' '||'rebuild online storage(MAXEXTENTS  Unlimited);' from user_indexes where max_extents is null;

TABLE :
Count Number of COLUMNS in a Table:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

select count(column_name) from user_tab_columns  where  table_name='TABLE_NAME';

To change table data type
~~~~~~~~~~~~~~~~~~~~~~~~~
alter  table  table_name  modify( name varchar2(32));                       (changed from 30 to 32)

Finding SIZE of a TABLE:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

select sum(BYTES)/1024/1024/1024 GB from user_segments where SEGMENT_NAME='tab_name';

Find LOCK on a TABLE:
~~~~~~~~~~~~~~~~~~~~~~

select a.session_id,a.oracle_username, a.os_user_name, b.owner "OBJECT OWNER", b.object_name,b.object_type,a.locked_mode from
(select object_id, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, LOCKED_MODE from v$locked_object) a,
(select object_id, owner, object_name,object_type from dba_objects) b where a.object_id=b.object_id;

Moving Table to another Tablespace:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> alter table tab_name move tablespace <new_tablespace_name>;

=======================================================================

LINUX
~~~~~~~~~~~
~~~~~~~~~~~

Sqlplus commands from LINUX:  echo "sql_statement" | sqlplus -s username/passwd
Ex: echo "select count(*) from tab;"  | sqlplus -s / as sysdba
  COUNT(*)
----------
      4740

tar -czvf name-of-archive.tar.gz /path/to/directory-or-file
-c: create       -z: compress the archive with gzip       -v: verbose       -f: archive file name
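A quick round trip of those flags in a scratch directory (the paths are throwaway):

```shell
# Create and list a gzip tar archive: -c create, -z gzip, -f archive file.
work=$(mktemp -d)
mkdir "$work/data"
echo hello > "$work/data/f.txt"
tar -czf "$work/archive.tar.gz" -C "$work" data   # -C: cd into $work first
tar -tzf "$work/archive.tar.gz"                   # -t lists the contents
```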

Count specific word in Linux file:
grep -o 'CREATE SEQUENCE'  abc.txt | wc -l
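Note that grep -o | wc -l counts every match while grep -c counts matching lines; a throwaway file shows the difference:

```shell
# grep -o emits each match on its own line; grep -c counts lines that match.
printf 'CREATE SEQUENCE s1;\nCREATE SEQUENCE s2; CREATE SEQUENCE s3;\n' > /tmp/abc.txt
grep -o 'CREATE SEQUENCE' /tmp/abc.txt | wc -l    # 3 matches in total
grep -c 'CREATE SEQUENCE' /tmp/abc.txt            # 2 lines contain a match
```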

Find command :  find . -name abc.sql
To see LISTENER status :     $ ps -fu oracle |grep tns
To see all HOME locations :         $ env | grep HOME           (use CAPS)
                                    $ env | grep ORACLE_HOME
                                    $ echo $ORACLE_HOME
Similarly for ASM :                 $ env | grep ASM
To see all SID's :                  $ env | grep SID
                                    $ env | grep ORACLE_SID    (for only ORACLE_SID)

To see everything :                        $ env | grep ORACLE   (use CAPS)
Ex:  > env |grep ORACLE
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/11.2.0.2
ORACLE_HOSTNAME=vcd04124.ute.fedex.com
ORACLE_SID=VCD04124
ORACLE_UNQNAME=VCD0412

To know which shell we are using :    env | grep SHELL

NOTE :  Use coraenv when using the C Shell and oraenv when using a Bourne, Korn or Bash shell.

Running   .oraenv  -->    .  ./.oraenv
Running   .profile -->    .  .profile

Find SIZE of files/directories

$ du -sh * | sort -h       (display sizes of files in the current path only; sort -h orders human-readable sizes)
$ du  -sh          (Displays total size of the directory where you are in)
$ df  -h              (shows all the used  and free sizes on mount points)
   df  -h    /var/backup          (to see for a particular mount point)
$ stat  (file(or)directory name)                      (gives the complete status(timestamp) of that file/directory)

To know how many hours I was connected to the server till today
$ ac  -d                        ('ac' is a Linux command)

To know connect time for all the users
$ ac  -p

To know the connect time for a specific user (ex: basupally)
$ ac  -d  basupally

$ w
Gives complete info of the session connected like
16:11:34 up 54 days, 20:09,  2 users,  load average: 0.29, 0.24, 0.19
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
cb858037 pts/0    199.81.99.89     15:22   34:58   0.01s  0.01s sshd: cb858037 [priv]
cb858037 pts/1    199.81.99.89     15:23    0.00s  0.01s  0.01s sshd: cb858037 [priv]

To know about any User
$ finger   userid   (ex: finger cb858037)

$ chmod  777 *.log                          (for files)
$ chmod  -R 777 /var/back/omega                     (giving  rwx permissions to a directory OMEGA)

To Run DBCA:
$ export DISPLAY=ipaddress:0  (IP address from cmd prompt --> ipconfig)


Vi   editor Commands

1)      Replacing a word in the entire file    :%s/old/new/g
2)      Replacing a word in the current line only     :s/old/new/g
3)      Replacing a word in line 'n'         :ns/old/new
4)      To go to the END of the file     ':$'  or  'G'
5)      To go to the FIRST line  'gg'
6)      To find the CURRENT LINE number   :.=   (or Ctrl+g  -> gives line #, file name, total lines in the file)
7)      Count the word ABC in the entire vi file     :%s/ABC/ABC/g
8)      Count a string from outside the file :  grep "abc efg" file_name | wc -l
                                                        grep -c "abc efg" file_name
                                                    more  /etc/profile | grep -i ulimit
9)      To give numbering to each line    :set nu
10)   To turn off the numbering      :set nonu
11)    To open a file with the cursor at the last line     vi  +  filename
12)   To open a file with the cursor at line number 25    vi  +25  filename
13)   To open a file with the cursor at the first occurrence of the word 'abc'     vi  +/abc  filename

Edit multiple files at a time :
$ vi   ipom.xml pom.xml ppom.xml tpom.xml
Opens 1st file(ipom)-->edit and save (:w) --> :n (to go to another next file) --> :n (go to next file) and so on
                                 --> :N (to go to Previous  file)




User Management

Find User privileges:
SQL> select * from dba_sys_privs where grantee='SCHEMA_NAME';

select TABLE_NAME,PRIVILEGE,GRANTOR from dba_TAB_PRIVS  where grantee='SCHEMA_NAME';

Find user roles:
SQL>   select * from dba_role_privs where grantee='SCHEMA_NAME';

To see the current Active Roles of any schema, then log in into that schema and query this:
SQL>   select * from session_roles;

Find privileges granted to a ROLE:
SELECT * FROM  role_sys_privs  where  role='ROLE_NAME';
SELECT * FROM  role_tab_privs  where  role='ROLE_NAME';

Roles Granted to a ROLE :
SELECT * FROM  role_role_privs  where  role='ROLE_NAME';

Get Oracle User DDL with dbms_metadata

set long 200000 pages 0 lines 131
column meta format a121 word_wrapped
select dbms_metadata.get_ddl('USER', '&&username') meta from dual;
select dbms_metadata.get_granted_ddl('OBJECT_GRANT', '&&username') meta from dual;
select dbms_metadata.get_granted_ddl('ROLE_GRANT', '&&username') meta from dual;
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT', '&&username') meta from dual;



About   DATABASE

How to Calculate the Size of the Database (Doc ID 1360446.1)

(Used space):
----------------
select sum(bytes)/1024/1024/1024 GB from dba_segments;


(Free space):
---------------
select sum(bytes)/1024/1024/1024 GB from dba_free_space;


(Total database size):
---------------------------
 select sum(bytes)/1024/1024/1024 GB from dba_data_files;

  +

(Temp size):
---------------
SQL> select SUM(bytes_used)/1024/1024/1024 GBused, SUM(bytes_free)/1024/1024/1024 GBfree from  v$temp_space_header;

(Or)

SELECT SUM (a.log_space + b.data_space + c.tempspace) "Total_DB_Size (G)"   FROM (SELECT ROUND (SUM (BYTES/1024/1024/1024), 2) data_space  FROM dba_data_files) b, (SELECT ROUND (SUM (BYTES*members/1024/1024/1024), 2) log_space  FROM v$log) a,   (SELECT NVL(ROUND(SUM(BYTES/1024/1024/1024),2), 0) tempspace  FROM dba_temp_files) c;


DROP all USERS like name:

BEGIN
  FOR i IN (
    SELECT t.username
    FROM DBA_USERS t
    WHERE t.username LIKE 'QA_%')
  LOOP
    EXECUTE IMMEDIATE 'DROP USER '|| i.username||' CASCADE';
  END LOOP;
 EXCEPTION WHEN OTHERS THEN
   dbms_output.put_line(sqlerrm);
END;
/

Oracle server uptime:
SQL> SELECT TO_CHAR (startup_time, 'DD-MON-YYYY HH24:MI:SS') "DB Startup Time" from SYS.v_$instance;
Or  $ uptime

Find all Child table FK’s for a parent PK:

select a.owner,a.table_name, a.constraint_name from sys.all_constraints a,(select owner,constraint_name from sys.all_constraints where owner = 'OWNER' and table_name = 'TABLE_NAME' and constraint_type in ('P','U')) b where a.constraint_type = 'R' and a.r_constraint_name = b.constraint_name and a.r_owner = b.owner;

Find SCHEMA size:
SELECT sum(bytes)/1024/1024 MB FROM dba_segments where owner='owner_name';

Tablespace used and free space:
SELECT SUBSTR (df.NAME, 1, 50) file_name, df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
used_mb, NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs WHERE df.file# = dfs.file_id(+) and dfs.tablespace_name='tablespace_name'
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes ORDER BY file_name;

Find used and free space in TABLESPACES:
select * from DBA_TABLESPACE_USAGE_METRICS;

All schema sizes in Database :
select owner, sum(bytes)/1024/1024 MB from dba_segments  group by owner order by MB desc;

Find schema and its tablespace:
SQL> select USERNAME, DEFAULT_TABLESPACE from dba_users where USERNAME='SYSFMSO';

Find schema and all related tablespaces:
select owner,tablespace_name from dba_tables where owner='CVM_SCHEMA' group by owner,tablespace_name;

Finding BIGFILE tablespace in database:
select name, bigfile   from v$tablespace;

Find total number of procedures in a database:
Select count(*) from dba_procedures;

To count the number of data files in a database
SQL>  select  count(file_name)  from  dba_data_files;

To find out the path to your current session’s trace file
SQL>  select value from v$diag_info where name = 'Default Trace File';

Find redo size generated per day
SQL>  select trunc(completion_time) rundate,count(*) logswitch ,round((sum(blocks*block_size)/1024/1024)) "REDO PER DAY (MB)" from v$archived_log group by trunc(completion_time) order by 1;

To find out the trace file of a different session:
select instance_name || '_ora_' || spid || '.trc' filename from v$process p, v$session s, v$instance
where p.addr = s.paddr and s.sid = 170;

Find my own SID:
select sid from v$mystat where rownum  <=1;
(OR)   select distinct  sid from v$mystat;

To check all the spfile parameters
SQL>   SHOW PARAMETER;

Startup database with pfile or spfile
SQL> CONNECT sys/password AS SYSDBA
SQL> startup pfile='<pfile location>';

Analyze all tables and indexes that are owned by the 'ABC' user:
execute DBMS_UTILITY.ANALYZE_SCHEMA('ABC','ESTIMATE')                 (DBMS_UTILITY is the older, pre-DBMS_STATS approach)

To analyze single table
ANALYZE   TABLE   SCOTT.RA_INTERFACE_DISTRIBUTIONS_ALL   COMPUTE   STATISTICS;         

CREATE OR REPLACE procedure SYS.flush_buffer_and_shared_pool
as
begin
execute immediate 'alter system flush shared_pool';
execute immediate 'alter system flush buffer_cache';
dbms_output.put_line('!!!!...Shared pool and buffer cache has been flushed successfully..!!!!');
end flush_buffer_and_shared_pool;
/

This procedure was created to do both at the same time.
Run the above procedure as follows:
SQL > Exec SYS.flush_buffer_and_shared_pool          (from sys user)

Gather stats on the entire database...
execute  dbms_stats.gather_database_stats;
exec dbms_stats.delete_database_stats;

Gather stats for a single schema...
execute dbms_stats.gather_schema_stats('SCOTT');
exec dbms_stats.delete_schema_stats('SCOTT');
(or)
exec dbms_stats.gather_schema_stats(ownname=>'USER_NAME',OPTIONS=>'GATHER AUTO');

Gather stats for a schema table...
exec dbms_stats.gather_table_stats('<owner>', '<table_name>');
exec dbms_stats.delete_table_stats('SCOTT', 'EMPLOYEES');
exec dbms_stats.delete_index_stats('SCOTT', 'EMPLOYEES_PK');

exec dbms_stats.gather_schema_stats( -
     ownname          => 'SCOTT', -
     options          => 'GATHER AUTO'   )


Find TABLE modifications done after gathering stats
Select TABLE_NAME,INSERTS,UPDATES,DELETES,TIMESTAMP from DBA_TAB_MODIFICATIONS where TABLE_OWNER='SCHEMA_NAME’;

To restrict the output of a query use the rownum pseudo-column
Ex: select * from user_tables  where rownum < 5 ;         (displays only 4 rows)
      select * from user_tables  where rownum < 9;          (displays only 8 rows)

Convert SCN to Timestamp:
SQL>   select scn_to_timestamp(SCN_number) from dual;

SQL> conn / as sysdba
(if you want to run anything as the DEMO user and you don't know the password, do this)

SQL> alter session set current_schema=DEMO;
SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER
--------------------------------------------------------------------------------
CURRENT_SCHEMA
--------------------------------------------------------------------------------
SYS
DEMO


List autoextensible datafiles

SQL>   Select  file_name from dba_data_files  where  autoextensible = 'YES';

Datapump job status:

SELECT owner_name, job_name, operation, job_mode,state, attached_sessions FROM dba_datapump_jobs WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1,2;


Temporary Tablespace :

Check all TEMP tablespace space in database:
SELECT     A.tablespace_name tablespace,    D.mb_total,   SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,   D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM    v$sort_segment A, (SELECT    B.name,    C.block_size,    SUM (C.bytes) / 1024 / 1024 mb_total
FROM    v$tablespace B,    v$tempfile C WHERE    B.ts#= C.ts#  GROUP BY    B.name,    C.block_size) D
WHERE    A.tablespace_name = D.name  GROUP by    A.tablespace_name,    D.mb_total
/

Checking Default Temporary tablespace:
select  property_value  from  database_properties where  property_name = 'DEFAULT_TEMP_TABLESPACE';

To see Temporary segment usage:    v$tempseg_usage

To check if anyone doing sort operations:
SQL> SELECT COUNT(*) FROM v$sort_usage WHERE tablespace = 'TEMP';

LIST all Temp files:
select FILE_NAME, BYTES/1024/1024/1024 GB from dba_temp_files;


DIFFERENCES:

DBA_ views: These views are built on the data dictionary; they are not available unless the database is mounted and opened.

V$ views : These views run against the instance, and therefore may be available even when the database is not mounted, or is mounted but not open, depending on the nature of the view.


***DATAPUMP PARALLEL option is only available in Enterprise Edition of Oracle Database ***

-- locate Data Pump master tables:

SELECT o.status, o.object_id, o.object_type,        o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j  WHERE o.owner=j.owner_name AND o.object_name=j.job_name   AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

EXCLUDE=SCHEMA:"IN\(\'SYS\',\'SYSMAN\',\'SYSTEM\',\'XDB\',\'PUBLIC\',\'OUTLN\',\'ORDSYS\',\'OLAPSYS\',\'DBSNMP\',\'APEX_030200\',\'CTXSYS\',\'EXFSYS\',\'FLOWS_FILES\',\'MDSYS\',\'ORDDATA\',\'OUTLN\'\)"


RMAN:


Full bkup:

run {
2> backup as backupset tag 'QA1_FULL' database FORMAT '/usr/mware/full_%u_%s_%p' include current controlfile PLUS ARCHIVELOG;
3> }


rman target sys/pwd nocatalog debug trace=rman.trc log='/pullpath/rman.log'

$ORACLE_HOME/bin/rman target sys/VCOMUPP@VCOMUPPB auxiliary sys/VCOMUPP@VCOMUPSS << _END_ >> standby_6.log
RUN
{
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
NOFILENAMECHECK;
}
exit
_END_

The following query shows the status of RMAN jobs.

select OPERATION, START_TIME, END_TIME, OBJECT_TYPE, STATUS from v$RMAN_STATUS order by START_TIME;

RMAN>  Delete archivelog all completed before  'SYSDATE-7';

COLD Backup:

RMAN >  run {
backup full tag full_cold_backup
format '/var/backup/corppsdb/rman/db_t%t_s%s_p%p'
database;
}

Rman running job status :

  SELECT SID, SERIAL#, CONTEXT, SOFAR,TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2)  "%_COMPLETE" FROM V$SESSION_LONGOPS WHERE OPNAME LIKE 'RMAN%'  AND OPNAME NOT LIKE '%aggregate%'   AND TOTALWORK != 0 AND SOFAR <> TOTALWORK ;

Tape backup status:
select name, start_time, end_time, INPUT_TYPE,status, time_taken_display ELAPSED, compression_ratio COMPR,
input_bytes_display INB, output_bytes_display OUTB, input_bytes_per_sec_display INBPERSEC, output_bytes_per_sec_display OUTBPERSEC  from V$RMAN_BACKUP_JOB_DETAILS, V$database where output_device_type = 'SBT_TAPE' order by session_stamp desc;



To see any Dictionary view use DICT view:
Example
SQL> select table_name from dict where table_name like '%BACKUP%';



DATAGUARD (STANDBY) :

select DB_UNIQUE_NAME, DATABASE_ROLE,STATUS,OPEN_MODE from v$database,v$instance;

Check standby redo logs:
select * from  v$logfile where type='STANDBY';
SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;

Use the following query to confirm that Data Guard is in active mode:
 SELECT 'Using Active Data Guard' ADG FROM V$MANAGED_STANDBY M, V$DATABASE D WHERE M.PROCESS LIKE 'MRP%' AND D.OPEN_MODE='READ ONLY';
ADG
-----------------------
Using Active Data Guard

no rows selected    ---->  then Active Data Guard is not enabled


To determine if Redo Apply has recovered all redo that has been received from the
primary, query the V$DATAGUARD_STATS view.

SQL> SELECT * FROM V$DATAGUARD_STATS WHERE NAME='apply lag';


To see if the MRP is running or not

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

To start Redo Apply, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

To stop recovery of a standby database:

SQL>   ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

On primary:
SELECT THREAD# "Thread",SEQUENCE# "Last Sequence Generated" FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)  ORDER BY 1;

On Standby:
set line 280
SELECT name "Database name", STATUS "MRP process",ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (select name from v$database) name,(SELECT STATUS  FROM V$MANAGED_STANDBY where process like '%MRP%') process,(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH, (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

Check Archive gap on Standby:
SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

Check the max log sequence on Primary DB and on Standby DB

SELECT Max(sequence#) FROM   v$log_history;

# check the last log applied on STANDBY

SELECT thread#,   Max(sequence#) "Last Standby Seq Applied" FROM   v$archived_log   WHERE  applied = 'YES'  GROUP  BY thread#   ORDER  BY 1;

On Primary check ERROR MESSAGE to standby:
SELECT gvi.thread#, timestamp, message FROM gv$dataguard_status gvds, gv$instance gvi WHERE gvds.inst_id = gvi.inst_id AND severity in ('Error','Fatal') and rownum <11 ORDER BY timestamp, thread#;

Recovery speed:
set linesize 400
col Values for a65
col Recover_start for a21
select to_char(START_TIME,'dd.mm.yyyy hh24:mi:ss') "Recover_start",to_char(item)||' = '||to_char(sofar)||' '||to_char(units)||' '|| to_char(TIMESTAMP,'dd.mm.yyyy hh24:mi') "Values" from v$recovery_progress where start_time=(select max(start_time) from v$recovery_progress);


ASM:

export ORACLE_SID=+ASM

To see DISK GROUP free and Used space:
select GROUP_NUMBER, NAME,TOTAL_MB, FREE_MB, USABLE_FILE_MB from V$ASM_DISKGROUP;

To see DISK free and Used space in a DISK GROUP:
select disk_number "Disk #", free_mb from v$asm_disk where group_number = x order by 2;

To view ASM operation(after add/drop/resize):
SQL> select group_number, operation, state, power, actual, sofar, est_work, est_rate, est_minutes from v$asm_operation;

ASM disk path and header_status:
select path, header_status, mode_status from v$asm_disk;


Database TUNING:

Find table having STALE stats or not:

select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where TABLE_NAME='TABLE_NAME';

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    degree           => 2,
    cascade          => TRUE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

There are several values for the options parameter that we need to know about:

gather – re-analyzes the whole schema.
gather empty – Only analyze tables that have no existing statistics.
gather stale – Only re-analyze tables with more than 10% modifications (inserts, updates, deletes).
gather auto – This will re-analyze objects which currently have no statistics and objects with stale statistics.   Using gather auto is like combining gather stale and gather empty.
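
As a sketch, the options parameter above is passed to GATHER_SCHEMA_STATS like this (the schema name is illustrative):

```sql
-- Re-analyze only objects with stale statistics in the SCOTT schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'SCOTT',
    options => 'GATHER STALE');
END;
/
```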

Monitor all current sql executions in database:

SET HEAD ON PAGES 50000 ECHO OFF VERIFY OFF FEED ON LINESIZE  400 TRIMSPOOL ON AUTOTRACE OFF
COLUMN percent         FOR 999.99
COLUMN message         FOR A90
COLUMN "Start  Time"   FOR A23
COLUMN "Actual Time"   FOR A23
COLUMN username        FOR A10
COLUMN sql_id          FOR A15

SELECT a.sid, a.username,s.status,   a.sql_id,s.OSUSER,s.program,s.state,s.SECONDS_IN_WAIT ,
        TO_CHAR(start_time,'DD-Mon-YYYY HH24:MI:SS')  "Start  Time",
        TO_CHAR(SYSDATE   ,'DD-Mon-YYYY HH24:MI:SS')  "Actual Time",
        message,   (sofar/DECODE(NVL(totalwork,1),0,1,totalwork))* 100 percent
FROM   v$session_longops a, v$session s WHERE a.sid=s.sid and DECODE(sofar/DECODE(NVL(totalwork,1),0,1,totalwork),0,1,sofar/DECODE(NVL(totalwork,1),0,1,totalwork)) * 100 <> 100;

List all STALE objects in databases:

SET SERVEROUTPUT ON

DECLARE
ObjList dbms_stats.ObjectTab;
BEGIN
dbms_stats.gather_database_stats(objlist=>ObjList, options=>'LIST STALE');
FOR i in ObjList.FIRST..ObjList.LAST
LOOP
dbms_output.put_line(ObjList(i).ownname || '.' || ObjList(i).ObjName || ' ' || ObjList(i).ObjType || ' ' || ObjList(i).partname);
END LOOP;
END;
/

Query to check blocking sessions:  (use GV$ for RAC db)

SELECT s1.username || '@' || s1.machine
    || ' ( SID=' || s1.sid || ' )  is blocking '
    || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
    FROM v$lock l1, v$session s1, v$lock l2, v$session s2
    WHERE s1.sid=l1.sid AND s2.sid=l2.sid
    AND l1.BLOCK=1 AND l2.request > 0
    AND l1.id1 = l2.id1
    AND l1.id2 = l2.id2 ;


Script to Check Maximum Processes (check if database is hitting ORA-00020: maximum number of processes)

select resource_name, current_utilization/limit_value*100,current_utilization, max_utilization,limit_value from v$resource_limit where resource_name in ('processes','sessions');

Find out the locks in the table

select a.session_id,a.oracle_username, a.os_user_name, b.owner "OBJECT OWNER", b.object_name,b.object_type,a.locked_mode from
(select object_id, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, LOCKED_MODE from v$locked_object) a,
(select object_id, owner, object_name,object_type from dba_objects) b
where a.object_id=b.object_id;


Find LOCK on object:

select owner||'.'||object_name obj,oracle_username||' ('||s.status||')' oruser,os_user_name osuser,
machine computer,l.process unix,
''''||s.sid||','||s.serial#||'''' ss,r.name rs,to_char(s.logon_time,'yyyy/mm/dd hh24:mi:ss') time
from v$locked_object l, dba_objects o, v$session s, v$transaction t, v$rollname r
where l.object_id = o.object_id and s.sid=l.session_id and s.taddr=t.addr
and t.xidusn=r.usn and o.object_name='OBJECT_NAME'
order by osuser, ss, obj;



SELECT O.OBJECT_NAME, S.SID, S.SERIAL#, P.SPID, S.PROGRAM,S.USERNAME,S.MACHINE,S.PORT , S.LOGON_TIME,SQ.SQL_FULLTEXT FROM V$LOCKED_OBJECT L, DBA_OBJECTS O, V$SESSION S, V$PROCESS P, V$SQL SQ  WHERE L.OBJECT_ID = O.OBJECT_ID AND L.SESSION_ID = S.SID AND S.PADDR = P.ADDR
AND S.SQL_ADDRESS = SQ.ADDRESS;


Find how much undo has been rolled back:
select start_time,used_urec,used_ublk from v$transaction;

SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec  FROM v$session s, v$transaction t
 WHERE s.taddr = t.addr  ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;

Find last DDL & DML time on an object:

 select  (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
  decode(maxscn,0,'N/A',scn_to_timestamp(maxscn)) "DML Time"
  from   (select nvl(max(ora_rowscn),0) maxscn from cvm_schema.CI_CUST_HIER_SUMMARY);


Scheduler job status:
select    owner, job_name, job_class, enabled, next_run_date, repeat_interval from dba_scheduler_jobs where job_name=

Check PLSQL  DDL:
select text from dba_source where name ='Procedure_name';

SQL text for particular SID:

select a.sid,a.program,b.sql_text from v$session a, v$sqltext b
where a.sql_hash_value = b.hash_value and a.sid=144 order by a.sid,hash_value,piece;


Total cursors open, by session
select a.value, s.username, s.sid, s.serial# from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid and b.name = 'opened cursors current' order by 1;

Kill particular  OSUSER sessions
select 'Alter system kill session '''||sid||','||serial#||''';' from v$session where osuser='username' ;
or

begin
  for sessions in ( select sid,serial# from   v$session  where  username = 'QA_SR')
  loop
    execute immediate 'alter system kill session '''||sessions.sid||','||sessions.serial#||'''';
  end loop;
end;
/

Getting BIND variable values:
select * from V$SQL_BIND_CAPTURE where sql_id='8f6haaamzf';

Finding bind variable values :
select sn.BEGIN_INTERVAL_TIME,sn.END_INTERVAL_TIME,sb.NAME,sb.VALUE_STRING from DBA_HIST_SQLBIND sb,DBA_HIST_SNAPSHOT sn  where sb.sql_id='8f6hssspf' and sb.WAS_CAPTURED='YES' and sn.snap_id=sb.snap_id  order by sb.snap_id,sb.NAME;

Query to show sql_ids related to SQL Profiles:

select distinct  p.name sql_profile_name,s.sql_id from dba_sql_profiles p, DBA_HIST_SQLSTAT s where
p.name=s.sql_profile;


AWR:

-- This sets the repository to take a snapshot every 15 minutes and retain all data for 2 weeks (20160 minutes).
exec dbms_workload_repository.modify_snapshot_settings (retention => 20160, interval => 15);

Create Manual snapshot:  EXEC dbms_workload_repository.create_snapshot;
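
To confirm the settings took effect, the current interval and retention can be read back from DBA_HIST_WR_CONTROL:

```sql
-- Shows the current AWR snapshot interval and retention window
select snap_interval, retention from dba_hist_wr_control;
```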

======================================================================================

RAC:

$GRID_HOME/log/<hostname>/alert<hostname>.log
$GRID_HOME/log/<hostname>/cssd/ocssd.log
ASM alert log location:
. oraenv
+ASM
adrci
show alert
$ORACLE_BASE/diag/asm/+asm/+ASM{instance number}/trace/alert_+ASM{instance number}.log
OS logs: /var/log/messages
rpm -qa | grep oracleasm
./crsctl stop crs   (or)   ./crsctl stop crs -f
crsctl check crs
crsctl check cluster -all   (lists status on all the nodes)
crsctl query css votedisk
crs_stat -t   (or)   crsctl status resource -t

Check autostart of CRS and HAS:
./crsctl config has   (or)   ./crsctl config crs

Enable autostart of CRS and HAS:
./crsctl enable has   (or)   ./crsctl enable crs

Oracle RESTART:
crsctl start has    - manually start the Oracle Restart stack when it is disabled or after manually stopping it
crsctl stop has [-f] - manually stop the Oracle Restart stack (the -f option forces the stop)
crsctl enable has   - enable the stack for automatic startup at server reboot
crsctl disable has  - disable the stack for automatic startup at server reboot
crsctl config has   - display the configuration of Oracle Restart
crsctl check has    - check the status of the Oracle Restart stack



Archive log Gap Solution in Standby

Resolving a huge gap between PRIMARY and STANDBY



STEPS:


Please use the query below to find the archive gap on the standby:

1.

 SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
(ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME )
 IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH, (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY
 WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#))
 APPL WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;


2.

Find the SCN on the PRIMARY:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
242671761

Find the SCN on the STANDBY:

3.

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
223771173

You can clearly see the difference: the standby is well behind the primary.

4.

Stop and shutdown the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

5.

Shutdown the standby database

SQL> shut immediate

6.

On the primary, take an incremental backup starting from the standby's current SCN (223771173):

 RMAN> run { allocate channel c1 type disk format '/backup/%U.bkp';
backup incremental from scn 223771173 database;
 }

7.

On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/backup/for_standby.ctl';

Database altered.

8.

Copy the standby controlfile to STANDBY and bring up the standby instance in nomount status with standby controlfile:

SQL> startup nomount

SQL> alter database mount standby database;

9.

Connect to RMAN on the STANDBY and catalog the backup files using the commands below:

$ rman target=/

RMAN> catalog start with '/backup';

PERFORM RECOVER:

RMAN> recover database;

Start managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

Check the SCNs on the primary and standby; they should now be close to each other.

Refreshing a Schema Using Data Pump

Refreshing schema in oracle Database



A schema refresh is a routine task for DBAs working on database migration projects. It is done to keep the
development, test, and performance environments in sync with the production database.

Preparatory Steps:


Create a directory, or use an existing directory (TEST_MIG), and grant read and write permission on it
to the 'system' database user.

SQL> grant read,write on directory TEST_MIG to system;

Grant succeeded.

SQL> alter user system identified by TESTDBdba account unlock;
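
If the directory object does not exist yet, it can be created first; the OS path below is just an example and must already exist on the server:

```sql
-- Create the directory object pointing at an existing OS path
CREATE OR REPLACE DIRECTORY TEST_MIG AS '/u01/app/oracle/exports';
```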


PRODDB:


Step 1:Exporting the Data from the source Database(PRODDB in our case)


vi expdp_refresh_schema_sep27.sh

$ expdp system/PRODDB@PRODDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=DATA_PUMP_DIR SCHEMAS=REFRESH_SCHEMA
 LOGFILE=REFRESH_SCHEMA.log

$ nohup sh expdp_refresh_schema_sep27.sh>refresh_schema.out &


Step 2:Copying the dump file(Source Data) to Target Database server


We can use the 'winscp' tool (a graphical utility for copying files between Windows and Linux), or ftp, scp, tar, or rsync
to copy data from the source server to the target server.

Step 3:Moving Data into the target Database.



$ impdp system/TESTDBdba@TESTDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=TEST_MIG REMAP_SCHEMA=REFRESH_SCHEMA:REFRESH_SCHEMA
 LOGFILE=REFRESH_SCHEMA.log


Step 4:Verify the Data in Source and Target Databases.


Note:
~~~~~
In Oracle 11g Release 2 (11.2.0.1.0) there are about 44 distinct object types; compared to previous versions this number is huge.

SQL> select * from v$version;

SQL> select distinct object_type from dba_objects;

Source Database:



PRODDB:


SQL> select count(*) from dba_objects
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
132

SQL> select count(*) from dba_tables
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
WHERE OWNER='REFRESH_SCHEMA'
AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

COUNT(*)
----------
62




TARGET DATABASE:



TESTDB:
-------------
SQL> select count(*) from dba_objects
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
131

SQL> select count(*) from dba_tables
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
WHERE OWNER='REFRESH_SCHEMA'
AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

COUNT(*)
----------
62
                *********Refresh done**********

        Hope it helps. Happy refreshing your database environments!


$ expdp username/password directory=dir_name dumpfile=expscott.dmp logfile=expscott.log schemas=scott compression=all job_name=j1

Saturday, 2 December 2017

IMPORTANCE OF SCAN IN RAC ENVIRONMENT

 IMPORTANT SCAN LISTENER SETUP




1. What is a SCAN listener?

A. A SCAN listener is additional to the node listener: it listens for incoming database connection requests from clients that come in through the SCAN IP, and it has endpoints configured to the node listeners, to which it routes each connection request.

The SCAN IP can be disabled if not required; however, it is mandatory during RAC installation. Enabling/disabling the SCAN IP is mostly used in Oracle Apps environments by the Concurrent Manager (a kind of job scheduler in Oracle Apps).
Steps to disable the SCAN IP:
i. Do not use the SCAN IP at the client end.
ii. Stop the SCAN listener:
srvctl stop scan_listener
iii. Stop SCAN (this will stop the SCAN VIPs):
srvctl stop scan
iv. Disable SCAN and the SCAN listener:
srvctl disable scan
srvctl disable scan_listener
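
To verify the state afterwards, the usual srvctl status/config commands can be used (run from the grid home as a sketch):

```
srvctl status scan
srvctl status scan_listener
srvctl config scan
```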



2.  How to setup SCAN in 11gr2 and how it works?

A. SCAN is a single name defined in DNS that resolves to three IP addresses using a round-robin
algorithm. The three IP addresses are on the same subnet as RAC’s default public network in the cluster.

How to setup SCAN:
To successfully install Grid Infrastructure, you need to configure your DNS beforehand to resolve the
SCAN name accordingly. Oracle requires at least one IP address to be configured for the SCAN name
(three are recommended).
You can have two options to define SCAN name
1) Define it in your DNS (Domain Name Service)
2) Use GNS (Grid Naming Service)

How SCAN works:
1) The client sends a connection request to the SCAN name; the SCAN name is resolved to a SCAN IP address
returned by DNS. SCAN IPs are returned in a round-robin process. (If GNS is used for SCAN IP management, DNS
delegates the request to the GNS service, which in turn returns a SCAN IP address, again in
round-robin fashion.)
2) The TNS request is now forwarded by SCAN IP to the SCAN Listeners. Remember that the
remote_listener parameter is already made to point to SCAN tnsnames.ora entry and local_listener uses
VIP Listener entry.
3) SCAN listeners in turn forward the request to local listeners (least loaded one).  The remote listeners
which points to SCAN listeners will do the load balancing and local listeners will take care of new process
spawning and connection to database.
4) Local listeners take care of client request.
PMON registers with SCAN listener as defined in parameter ‘remote_listener’ setting and also with the
node listener depending on the local listener settings. On the basis of PMON provided details SCAN will
choose the least loaded node to forward the request received from client.
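
A client-side tnsnames.ora entry using the SCAN looks like the sketch below; the alias, SCAN host name, and service name are assumptions for illustration:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```

Because the address points at the SCAN rather than an individual node, this entry never needs to change when nodes are added to or removed from the cluster.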

3.  What are benefits of SCAN?

A) NO NEED TO RECONFIGURE CLIENT: the SCAN makes it possible to add or remove nodes from the
cluster without needing to reconfigure clients (Because the SCAN is associated with the cluster as a
whole, rather than to a particular node). Before SCAN, if the cluster changed, then the client
TNSNAMES.ORA files (or other tns connect strings like Easy Connect strings) would need to change.

B) LOCATION INDEPENDENCE: a client connecting through the SCAN can be directed to any node. This provides location
independence for the databases, so that client configuration does not have to depend on which nodes
are running a particular database.
C) LOAD BALANCING: Round-robin on DNS level allows for a connection request load balancing across
SCAN listeners floating in the cluster.

New features of SCAN in database 12c:
1. SCAN and Oracle Clusterware managed VIPs now support IPv6 based IP addresses
2. SCAN is by default restricted to only accept service registration from nodes in the cluster
3. SCAN supports multiple subnets in the cluster (one SCAN per subnet)

IMPORTANT Oracle RAC Background Processes

 Oracle RAC Background Processes


The GCS and GES processes, and the GRD collaborate to enable Cache Fusion. The
Oracle RAC processes and their identifiers are as follows:

1. ACMS: Atomic Controlfile to Memory Service (ACMS)
In an Oracle RAC environment, the ACMS per-instance process is an agent that
contributes to ensuring a distributed SGA memory update is either globally
committed on success or globally aborted if a failure occurs.

2. GTX0-j: Global Transaction Process
The GTX0-j process provides transparent support for XA global transactions in an
Oracle RAC environment. The database autotunes the number of these processes
based on the workload of XA global transactions.

3. LMON: Global Enqueue Service Monitor
The LMON process monitors global enqueues and resources across the cluster and
performs global enqueue recovery operations.

4. LMD: Global Enqueue Service Daemon
The LMD process manages incoming remote resource requests within each
instance.

5. LMS: Global Cache Service Process
The LMS process maintains records of the data file statuses and each cached block
by recording information in a Global Resource Directory (GRD). The LMS process
also controls the flow of messages to remote instances, manages global data
block access, and transmits block images between the buffer caches of different
instances. This processing is part of the Cache Fusion feature.

Note: The SGA size requirements for Oracle RAC are greater than
the SGA requirements for noncluster Oracle databases due to Cache
Fusion.

6. LCK0: Instance Enqueue Process
The LCK0 process manages non-Cache Fusion resource requests, such as library
cache and row cache requests.

7. RMSn: Oracle RAC Management Processes (RMSn)
The RMSn processes perform manageability tasks for Oracle RAC. Tasks
accomplished by an RMSn process include creation of resources related to Oracle
RAC when new instances are added to the clusters.

8. RSMN: Remote Slave Monitor manages background slave process creation and
communication on remote instances. These background slave processes perform
tasks on behalf of a coordinating process running in another instance.

How to Verify resources on Rac Cluster

How to verify all the resources are up and running  on cluster nodes?

From the grid home, use ./crs_stat -t

select inst_id,sid,serial#,terminal,program,process from gv$session;

Use v$session to identify the user session ID and which connection was established from which terminal.

Every connection has a SID and SERIAL#.

In RAC we use gv$session instead, which includes sessions from all instances (and background process sessions).

Finding number of instances on RAC

SQL> select instance_name from gv$instance;

INSTANCE_NAME
----------------
prod1
prod2

Finding which instance is running on which node:

SQL> select inst_id,instance_number,instance_name,host_name,startup_time,status,thread#,database_status from gv$instance;



SQL> alter tablespace users add datafile '+DATA' size 100m;

Tablespace altered.

SQL> select file_name from dba_data_files where tablespace_name like 'USERS';

FILE_NAME
--------------------------------------------------------------------------------
+DATA/prod/datafile/users.259.936451463
+DATA/prod/datafile/users.269.936534815


Find free space from v$asm_disk_stat

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/prod/onlinelog/group_2.262.936451561
+DATA/prod/onlinelog/group_1.261.936451559
+DATA/prod/onlinelog/group_3.266.936451923
+DATA/prod/onlinelog/group_4.267.936451927

SQL> select instance_name,status from gv$instance;

INSTANCE_NAME    STATUS
---------------- ------------
prod2            OPEN
prod1            OPEN



The database name is one (prod), but there are two instances: prod1 and prod2.
Top 29 ASM Questions and Answers

ASM Interview Questions & Answers

1) What is ASM?

Oracle ASM is a volume manager and a file system for Oracle database files.

Oracle ASM uses disk groups to store data files; an Oracle ASM disk group is
 a collection of disks that Oracle ASM manages as a unit.

ASM provides uniform performance across the disks.

Without downtime, you can add or remove disks from a disk group while a database continues
to access files from the disk group.

When you add or remove disks from a disk group, Oracle ASM automatically redistributes
the file contents and eliminates the need for downtime when redistributing the content.

In Oracle Database 10g/11g there are two types of instances: database and ASM instances. The ASM instance, which is generally
named +ASM, is started with the INSTANCE_TYPE=ASM init.ora parameter. This parameter, when set, signals the Oracle initialization
 routine to start an ASM instance and not a standard database instance. Unlike the standard database instance, the ASM instance
contains no physical files; such as logfiles, controlfiles or datafiles, and only requires a few init.ora parameters for startup.

Upon startup, an ASM instance will spawn all the basic background processes, plus some new ones that are specific to the operation
of ASM. The STARTUP clauses for ASM instances are similar to those for database instances. For example, RESTRICT prevents database
instances from connecting to this ASM instance. NOMOUNT starts up an ASM instance without mounting any disk group. MOUNT option
simply mounts all defined diskgroups

For RAC configurations, the ASM SID is +ASMx instance, where x represents the instance number.

2) What are the key benefits of ASM?

ASM provides filesystem and volume manager capabilities built into the Oracle database kernel. With this capability, ASM simplifies
storage management tasks, such as creating/laying out databases and disk space management. Since ASM allows disk management to be
done using familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make crucial decisions on
provisioning.

The following are some key benefits of ASM:

ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize performance.
ASM eliminates the need for over provisioning and maximizes storage resource utilization facilitating database consolidation.
Inherent large file support.
Performs automatic online redistribution after the incremental addition or removal of storage  capacity.
Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.
Supports Oracle Database as well as Oracle Real Application Clusters (RAC).
Capable of leveraging 3rd party multipathing technologies.
For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-ASM files.
Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and file activities.

3) Describe about ASM architecture.

Automatic Storage Management (ASM) instance

Instance that manages the diskgroup metadata

Disk Groups

Logical grouping of disks
Determines file mirroring options
ASM Disks
LUNs presented to ASM
ASM Files
Files that are stored in ASM disk groups are called ASM files, this includes database files


Notes:
~~~~~

Many databases can connect as clients to a single ASM instance

The ASM instance name should be +ASM only

One diskgroup can serve many databases

4) How does the database connect to the ASM instance?

The database communicates with the ASM instance using the ASMB (umbilicus) process. Once the database obtains the necessary extents
from the extent map, all database I/O going forward is processed by the database processes, bypassing ASM. Thus we say ASM is not
really in the I/O path. So, to the question "how do we make ASM go faster?": you don't have to.

4) What init.ora parameters does a user need to configure for ASM instances?

The default parameter settings work perfectly for ASM. The only parameters needed for 11g ASM:
• PROCESSES*
• ASM_DISKSTRING*
• ASM_DISKGROUPS
• INSTANCE_TYPE
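
A minimal ASM pfile using those parameters might look like this sketch; the disk string and disk group names are assumptions for illustration:

```
INSTANCE_TYPE=ASM
ASM_DISKSTRING='/dev/oracleasm/disks/*'
ASM_DISKGROUPS=DATA,FRA
PROCESSES=100
```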

5) How does the database interact with the ASM instance and how do I make ASM go faster?

ASM is not in the I/O path so ASM does not impede the database file access. Since the RDBMS instance is performing raw I/O,
the I/O is as fast as possible.

6) Do I need to define the RDBMS FILESYSTEMIO_OPTIONS parameter when I use ASM?

No. The RDBMS does I/O directly to the raw disk devices, the FILESYSTEMIO_OPTIONS  parameter is only for filesystems.

7) Why Oracle recommends two diskgroups?

Oracle recommends two diskgroups to provide a balance of manageability, utilization, and performance.

8) We have a 16 TB database. I’m curious about the number of disk groups we should use; e.g. 1 large disk group, a couple of disk
groups, or otherwise?

For VLDBs you will probably end up with different storage tiers; e.g with some of our large customers they have Tier1 (RAID10 FC),
 Tier2 (RAID5 FC), Tier3 (SATA), etc. Each one of these is mapped to a diskgroup.

9) We have a new app and don’t know our access pattern, but assuming mostly sequential access, what size would be a good AU fit?

For 11g ASM/RDBMS it is recommended to use 4MB ASM AU for disk groups. See Metalink Note 810484.1

10) Would it be better to use BIGFILE tablespaces, or standard tablespaces for ASM?

The use of Bigfile tablespaces has no bearing on ASM (or vice versa). In fact most database object related decisions are transparent
 to ASM.

11) What is the best LUN size for ASM?

There is no best size! In most cases the storage team will dictate to you based on their standardized LUN size. The ASM administrator
 merely has to communicate the ASM Best Practices and application  characteristics to storage folks :
• Need equally sized / performance LUNs
• Minimum of 4 LUNs
• The capacity requirement
• The workload characteristic (random r/w, sequential r/w) & any response time SLA
Using this info , and their standards, the storage folks should build a nice LUN group set for you.

12) In 11g RAC we want to separate ASM admins from DBAs and create different users and groups. How do we set this up?

For clarification
• Separate Oracle Home for ASM and RDBMS.
• RDBMS instance connects to ASM using OSDBA group of the ASM instance.
Thus, software owner for each RDBMS instance connecting to ASM must be
a member of ASM's OSDBA group.
• Choose a different OSDBA group for ASM instance (asmdba) than for
RDBMS instance (dba)
• In 11g, ASM administrator has to be member of a separate SYSASM group to
separate ASM Admin and DBAs.

13) Can my RDBMS and ASM instances run different versions?

Yes. ASM can be at a higher or lower version than its client databases. There are two
components of compatibility:
Software compatibility
Diskgroup compatibility attributes:
compatible.asm
compatible.rdbms
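
The current values of those disk group attributes can be checked with:

```sql
-- COMPATIBILITY maps to compatible.asm, DATABASE_COMPATIBILITY to compatible.rdbms
select name, compatibility, database_compatibility from v$asm_diskgroup;
```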

14) Where do I run my database listener from; i.e., ASM HOME or DB HOME?

It is recommended to run the listener from the ASM HOME. This is particularly important for RAC env, since the listener is
 a node-level resource. In this config, you can create additional [user] listeners from the database homes as needed.

15) How do I backup my ASM instance?

Not applicable! ASM has no files to back up, as it does not contain controlfiles, redo logs, etc.

16) When should I use RMAN and when should I use ASMCMD copy?

RMAN is the recommended and most complete and flexible method to backup and transport database files in ASM.
ASMCMD copy is good for copying single files
• Supports all Oracle file types
• Can be used to instantiate a Data Guard environment
• Does not update the controlfile
• Does not create OMF files

17) I’m going to do add disks to my ASM diskgroup, how long will this rebalance take?

Rebalance time is heavily driven by the three items:
1) Amount of data currently in the diskgroup
2) IO bandwidth available on the server
3) ASM_POWER_LIMIT or Rebalance Power Level
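
The power level can be raised for a one-off operation, and the estimated time remaining watched in v$asm_operation (a sketch, assuming a disk group named DATA):

```sql
-- Rebalance the DATA disk group with a higher power level
ALTER DISKGROUP data REBALANCE POWER 8;

-- Monitor progress and estimated minutes remaining
select group_number, operation, state, power, est_minutes from v$asm_operation;
```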

18) We are migrating to a new storage array. How do I move my ASM database from storage A to storage B?

Given that the new and old storage are both visible to ASM, simply add the new disks to the ASM disk group and drop the old disks.
ASM rebalance will migrate data online.
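
Assuming the disk group is named DATA, the old disk is DATA_0001, and the new LUN is visible at the path shown (all three are assumptions), the migration can be done in a single statement so only one rebalance runs:

```sql
-- Add the new disk and drop the old one in one command;
-- a single online rebalance moves the data to the new storage
ALTER DISKGROUP data
  ADD DISK '/dev/mapper/new_lun1'
  DROP DISK DATA_0001
  REBALANCE POWER 8;
```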

Note 428681.1 covers how to move OCR/Voting disks to the new storage array

19) Is it possible to unplug an ASM disk group from one platform and plug into a server on another platform (for example,
from Solaris to Linux)?

No. Cross-platform disk group migration is not supported. To move datafiles between platforms of different endianness, you need to use XTTS,
Datapump or Streams.

20) How does ASM work with multipathing software?

It works great! Multipathing software is at a layer lower than ASM, and thus is transparent.
You may need to adjust ASM_DISKSTRING to specify only the path to the multipathing pseudo devices.

21) Is ASM constantly rebalancing to manage “hot spots”?

No...No...Nope!! ASM provides even distribution of extents across all disks in a disk group. Since each disk will have an equal number of extents,
no single disk will be hotter than another. Thus the answer is NO: ASM does not dynamically move hot spots, because hot spots simply do not
occur in ASM configurations. Rebalance only occurs on storage configuration changes (e.g. add, drop, or resize disks).

22) What file types does ASM support and keep in disk groups?

Control files
Data files
Temporary data files
Online redo logs
Archive logs
DB SPFILE
RMAN backup sets
RMAN data file copies
Data Pump dump sets
Change tracking bitmaps
Flashback logs
Data Guard configuration
Transport data files
OCR files
ASM SPFILE
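The file types actually stored in a disk group can be listed from the ASM instance, for example:

```sql
-- Count ASM-managed files by type (run while connected to the ASM instance)
SELECT type, COUNT(*)
FROM   v$asm_file
GROUP  BY type;
```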

23) What are the key benefits of ASM?

Stripes files rather than logical volumes
Provides redundancy on a file basis
Enables online disk reconfiguration and dynamic rebalancing
Significantly reduces the time to resynchronize after a transient disk failure by tracking changes while the disk is offline
Provides adjustable rebalancing speed
Is cluster-aware
Supports reading from mirrored copy instead of primary copy for extended clusters
Is automatically installed as part of the Grid Infrastructure
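The transient-failure resynchronization mentioned above (fast mirror resync) is controlled by a disk group attribute. A sketch, with the disk group name DATA and disk name DATA_0001 as placeholders:

```sql
-- Keep a failed disk's contents for up to 3.6 hours before dropping it
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3.6h';

-- After the transient failure is fixed, bring the disk back;
-- only the extents changed while it was offline are resynced
ALTER DISKGROUP data ONLINE DISK data_0001;
```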

24. What is ASM Striping?

ASM can use variable size data extents to support larger files, reduce memory requirements, and improve performance.

Each data extent resides on an individual disk.

Data extents consist of one or more allocation units.

The data extent size is:

Equal to 1 × AU for the first 20,000 extents (0–19,999)
Equal to 4 × AU for the next 20,000 extents (20,000–39,999)
Equal to 16 × AU for extents 40,000 and beyond

For example, with a 1 MB AU, the first 20,000 extents cover about 19.5 GB of file data and the next 20,000 a further 78 GB or so.

ASM stripes files using extents, with a coarse method for load balancing or a fine method to reduce latency:

Coarse-grained striping is always equal to the effective AU size.
Fine-grained striping is always equal to 128 KB.
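Which striping method each file type uses can be checked from the ASM instance's template view:

```sql
-- STRIPE is COARSE or FINE for each file-type template
SELECT name, stripe, redundancy
FROM   v$asm_template
ORDER  BY name;
```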

26) How many ASM disk groups can be created under one ASM instance?

ASM imposes the following limits:

63 disk groups in a storage system
10,000 ASM disks in a storage system
Two-terabyte maximum storage for each ASM disk (non-Exadata)
Four-petabyte maximum storage for each ASM disk (Exadata)
40-exabyte maximum storage for each storage system
1 million files for each disk group
ASM file size limits (database limit is 128 TB):
External redundancy maximum file size is 140 PB.
Normal redundancy maximum file size is 42 PB.
High redundancy maximum file size is 15 PB.

27) What is a diskgroup?

A disk group consists of multiple disks and is the fundamental object that ASM manages. Each disk group contains the metadata that is required for the management of space in the disk group. The ASM instance manages the metadata about the files in a disk group in the same way that a file system manages metadata about its files. However, the vast majority of I/O operations do not pass through the ASM instance. In a moment we will look at how file I/O works with respect to the ASM instance.
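A minimal sketch of creating a disk group (the disk group name, failure group names, and device paths are all assumptions):

```sql
-- Two failure groups give NORMAL redundancy its two mirror copies
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/disk1'
  FAILGROUP fg2 DISK '/dev/mapper/disk2';
```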

29) Diagram of how the database interacts with ASM when a request is made to read or open a datafile.

[Diagram: database instance / ASM instance interaction during file open and file create]

1A. Database issues an open of a database file.
1B. ASM sends the extent map for the file to the database instance. Starting with 11g, the RDBMS receives only the first 60 extents; the remaining extents in the extent map are paged in on demand, providing a faster open.
2A/2B. The database now reads directly from disk.
3A. An RDBMS foreground initiates a create tablespace, for example.
3B. ASM does the allocation, essentially reserving the allocation units for the file creation.
3C. Once the allocation phase is done, the extent map is sent to the RDBMS.
3D. The RDBMS initialization phase kicks in; in this phase the RDBMS initializes all the reserved AUs.
3E. If file creation is successful, the RDBMS commits the file creation.

Going forward, all I/Os are done by the RDBMS directly.