
Thursday, 16 October 2014

Components of SGA


 

Components of SGA :

 Mandatory Components :

  1.Database Buffer Cache (DBBC)
  2.Shared Pool
    a.Library Cache
    b.Data Dictionary Cache
  3.Redolog Buffer

 Optional Components :
  
   1.Large pool
   2.Java pool
   3.Streams pool
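The current sizes of these SGA components can be checked from any SYSDBA session; a quick sketch (the views are standard, the connected session is assumed):

```sql
-- Overall SGA size
show sga

-- Per-component breakdown (buffer cache, shared pool, java pool, ...)
select component, current_size/1024/1024 as size_mb
from   v$sga_dynamic_components;
```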

 Components of Background Process

  Mandatory Components 

   1.DATABASE WRITER (DBWR)
   2.LOG WRITER (LGWR)
   3.CHECKPOINT PROCESS (CKPT)
   4.SYSTEM MONITOR (SMON)
   5.PROCESS MONITOR (PMON)
   6.RECOVERER PROCESS (RECO)

  Optional Components

   1.ARCHIVER (ARCH)
   2.MEMORY MANAGER (MMAN)
   3.DISPATCHER PROCESS (Dnnn)
   4.LOCK MONITOR (LCKn)
   5.JOB QUEUE COORDINATOR (CJQ0)
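The background processes actually running in an instance can be listed with a short query; a sketch:

```sql
-- Active background processes (PADDR <> '00' filters out unstarted ones)
select name, description
from   v$bgprocess
where  paddr <> '00';
```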

 


Information table of datafiles

Brief notes on Database files

  Database files :


 There are 3 different types of database files

   1.Control file
   2.Redolog file
   3.Datafile


1. Control file:

   ->It controls the database
   ->It is the most important file in an Oracle database
   ->It is a small binary file necessary to open or start the database successfully.
   ->Every Oracle DB must have at least 1 control file.

   The control file contains :

   1.Name of the database
   2.Timestamp of database creation
   3.Names and locations of the datafiles and redolog files
   4.Archived log information
   5.Log sequence number (LSN)
   6.Checkpoint information
   7.System Change Number (SCN)
   8.Backup set details
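Since the control file is binary, the information above is normally viewed indirectly; a sketch of two common ways (run as SYSDBA):

```sql
-- Write a readable CREATE CONTROLFILE script (listing the datafile and
-- redolog names held in the control file) to the trace directory
alter database backup controlfile to trace;

-- Checkpoint and control-file sequence information
select name, created, checkpoint_change#, controlfile_sequence#
from   v$database;
```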


2.Redolog File:

  ->It contains the latest transactions
  ->It contains all the modifications and updates
  ->This file is very important during database recovery.
  ->All committed data goes to the redolog files
  ->A database must have at least 2 redolog groups, and each group should have at least 1 redolog file
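The group/member layout described above can be verified with a quick sketch:

```sql
-- At least 2 groups, each with at least 1 member, are expected
select group#, members, bytes/1024/1024 as size_mb, status from v$log;

select group#, member from v$logfile order by group#;
```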


3.Datafile :

To create a database there must be 3 datafiles
     a.System datafile
     b.Sysaux datafile
     c.Undo datafile


a.System datafile :

  ->It is a mandatory file, present in every Oracle database
  ->It contains the metadata (structure of the database), stored in the system file in the form of base tables.
 

 b.Sysaux Datafile :

   ->It acts as a secondary system file, holding performance-related data such as

         AWR (Automatic Workload Repository)
         ADDM (Automatic Database Diagnostic Monitor)

   ->It reduces the workload of the system file


 c.Undo Datafile :

   ->It contains the before image of the transactions (the data as it existed before modification)




Optional Database file :


 1.User datafile : it contains the user-related information.

 2.User index datafile : it contains the indexes created by users.

 3.Temporary datafile : it contains temporary data used for sorting.





Database Architecture

Database architecture is divided into 2 parts

   1.Database instance
   2.Database


1.Database Instance :

-->When an instance is started, Oracle Database allocates a memory area called the
System Global Area (SGA) and starts one or more background processes.

-->The combination of the SGA and the background processes is called the database instance.

2.Database:  

              It is classified into 2 layers
                a. logical layer
                b. Physical layer

a)Logical layer - (Tables, Views, Indexes, Synonyms, Sequences)
b)Physical layer - (DB files --> Filesystem --> Operating system --> Disk)
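The two layers meet in the data dictionary: logical tablespaces map onto physical datafiles. A quick sketch of that mapping:

```sql
-- Tablespace (logical) to datafile (physical) mapping
select tablespace_name, file_name
from   dba_data_files
order  by tablespace_name;
```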


Wednesday, 15 October 2014

Physical Backup (Cold Backup)

Backups

There are two types of backups

1.Logical backup

     Exp & Imp
     Expdp & Impdp ( Datapump)


2.Physical Backup

     Cold Backup
     Hot Backup
     Rman

Physical Backup ( Cold Backup )

Cold Backup:

==>A cold backup is taken when there is no user activity on the system. Also called an offline
backup, it is taken when the database is not running and no users are logged in. All files of the
database are copied, and no changes are made during the copy.

==>The benefit of taking a cold backup is that it is typically easier to administer the backup
and recovery process. For cold backups the database does not need to be in archivelog mode, and
thus there is a slight performance gain as the database is not writing archive logs to disk.


1. Cold backup is an offline, consistent backup

      shutdown normal
      shutdown transactional
      shutdown immediate
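The steps above can be sketched as a small script; the datafile location /disk1/oradata/ORCL matches the examples later in this post, while the backup directory /backup/cold is an assumed path:

```shell
# Cold-backup sketch: shut down cleanly, copy all database files, restart
sqlplus -s / as sysdba <<EOF
shutdown immediate
EOF

mkdir -p /backup/cold
cp /disk1/oradata/ORCL/*.dbf /backup/cold/   # datafiles
cp /disk1/oradata/ORCL/*.ctl /backup/cold/   # control files
cp /disk1/oradata/ORCL/*.log /backup/cold/   # redolog files

sqlplus -s / as sysdba <<EOF
startup
EOF
```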

Two Different Types of failures

1.Instance Failure
2.Media failure

1) Instance Failure :

      Power failure
      Shutdown abort
      Killing Background Process

** SMON is responsible for performing instance crash recovery during the next startup

During instance recovery 2 processes are invoked

    1.Roll forward process
    2.Roll back process

2)Media failure

       Block Corruption
       Disk Error or Crashed
       If we lose Control, Redolog, or Data files

During the media recovery 2 process are invoked

     1.Restore the previous Cold Backup
     2.Recovery --> Apply Archived & redolog files

Recovery Process:

    Is categorized into 2

    1.Complete recovery -- Apply all archived logs and online redolog files

    2.Incomplete Recovery

1.Complete Recovery can be done when we have 

      the current control file
      archived redolog files
      online redolog files


   -->Complete recovery can be done in 2 ways 

       1.Online
           Non system datafile

       2.Offline

           System Datafile
           Undo datafile

2. Incomplete recovery can be done 

       Until cancel -- Apply all archived logfiles generated
       Until time   -- Apply all archived redologs generated until a point in time
       Until SCN    -- Apply all archived redologs generated until an SCN

 After performing an incomplete recovery we have to open the database with resetlogs

      open resetlogs option (log sequence number resets to 1)

commands :

To check the database is in archive log mode

  sys>>archive log list;
  sys>>select log_mode from v$database;

To check the path of CRD files

   sys>> select name from v$controlfile;

   sys>> select name from v$datafile;

   sys>> select member from v$logfile;


    <============================>Loss of CRD file <=========================>

Steps to recover from loss of CRD files:

Backup :

Take backup of CRD Files

$ mkdir Cold

$ cp * Cold/

sqlplus / as sysdba

sys>>Startup

sys>> conn u1/u1

sys>> insert records into table and commit;

sys>> alter system switch logfile

sys>> shutdown immediate


Restore the previous CRD files from the backup

$ cd cold

$cp * ../

Recovery will be done in mount state

sqlplus / as sysdba

sys>> startup mount;

sys>> alter database recover automatic using backup controlfile until cancel;

sys>> recover cancel;

sys>> alter database open resetlogs;

>>desc v$logfile;

>>desc v$database_incarnation

>>select incarnation#,resetlogs_id from v$database_incarnation;


<==========================>Loss Of Control Files<=========================>

For recovery we need the previous CRD files backup

The previous backup should contain the CRD files

Note :

If there is no previous backup we cannot perform the recovery

sqlplus / as sysdba
sys>> shut immediate;

Recovery steps :

sqlplus / as sysdba

sys>> startup;

sys>> insert records into the table

sys>> shutdown abort

Restore the previous control file backup

Recovery will be done in mount state

sqlplus / as sysdba

sys>>startup mount

sys>> alter database recover automatic using backup controlfile until cancel;

sys>> recover cancel;

sys>> alter database open resetlogs;


<=======================> Loss of System Datafiles :<=========================>


For recovery we need the previous system datafile backup

Note :

If there is no previous backup of the system datafiles we cannot perform the recovery

 sqlplus / as sysdba

 sys>> shutdown abort;

 $ ps -eaf |grep -i smon

 $kill -9 10250

Restore the previous system datafiles

 $ cp *.dbf ../

 sqlplus / as sysdba

sys>> startup mount;
sys>> recover database

sys>> alter database open;

<=========================>Loss of Undo Datafile<==========================>

For recovery we need the previous CRD files backup

The previous backup contains the CRD files

sqlplus / as sysdba

sys>> shutdown immediate

Restore the undo datafile

$ cp Undo.dbf ../

sqlplus / as sysdba

sys>> startup mount

sys>> alter database datafile 3 offline;

sys>> alter database recover automatic datafile 3;

sys>> alter database datafile 3 online;

sys>> alter database open;



<=======================>Loss of Non System Datafile<========================>

For recovery we need the previous CRD files backup

The previous backup contains the CRD files


sqlplus / as sysdba

sys>>shut immediate

Restore the non system datafile

$ cp <non-system datafile> ../

sqlplus / as sysdba

sys>> startup mount;

sys>> alter database datafile 4 offline;

sys>> alter database recover automatic datafile 4;

sys>> alter database datafile 4 online;

sys>> alter database open;


<======================> Demo Unbackedup Datafile <========================>

sys>>create tablespace demo datafile '/disk1/oradata/ORCL/demo.dbf' size 10m;

sys>> grant connect,resource to demo identified by demo;

sys>> alter user demo default tablespace demo;

sys>> select username,default_tablespace from dba_users;

Conn demo/demo

create table and insert some records

demo>> conn / as sysdba

sys>> alter system switch logfile;

Delete datafile

conn demo/demo

demo>> insert records into table

An error will be raised; note the datafile id

conn / as sysdba

sys>> alter database datafile 6 offline;

sys>> alter database create datafile 6;

sys>> alter database datafile 6 online;

sys>> alter database open;


<=========================>Loss of Redolog Files <=========================>


For recovery we need the previous CRD files backup

The previous backup contains the CRD files

sqlplus / as sysdba

sys>>startup

sys>> conn demo/demo

insert some records

sys>>conn / as sysdba

sys>> alter system switch logfile;

sys>> conn demo/demo

demo>> insert some records into table


Delete all redolog files

 $ ps -eaf |grep -i smon

 $kill -9 10250

Copy the datafile and control file from previous backup

 $cp *.dbf ../

 $cp control.ctl ../

 sqlplus / as sysdba

 sys>>startup mount;

 sys>> alter database recover automatic using backup controlfile until cancel;

 sys>> recover cancel;

 sys>> alter database open resetlogs;

Logical Backup (Datapump (expdp and impdp) )

Datapump (expdp and impdp) :


-->Introduced in oracle 10g

expdp help=y
impdp help=y


Advantages of Expdp and Impdp :

  1. Less time consuming (supports parallel workers)
  2. Dumpfiles will not be overwritten
  3. Dumpfiles go to a single, defined location
  4. Can assign a job name
  5. Can stop and resume an ongoing export operation
  6. Can estimate the size of the dumpfile.


  To create the directory dpump at DB level and OS level

Db level :

   sys>> create directory dpump as '/disk1/oradata/ORCL/dpump';
 
To drop the directory dpump

   sys>> drop directory dpump;

O/S Level:
   
       $mkdir -p /disk1/oradata/ORCL/dpump

Master Process :

  -Name of the process (DMnn)
  -For a particular job only one master process will exist

Worker process:

  -Worker processes are invoked during the export, their number set by
  parallel=4
  PARALLEL changes the number of active workers for the current job.
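A parallel export then looks like the sketch below; the %U wildcard makes expdp generate one dumpfile per worker (full01.dmp, full02.dmp, ...):

```shell
expdp \"/ as sysdba\" dumpfile=full%U.dmp logfile=full.log full=y \
      directory=dpump parallel=4
```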

Theory :

  --During a datapump operation a table called the master table is created to keep track of the operation

  --This master table is created in the schema of the user running the import/export operation

  --This table is created by a process called the master process, based on the job name

  --Only one master process runs per job

  --Once the job has finished, it dumps the table contents into the dump files and deletes the table

  --There is another process called the worker process (DWnn)

      This is the process that actually performs the work. We can have a number of worker processes running in the same job

   Commands :

   expdp dumpfile=full.dmp logfile=full.log full=y directory=dpump

                   :/ as sysdba

  To overwrite the dump file in the location :

     REUSE_DUMPFILES -- Overwrite destination dump file if it exists [y].
     expdp dumpfile=full.dmp logfile=full.log full=y directory=dpump reuse_dumpfiles=y

  To estimate the size of the dump
    
     ESTIMATE_ONLY -- Calculate job estimates without performing the export.
     expdp dumpfile=full.dmp logfile=full.log full=y directory=dpump estimate_only=y

  To assign a job name to the export operation :
     
     JOB_NAME -- Name of export job to create.
     expdp dumpfile=full.dmp logfile=full.log full=y directory=dpump job_name=full
  
   To Stop the Job

    Ctrl+C
    
export>Stop_job=immediate   (STOP_JOB Orderly shutdown of job execution and exits the client.)

Valid keyword values are: IMMEDIATE.

Are you sure you wish to stop this job ([yes]/no): yes

To resume the previous export job, attach to it by job name

     $expdp attach=full
     (the attach output also shows the master table name)

     export>> continue_client

Check the status of job name with the below command :

     sys>> select table_name from dba_tables where table_name='<JOBNAME>';



 Different types of imp and exp

    1.Full DB Backup
    2.Table Spaces Backup
    3.Users level Backup
    4.Tables Backup
    5.Query Backup


expdp help=y gives the all parameters related to export
impdp help=y gives the all parameters related to import


1.Full DB Backup :
 
  Export Syntax :
 

   expdp dumpfile=full.dmp logfile=full.log full=y directory=dpump

            : / as sysdba

  Import Syntax
  
   impdp dumpfile=full.dmp logfile=full.log full=y directory=dpump

            : / as sysdba


 2.Table Spaces Backup
    Export Syntax :

     expdp dumpfile=ts.dmp logfile=ts.log tablespaces=<table spacename> directory=dpump

            : / as sysdba

    Import Syntax :
  
     impdp dumpfile=ts.dmp logfile=ts.log full=y directory=dpump

             : / as sysdba

3.Users level Backup
   
    Export Syntax :
      
        expdp dumpfile=usr.dmp logfile=usr.log schemas=<user name> directory=dpump

              : owner username/password
         
    
    Import Syntax :

impdp dumpfile=usr.dmp logfile=usr.log remap_schema=<export username>:<import  username> directory=dpump

              : owner username/password
         
   
4.Tables Backup

    Export Syntax :

         expdp dumpfile=table.dmp logfile=table.log tables=username.tablename1,username.tablename2 directory=dpump

               : owner username/password


                 
     Import Syntax :  
    
impdp dumpfile=table.dmp logfile=table.log remap_schema=<export username>:<import username> directory=dpump

                        : owner username/password
         
5.Query Backup

     Export Syntax :

         expdp dumpfile=query.dmp logfile=query.log tables=username.<tablename> query=\'where deptno=30\' directory=dpump

               : owner username/password

     Import Syntax :
    
         impdp dumpfile=query.dmp logfile=query.log tables=username.<tablename> remap_schema=<export username>:<import username> directory=dpump

                                    : owner username/password

Drawbacks:

1. We cannot perform incremental backups






    
     

Logical Backup (Exp and Imp)

Backups :

There are two types of backups

1.Logical backup


     1.Exp & Imp
     2.Expdp & Impdp ( Datapump)

2.Physical Backup

     1.Cold Backup
     2.Hot Backup
     3.Rman

1.Logical backup

     1.Exp & Imp

Different types of Imp and Exp:

    1.Full DB Backup
    2.Table Spaces Backup
    3.Users level Backup
    4.Tables Backup
    5.Query Backup

exp help=y gives the all parameters related to export
imp help=y gives the all parameters related to import

1.Full DB Backup :

  Export Syntax :
 
   exp file=full.dmp log=full.log full=y

            : / as sysdba

  Import Syntax
  
   imp file=full.dmp log=full.log full=y

            : / as sysdba


 2.Table Spaces Backup

    Export Syntax :

     exp file=ts.dmp log=ts.log tablespaces=<table spacename>

            : / as sysdba

    Import Syntax :
  
     imp file=ts.dmp log=ts.log full=y

             : / as sysdba

3.Users level Backup
   
    Export Syntax :
      
        exp file=usr.dmp log=usr.log owner=<user name>

              : owner username/password
         
    
    Import Syntax :

        imp file=usr.dmp log=usr.log fromuser=<export username> touser=<import username>

              : owner username/password
         
   
4.Tables Backup

    Export Syntax :
         exp file=table.dmp log=table.log tables=username.tablename1,username.tablename2

               : owner username/password


                 
     Import Syntax :  
    
         imp file=table.dmp log=table.log fromuser=<export username> touser=<import username>

              : owner username/password
         
5.Query Backup :

     Export Syntax :

         exp file=query.dmp log=query.log tables=username.<tablename> query=\'where deptno=30\'

               : owner username/password

     Import Syntax :
    
         imp file=query.dmp log=query.log tables=username.<tablename> fromuser=<export username> touser=<import username> ignore=y

                 : owner username/password


  Incremental Backups :

There are 3 types of incremental backups

    1.Complete
    2.Incremental
    3.Cumulative


Complete Backup :

Syntax:

         exp file=comp.dmp log=comp.log inctype=complete

Incremental Backup: 

Syntax:

         exp file=comp.dmp log=comp.log inctype=incremental
   
Cumulative :

Syntax:

         exp file=comp.dmp log=comp.log inctype=cumulative


Drawbacks in Exp and Imp :

  1. Time-consuming process
  2. Dumpfiles will be overwritten
  3. Dumpfiles are in scattered locations
  4. Cannot assign a job name
  5. Cannot stop an ongoing export operation
  6. Cannot estimate the size of the dumpfile.


   

Managing Users Privileges and Roles

Managing Users  Privileges and Roles :


Two types of privileges :
1.System privileges
2.Object privileges


1.System privileges :

sys>>grant create session to user1,finance;

sys>>grant create user,alter any table to user1,finance;

sys>>revoke create table from user1;


2.Object privileges :


alter   -- tables,sequences
delete  -- tables,views
execute -- procedures
index   -- tables
insert  -- tables,views
references -- tables
select    -- tables,sequences,views
update    -- tables,views

Example:

sys>>grant insert,update on emp to u1;


Managing Users Roles :

system defined roles:

connect             ---- create session
resource            ---- create cluster, create procedure, create sequence, create table, create trigger
dba                 ---- all system privileges (export tables, user schemas etc)
exp_full_database   ---- export full database
imp_full_database   ---- import full database
delete_catalog_role ---- delete privileges on all dictionary packages for this role
execute_catalog_role ---- execute privileges on all catalog tables and views for this role
select_catalog_role  ---- select privilege on all catalog tables and views for this role

Creating,Altering,Dropping,granting and revoking a Role:

create:

sys>>create role clerk identified by demo;

altering :

sys>>alter role clerk identified by <password>;

Dropping
:

sys>>drop role clerk;

granting :

sys>>grant clerk to user_01 with admin option;

revoking :

sys>>revoke clerk from user1;


Creating the profiles:

session_per_user          --limits the number of concurrent sessions for the user
cpu_per_session           --limits the CPU time per session, expressed in hundredths of seconds
cpu_per_call              --limits the CPU time per call, expressed in hundredths of seconds
connect_time              --limits the total elapsed connect time of a session
failed_login_attempts     --no of failed attempts after which the account is locked
password_life_time        --no of days the password is valid
password_reuse_max        --no of times the password must be changed before reuse
password_verify_function  --function with which the password is verified
password_lock_time        --no of days the account stays locked after failed logins
password_grace_time       --no of days it prompts before password expiry
idle_time                 --defines the maximum amount of continuous inactive time in a session
logical_reads_per_session --limits the number of data blocks read in a session
logical_reads_per_call    --limits the number of data blocks read for a call to process a SQL statement
private_sga               --limits the private SGA space per session (shared server)
composite_limit           --limits the total resource cost for a session

Creating the Profile :


sys>>create PROFILE ramu limit
        SESSIONS_PER_USER   1
        CPU_PER_CALL        6000
        CONNECT_TIME        560
        PASSWORD_LIFE_TIME  60
        PASSWORD_GRACE_TIME 10
        IDLE_TIME           15
        FAILED_LOGIN_ATTEMPTS 3;

Changing the values for profile :

sys>>alter profile ramu limit
        LOGICAL_READS_PER_SESSION  20000
        CPU_PER_CALL  default
        LOGICAL_READS_PER_CALL 100;


 To drop the profile

sys>>drop profile ramu cascade;

To lock the Users account:

sys>>alter user ramu account lock;

 To check the status of the users :

sys>>select username,user_id,account_status,lock_date from dba_users where username='RAMU';

To expire the users password :

sys>>alter user ramu password expire;

To unlock the user account :


sys>>alter user ramu account unlock;

 To list all system privilege grants :
sys>>select * from dba_sys_privs;

 To list all role grants :

sys>>select * from dba_role_privs;

To list object privileges granted to user :

sys>>select table_name,privilege,grantable from dba_tab_privs;

To list all the column specific privileges that have been granted :


sys>>select grantee,table_name,column_name,privilege from dba_col_privs;

sys>>select * from session_privs;

To list roles of the database :

sys>>select * from dba_roles;

sys>>select granted_role,admin_option from role_role_privs where role='SYSTEM_ADMIN';

To check the granted roles and their privileges for a user (user1 here)

sys>>select a.grantee,a.granted_role,b.privilege from dba_role_privs a,dba_sys_privs b
     where a.granted_role=b.grantee and a.grantee='USER1';

Listing privilege and role information


     ALL_COL_PRIVS
     USER_COL_PRIVS
     ALL_TAB_PRIVS
     USER_TAB_PRIVS
     ALL_TAB_PRIVS_MADE
     USER_TAB_PRIVS_RECD
     DBA_ROLES
     DBA_COL_PRIVS
     DBA_SYS_PRIVS
     DBA_ROLE_PRIVS
     DBA_TAB_PRIVS
     ROLE_ROLE_PRIVS
     SESSION_PRIVS
     SESSION_ROLES




Managing Users and Resources

 Managing Users and Resources


Creating users :
      You can create a new database user by using the Create User dialog box in SQL*DBA or the SQL command CREATE USER. When you create a new user, giving a username and password is mandatory, whereas the following will take default values if not provided

    1.Default tablespace
    2.temporary tablespace
    3.tables space quotas
    4.Profile


1.Default tablespace :

       Each user is associated with a default tablespace. The default setting for every user's default tablespace is the system tablespace. If a user creates any type of object, you should specifically assign the user a default tablespace


2.Temporary tablespace:

      Each user is associated with a temporary tablespace. When a user executes a SQL statement that requires the creation of temporary segments, the user's temporary tablespace holds those temporary segments.

 To create the new database user(ramu)

sys>>create user ramu identified by ramu
         default tablespace tbs1
         quota 5m ON tbs1
         quota 3m ON system
         profile clerk;

 sys>>alter user ramu profile clerk;

 To change the password for the user

 sys>> alter user ramu identified by <newpassword>;

To drop the user
 sys>> drop user ramu cascade;


 Creating and altering Profiles:


Session_per_user          --Limits the number of concurrent sessions for the user
Cpu_per_session           --Limits the CPU time per session, expressed in hundredths of seconds
Cpu_per_call              --Limits the CPU time per call, expressed in hundredths of seconds
Connect_time              --Limits the total elapsed connect time of a session
Failed_login_attempts     --no of failed attempts after which the account is locked
Password_life_time        --no of days the password is valid
Password_reuse_max        --no of times the password must be changed before reuse
Password_verify_function  --Function with which the password is verified
Password_lock_time        --no of days the account stays locked after failed logins
Password_grace_time       --no of days it prompts before password expiry
Idle_time                 --defines the maximum amount of continuous inactive time in a session
Logical_reads_per_session --limits the number of data blocks read in a session
Logical_reads_per_call    --limits the number of data blocks read for a call to process a SQL statement
Private_sga               --limits the private SGA space per session (shared server)
Composite_limit           --limits the total resource cost for a session

Creating the profile :

SYS>>create PROFILE ramu limit
        SESSIONS_PER_USER   1
        CPU_PER_CALL        6000
        CONNECT_TIME        560
        PASSWORD_LIFE_TIME  60
        PASSWORD_GRACE_TIME 10
        IDLE_TIME           15
        FAILED_LOGIN_ATTEMPTS 3;

Changing the values for profile :

sys>>alter profile ramu limit
        LOGICAL_READS_PER_SESSION  20000
        CPU_PER_CALL  default
        LOGICAL_READS_PER_CALL 100;

To drop the profile

sys>>drop profile ramu cascade;

SESSIONS and user licensing:

LICENSE_MAX_SESSIONS
LICENSE_SESSIONS_WARNING
LICENSE_MAX_USERS

  To set the maximum sessions for an instance:

sys>>alter system set license_max_sessions=100;

 To set both the WARNING limit and the maximum limit

sys>>alter system set license_max_sessions=64;
sys>>alter system set license_sessions_warning=54;

 To set the USER limit :

sys>>alter system set license_max_users=30;

Password file Authentication:

$cd $ORACLE_HOME/dbs
$orapwd file=<filename> password=<password> entries=<max_users>

Example:
$orapwd file=orapwDEMO password=oracle entries=3
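Whether password-file authentication is actually in effect can be checked from SQL*Plus; a sketch:

```sql
-- Should be EXCLUSIVE (or SHARED) for password-file authentication
show parameter remote_login_passwordfile

-- Users granted SYSDBA/SYSOPER through the password file
select * from v$pwfile_users;
```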


Viewing Information about database users and profiles:

DBA_USERS
ALL_USERS
USER_USERS
DBA_TS_QUOTAS
USER_TS_QUOTAS
USER_PASSWORD_LIMITS
USER_RESOURCE_LIMITS
DBA_PROFILES
RESOURCE_COST
v$session
v$sesstat
v$statname




examples:

sys>>create PROFILE prof_clerk limit
        sessions_per_user 1
        idle_time 30
        connect_time 600;


sys>>create user ramu identified by ramu
          default tablespace ts1
          temporary tablespace tmp1
          quota 50m on ts1
          profile prof_clerk;

  Listing all users and associated information :

sys>>select username,account_status,default_tablespace,temporary_tablespace,
                                             created,profile from dba_users;

  Viewing memory use for each session :

sys>>select sess.username, value || ' BYTES' "current UGA memory"
            from v$session sess, v$sesstat stat, v$statname name where
        sess.sid=stat.sid and stat.statistic# = name.statistic#
        and name.name = 'session uga memory';

  To see the current licensing limits;

sys>> select sessions_max,sessions_warning,sessions_current s_current,sessions_highwater s_high,
      users_max from V$License;


Tuesday, 14 October 2014

UNDO File Management

Undo file management :

-->A transaction consists of a set of SQL statements: it starts with an insert, update, or delete and ends with a commit or rollback

The data will be committed when :

         1.commit is issued
         2.exit
         3.a session is closed
         4.a DDL operation is performed

The data will be rolled back when :

         1.rollback is issued
         2.an instance failure occurs
         3.a user process fails
         4.there is insufficient space


-->It is composed of one or more files containing undo segments. For each Oracle instance, you only have to allocate enough disk space for the workload in that instance in an undo tablespace.


->Rollback segments are still used, but are internally created and maintained and are called undo segments.
->With automatic undo management, you cannot create, drop, or alter undo segments.
->Undo segments have the same structure as normal rollback segments, but they:
->support automatic creation
->use a modified policy compared to Oracle 8i
->support dynamic extent transfer
->SMON shrinks undo segments when needed.


->If you use the rollback segment method of managing undo space, you are said to be operating in manual undo management mode.

UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=UNDOTBS
UNDO_RETENTION=900


UNDO RETENTION:
->Retention is specified in units of seconds. It is persistent and can survive system crashes. That is,
undo generated before an instance crash is retained until its retention time has expired, even across a restart of the instance.
The Oracle Flashback features depend upon the availability of older undo information.

sys>>alter system set undo_retention=900;


STEPS to create and maintain the Undo tablespace:
1.create an undo tablespace:

sys>>create undo tablespace undo_02 datafile '/disk1/oradata/ORCL/undo_01.dbf' size 10m;

->An undo tablespace is a permanent, locally managed tablespace, read-write and in logging mode with the default block size. Values for
  MINIMUM EXTENT and default storage are system generated.

2.altering or resizing a datafile

sys>>alter database datafile '/disk1/oradata/ORCL/undo1.dbf' resize 10m;

3.Adding the undofile to undo tablespace

sys>>alter tablespace undotbs add datafile '/disk1/oradata/ORCL/undo01.dbf' size 10m autoextend on;



SQL> alter system set undo_tablespace=UNDOTBS_1 scope=both;

System altered.

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     2700
undo_tablespace                      string      UNDOTBS_1


Set UNDO_RETENTION in the initialization parameter file.

    UNDO_RETENTION = 1800

Change UNDO_RETENTION at any time using the ALTER SYSTEM statement:

SQL>ALTER SYSTEM SET UNDO_RETENTION = 2400;


SQL> drop tablespace undotbs including contents and datafiles;

Tablespace dropped.


->Different status of undo tablespace

   1.ONLINE
   2.OFFLINE
   3.PENDING OFFLINE


->Different status of undo segments :
    1.active
    2.unexpired
    3.expired
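The current mix of these extent statuses can be inspected with a sketch like:

```sql
-- ACTIVE = in use, UNEXPIRED = still within undo_retention,
-- EXPIRED = eligible for reuse
select status, count(*) as extents, sum(bytes)/1024/1024 as size_mb
from   dba_undo_extents
group  by status;
```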


Retention Guarantee:

sys>>create undo tablespace undo_02 datafile '/disk1/oradata/ORCL/undo_02.dbf'
                                                   size 10m retention guarantee;


An existing undo tablespace can also be made to comply with the guarantee by altering it, as in
sys>>alter tablespace undo_02 retention guarantee;

In case we don't want guaranteed retention :

sys>>alter tablespace undo_02 retention noguarantee;

You can verify that the tablespace has guaranteed undo retention with :

sys>>select retention from dba_tablespaces where tablespace_name='UNDO_02';

To set the undo tablespace :

sys>>alter system set undo_tablespace=undo_02;

show queries :
sys>>show parameter undo_retention;
sys>>show parameter undo_tablespace;



for more information check the Data dictionary views:

DBA_UNDO_EXTENTS
DBA_SEGMENTS
USER_SEGMENTS
v$UNDOSTAT
v$TRANSACTION




Rollback Segment Management

Rollback segments :

Rollback segments store undo information and are used for the following purposes

1.To undo the previous command
2.For read consistency and
3.For crash recovery

Each rollback segment:

->consists of several rollback entries from multiple transactions
->stores block information, such as the file ID and block ID, as well as the data as it existed before being modified
->must be created under special circumstances and brought online before being used
->may increase in size due to large transactions
->will automatically shrink to OPTIMAL if extended
->transactions can be assigned to it automatically or explicitly

-->A rollback segment maintains a table for every transaction, which is identified by SMON during recovery. When
you create a database, the system tablespace and the system rollback segment are created. You cannot drop this segment. Depending on the number of
transactions, you have to decide how many rollback segments are needed. There are 2 types of rollback segments
   1.PUBLIC and
   2.PRIVATE

sys>>select segment_type,segment_name from dba_segments;

   
If we want to create a new rollback segment RBS1 in tablespace rbs_demo

sys>>create rollback segment rbs1 tablespace rbs_demo storage (initial 40k next 40k optimal 120k minextents 2 maxextents 10);

->If you want to bring it ONLINE or change its storage parameters, use the alter command as follows:

Note: a rollback segment cannot be created in a locally managed tablespace. To create a rollback segment, the tablespace
must be a dictionary-managed tablespace.

To convert a locally managed tablespace to dictionary managed:

sys>>EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_FROM_LOCAL ('RESDATA');


To change the storage parameters:

sys>>alter rollback segment rbs1 storage (maxextents 15);

To take it online or offline :
sys>>alter rollback segment rbs1 online;
sys>>alter rollback segment rbs1 offline;

To bring rollback segments online automatically, set the rollback_segments parameter in the init<ORACLE_SID>.ora parameter file.

To explicitly assign a rollback segment to a transaction:

sys>>set transaction use rollback segment rbs1;
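The explicit assignment lasts only for one transaction, so SET TRANSACTION must be the first statement after the previous commit or rollback. A minimal sketch (the emp table and its sal/deptno columns are assumed for illustration):

```sql
-- must be the first statement of the transaction
set transaction use rollback segment rbs1;
update emp set sal = sal * 1.1 where deptno = 10;
commit;  -- the assignment ends when the transaction ends
```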


To shrink a rollback segment :

sys>>alter rollback segment rbs1 shrink to 100k;


Dropping a rollback segment :
sys>>alter rollback segment rbs1 offline;
sys>>drop rollback segment rbs1;


Redolog File Management

Redolog file management :

The purpose of the checkpoint process is to synchronize the buffer cache information with
the information on disk.

-->Redolog files record all the block-level changes made to the database.
  Every database must have at least two redolog groups with at least one file per group.

-->These files can be mirrored to avoid a single point of failure.

-->They are used by Oracle during instance recovery and recovery from media failure. These files are written
  in a circular fashion to save disk space.

-->The filled redolog files will be archived if the database is running in archivelog mode.

-->It is strongly recommended that production databases run in archivelog mode.
 
For example:

   If power fails abruptly and the data in memory cannot be written to the datafiles, Oracle recovers the unrecorded data by applying the redologs. The process of applying the redologs during a recovery operation is called rolling forward.

Mirrored REDO Logs :

The recommended redolog file configuration is,at least 2 redolog members per group.

All members of a group of logfiles contain the same information.
Group members are updated simultaneously.
Each group may or may not contain the same number of members as other groups.


Log switches :
A log switch occurs when Oracle switches from one redolog group to another.
A log switch occurs when LGWR has filled one log file group.
A log switch can be forced by a DBA when the current redolog needs to be archived.


At a log switch the current redolog file is assigned a log sequence number that identifies the information stored
in that redolog and is also used for synchronization. A checkpoint automatically occurs at a log switch.
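Both events can also be triggered manually; the statements below are standard Oracle syntax:

```sql
alter system switch logfile;  -- force a log switch; LGWR moves to the next group
alter system checkpoint;      -- force a checkpoint without switching logs
```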


Stages of a redolog file :

      1.UNUSED
      2.CURRENT
      3.ACTIVE
      4.INACTIVE


1.UNUSED : A newly created redolog file that has never been written to.

2.CURRENT : The redolog file that the log writer is currently writing to.

3.ACTIVE : After a log switch, the status of the previous current redolog file changes to ACTIVE and the checkpoint process is invoked. If the database is in archivelog mode, the archiver copies the redolog file to an archived redolog file. After the checkpoint (and archiving) completes, the status of the redolog file changes to INACTIVE.

4.INACTIVE : The redolog file is no longer needed for instance recovery and can be reused at the next log switch.
 
-->Whenever data is modified in the database buffer cache, the corresponding change records are placed in the redolog buffer, and from the redolog buffer they are written to the redolog files.

-->When the first redolog file fills up, a log switch automatically occurs and a log sequence number,
also known as an LSN, is generated.

-->Oracle generates log sequence numbers to uniquely identify every log switch. If the second file is also filled up, the log writer starts overwriting the first redolog file; therefore recovery of the entire data is impossible from the online redolog files alone.
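The status transitions described above can be observed by querying v$log before and after forcing a log switch:

```sql
-- watch statuses cycle CURRENT -> ACTIVE -> INACTIVE across switches
select group#, sequence#, status, archived from v$log;
```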


-->By default, archivelog mode is disabled.


 To check the name of the database

sys>>select name from v$database;

 To check the archive log status

sys>>select log_mode from v$database;
sys>>archive log list;

Steps to enable archivelog mode :


Step 1: Shutdown the database
  sys>>shutdown immediate;

Step 2: Edit the p file

  $cd $ORACLE_HOME/dbs

  $vi init$ORACLE_SID.ora

   log_archive_dest=/disk1/oradata/ORCL/arch/

   :wq (save and exit)

   $mkdir -p /disk1/oradata/ORCL/arch

Step 3: sqlplus / as sysdba
  sys>>startup mount;

Step 4 : Enable the archive log mode
  sys>>alter database archivelog;
 
Step 5: sys>>alter database open;

 
 Check the status of the database and archivelog.

sys>>select status from v$instance;

sys>>select log_mode from v$database;

sys>>archive log list;

        
 To switch logfile

sys>>alter system switch logfile;


sys>>select GROUP#,members,sequence#,archived,status from v$log;

sys>>select group#,bytes/1024/1024 from v$log;


 ADDING redolog GROUP:

sys>>alter database add logfile group 3 ('/disk1/oradata/ORCL/redolog3a.log',
                                         '/disk1/oradata/ORCL/redolog3b.log') size 4m;


 ADDING a redolog member :

sys>>alter database add logfile member '/disk1/oradata/ORCL/redolog2b.log' to group 2;

 

 To rename redolog file :

Bring the database to the mount state.

Step 1: Shutdown the database
  sys>>shutdown immediate;

Step 2: Copy the redolog file to the new name

  $cp /disk1/oradata/ORCL/redolog2a.log /disk1/oradata/ORCL/redolog2new.log

Step 3: Bring the database to mount
  sys>>startup mount;

Step 4: sys>>alter database rename file '/disk1/oradata/ORCL/redolog2a.log'
                          to '/disk1/oradata/ORCL/redolog2new.log';

Step 5: Open the database
  sys>>alter database open;

 To clear online redolog files

sys>>alter database clear unarchived logfile group 2;

Note: clearing an unarchived logfile makes existing backups unusable for recovery through that log, so take a fresh backup afterwards.

Dropping a redolog group :

sys>>alter database drop logfile group 3;

 Dropping a logfile member from a group :
sys>>alter database drop logfile member '/disk1/oradata/ORCL/redolog2a.log';


Note :
We cannot drop a logfile group or member whose status is ACTIVE or CURRENT.

SQL> select group#,members,status from v$log;


    GROUP#    MEMBERS STATUS
---------- ---------- ----------------
         1          1 INACTIVE
         2          1 CURRENT
         3          1 INACTIVE


for more information check the Data dictionary views :

V$LOG
V$LOGFILE
V$LOG_HISTORY
V$LOGHIST
V$RECOVERY_LOG
V$ARCHIVED_LOG

ARCHIVED Redolog File Management

ARCHIVED redolog files:


-->It is the DBA's choice whether to run the database in ARCHIVELOG mode or in NOARCHIVELOG mode.

-->If the database is running in archivelog mode, the contents of the online redolog files are
copied to an archive area by the Oracle background process archiver (ARCn). These archive files
are known as archived redolog files, archived redologs, or simply archived redo.

-->These files are sometimes referred to as the offline redolog files since they are not open
during normal operation of the database and are required only during recovery.

-->The redolog files (online and archived) are essential for database recovery since they contain
information on all changes made to the database.

-->If the database is chosen to operate in noarchivelog mode, online backups and complete database recovery
will not be possible.

  To convert the database into archivelog mode

Step 1: shutdown the database
sys>>shutdown immediate;

Step 2: Edit the pfile and add the parameter

  $cd $ORACLE_HOME/dbs
  
   log_archive_dest=/disk1/oradata/ORCL/arch
  
   :wq  (save and quit)
  
Step 3: Create the folder with name arch

   $mkdir -p /disk1/oradata/ORCL/arch

Step 4 : Check the archivelog status

  sys>>startup mount;

Check the archive log list status :
 
  sys>>archive log list;
  sys>> select log_mode from v$database;

  if the status is noarchivelog, type the below command

  sys>>alter database archivelog;

Step 5: Open the Database

  sys>>alter database open;


 for more info check the  Data dictionary views

  v$ARCHIVED_LOG
  v$ARCHIVE_DEST
  v$ARCHIVE_PROCESSES



Control File Management

Control file management :

1.Backing up control file

       A.Backup to binary format
       B.Backup to human-readable (trace) format
 
Backup control file to binary format

sys>>alter database backup controlfile to '/disk1/oradata/backup/control.ctl';
 
 Backup control file to trace (text) format

sys>>alter database backup controlfile to trace;

 Backup control file to trace format at a particular path

sys>>alter database backup controlfile to trace as '/disk1/oradata/backup/control.sql';

Multiplexing control file :


Step 1: shutdown the database
    sys>>shutdown immediate;

Step 2: Go to the controlfile location and copy the controlfile to new location

   $cd /maggy/app/oracle/ORCL
   $mkdir backup
   $cp control.ctl backup/

Step 3: Edit the pfile (add the new control file path in pfile)
  $cd $ORACLE_HOME/dbs

  control_files='/maggy/app/oracle/ORCL/control.ctl','/maggy/app/oracle/ORCL/backup/control.ctl'

  :wq

Step 4: sqlplus / as sysdba
   start the database

   sys>>startup

Demultiplexing control file:


Step 1:
shutdown the database
    sys>>shutdown immediate;

Step 2: Edit the pfile (remove the new control file path from the pfile)
  $cd $ORACLE_HOME/dbs

  control_files='/maggy/app/oracle/ORCL/control.ctl'
  :wq

Step 3: sqlplus / as sysdba
   start the database

   sys>>startup


check the path of the control file

sys>>select name from v$controlfile;

sys>>show parameter control_files;


for more info check Data dictionary views:

v$CONTROLFILE
v$CONTROLFILE_RECORD_SECTION


Schema Management

Schema Management :

Visible and invisible indexes :

Indexes can be visible or invisible. An invisible index is maintained by DML operations but cannot be
used by the optimizer. Making an index invisible is an alternative to making it unusable or dropping it.

Oracle 11g new feature: invisible index advantages for performance tuning

Oracle 11g introduced a new feature called invisible indexes. We can make an index invisible when it is not needed; once an index is made invisible, it is no longer considered by the optimizer. The syntax for making an index invisible (or visible again) is as follows:



ALTER INDEX index_name INVISIBLE;
ALTER INDEX index_name VISIBLE;

An invisible index is maintained by the Oracle database but ignored by the optimizer unless you explicitly enable its use.
How to check the visibility of an index?
It can be identified from the VISIBILITY column of the DBA_INDEXES view.

SQL>select index_name, visibility from dba_indexes where index_name='EXAMPLE_INVI_IDX';

INDEX_NAME                     VISIBILIT
------------------------------ ---------
EXAMPLE_INVI_IDX               VISIBLE
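A session (or the whole instance) can opt in to using invisible indexes with the OPTIMIZER_USE_INVISIBLE_INDEXES parameter introduced in 11g:

```sql
-- let the optimizer consider invisible indexes in this session only
alter session set optimizer_use_invisible_indexes = true;
```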


Creating an invisible index:

sys>>create INDEX emp_ename ON emp(ename)
tablespace users
storage (initial 20k
next 20k
pctincrease 75)
invisible;

Making an index invisible

sys>>alter index emp_ename INVISIBLE;

 Making an invisible index visible

sys>>alter index emp_ename visible;

  To check whether an index is visible or invisible

sys>>select index_name,visibility from user_indexes;


Read-only tables :

In Oracle 11g, we can place a table in read-only mode by merely executing an alter table ... read only statement.

 Creating a table

sys>>create table emp (a number);

sys>>insert into emp values (1);

sys>>commit;

  Using the alter table ... read only statement

sys>>alter table emp read only;

Note:
You cannot insert, update, or delete rows in a read-only table.
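Attempting DML against a read-only table raises an error; in 11g this is ORA-12081 (exact message text may vary by version):

```sql
insert into emp values (2);
-- fails with ORA-12081: update operation not allowed on table
```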


sys>>select table_name,read_only from user_tables;

 Table back to read write

sys>>alter table emp read write;


DDL Wait Option:

In Oracle 11g, setting DDL_LOCK_TIMEOUT makes DDL statements wait up to the given number of seconds for the locks they need, instead of failing immediately with ORA-00054.

sys>>alter session set ddl_lock_timeout=10;

sys>>alter table sales add(tax_code varchar2(10));


Virtual columns:

A virtual column's value is computed from an expression rather than stored. In the table below, sale_category is derived from sales_amt:

 sales_amt more than    sales_amt up to    sale_category

 0                      10000              LOW

 10000                  100000             MEDIUM

 100000                 1000000            HIGH

 1000000                unlimited          ULTRA




 sys>>create table sales
        (
        sales_id       number,
        cust_id        number,
        sales_amt      number,
        sale_category  varchar2(6)
        generated always as
        (
          case
            when sales_amt <= 10000 then 'LOW'
            when sales_amt > 10000  and sales_amt <= 100000 then 'MEDIUM'
            when sales_amt > 100000 and sales_amt <= 1000000 then 'HIGH'
            else 'ULTRA'
          end
        ) virtual
        );
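Since sale_category is virtual, only the base columns are inserted and the category is computed on query. A quick check (sample values are illustrative):

```sql
insert into sales (sales_id, cust_id, sales_amt) values (1, 100, 5000);
insert into sales (sales_id, cust_id, sales_amt) values (2, 101, 250000);
select sales_id, sales_amt, sale_category from sales;
-- 5000 falls in the LOW band, 250000 in the HIGH band
```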



OLTP Table Compression:

Basic compression:


sys>>create table emp (tno number) compress basic;
sys>>create table dept (sno number) compress;


sys>>select table_name,compression,compress_for from user_tables;


OLTP compression:

sys>>create table sr1 (no number) compress for OLTP;

sys>>select table_name,compression,compress_for from user_tables;


Disabling compression on a table:



sys>>alter table <tablename> nocompress;

sys>>alter table emp nocompress;


Steps to Rename a Datafile

Renaming the datafile :


There are 2 methods to rename a datafile:

     1.Shutdown the database
     2.Take the tablespace OFFLINE

Method 1 :

Step 1:
shutdown the database
sys>>shutdown immediate

Step 2:
At the o/s level copy the datafiles to new location

    $cd /disk1/oradata/ORCL
    $mkdir backup
    $cp * backup/

Step 3:
sys>>startup mount

Step 4 :
sys>>alter database rename file '/disk1/oradata/ORCL/user01.dbf' to '/disk1/oradata/ORCL/backup/user01.dbf';

Step 5:
open the database
  sys>>alter database open;


check the status :

sys>> select file_name,tablespace_name from dba_data_files;

Method 2 :

Step 1: Take the particular tablespace offline that the datafile belongs to.
   
    sys>>alter tablespace users OFFLINE;

Step 2: At the o/s level copy the datafiles to new location

    $cd /disk1/oradata/ORCL
    $mkdir backup
    $cp * backup/

Step 3: 
sys>>alter tablespace users rename datafile '/disk1/oradata/ORCL/user01.dbf' to '/disk1/oradata/ORCL/backup/user01.dbf';

Step 4: 
Make the particular tablespace ONLINE

  sys>>alter tablespace users ONLINE;


check the status:
sys>> select file_name,tablespace_name from dba_data_files;