To: laishev@SMTP.DeltaTel.RU

                  Oracle DBA Training Course
                  --------------------------

               (Covers DB7A, DB7B and more)

                  By Dimitri Bezrukov, FORS

Background needed:

- SQL;

- Client/Server processing concepts;

- General understanding of 4GL concepts;

- 3GL experience is desirable.

                   1. Introduction
                      ------------

- SQL. Language constructions. DDL, DML, DCL and query
  sublanguages. SQL usage in Oracle.  PL/SQL. Datatypes.
  ROWIDs. Integrity constraints. DB triggers, procedures,
  packages. Mechanism for constraint checking. Indexes. Disabling
  and enabling triggers.
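
  Disabling and enabling can be done declaratively; a minimal
  sketch (table, constraint and trigger names are hypothetical):

      alter table EMP disable constraint EMP_DEPT_FK;
      alter table EMP enable constraint EMP_DEPT_FK;
      alter trigger EMP_AUDIT disable;
      alter table EMP disable all triggers;
      alter table EMP enable all triggers;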

- Transactions. Commit and rollback.

- Oracle's approach to client-server computing. Oracle software
  on server and client sides (RDBMS+TOOLS).

- SQL*Net V1 and SQL*Net V2.

- Data dictionary. SYS and SYSTEM users.  DBA_, USER_, ALL_
  tables. Dynamic performance V_$... tables.

- USER and SYSDATE variables.
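
  Both can be queried through the DUAL table, for example:

      select USER, SYSDATE from DUAL;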

- Steps of SQL statement processing.  PL/SQL on client and
  server sides. Two task and one task architecture.

- Oracle tools survey. Application tools and end-user (decision
  support) tools. Character mode application tools on server and
  client sides.  GUI based bitmapped tools. CDE1 and CDE2.
  CASE. Tools for MS-Windows.  DBA tools. EXPort and IMPort,
  SQL*Loader and SQL*Plus.  Third-party tools.

              2. How to Use SQL and PL/SQL in Oracle
                 -----------------------------------

- Server side processing. SQL*Plus basics.  Programming in
  SQL*Plus. Dynamic generation of temporary SQL*Plus routines.
  Tips and techniques.
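
  A common example of dynamic generation: use SQL to write a
  temporary script, then run it (the file name t$drop.sql is
  arbitrary):

      set heading off feedback off pagesize 0
      spool t$drop.sql
      select 'drop table ' || Table_Name || ';' from USER_TABLES;
      spool off
      @t$drop.sql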

  !!! IMPORTANT NOTE:

     Oracle sometimes fetches rows from a rollback segment even
     when the table has been modified by the same transaction!!!

     Example 1 (SQL script):

      create table TEST (N number(2), C char(10));
      create table TEMP (N number(2), C char(10));
      declare
           cursor T is select N, C from TEST;
           I number(2);
           VC char(10);
           VN number(2);
      begin
      --  Insert 2 rows into table TEST
          insert into TEST values (1,'A');
          insert into TEST values (2,'B');
          commit;
      --  Open cursor. Begin of the next transaction
          Open T;  -- Parse, execute, bind and describe phases
                   -- are performed ONLY ONCE
            for I in 1..2 loop
            fetch T into VN, VC; -- 2 fetches - one execute
            if  VN = 1 then  -- When the 1-st row fetched, update
                             -- the second row
              update TEST set C = 'BB' where N = 2;
            end if;
            insert into temp values (VN, VC);
          end loop;
          close T;
          commit;   -- end of transaction
      end;
      /
      spool t.tmp
      prompt Table TEST
      select * from TEST;

      prompt Table TEMP
      select * from TEMP;

      drop table TEST;
      drop table TEMP;
      exit;

     Content of table TEMP after the script has executed

               N C
         ------- ----------
               1 A
               2 B

         The second row has been taken from a rollback segment.

     Example 2: All the rows are taken from the data file.

      create table TEST (N number(2), C char(10));
      create table TEMP (N number(2), C char(10));
      declare
         I number(2);
         VC char(10);
         VN number(2);
      begin
          insert into TEST values (1,'A');
          insert into TEST values (2,'B');
          commit;
          for I in 1..2 loop
           -- the following statement performs parse, execute,
           -- bind, describe and fetch as a BUNDLE
           -- (one execute - one fetch scheme):
           select N, C into VN, VC from test where N = I;
           if VN = 1 then
            update TEST set C = 'BB' where N = 2;
           end if;
           insert into temp values (VN, VC);
          end loop;
          commit;
      end;
      /
      spool t.tmp
      prompt Table TEST
      select * from TEST;
      prompt Table TEMP
      select * from TEMP;
      drop table test;
      drop table temp;
      exit

     Content of table TEMP after the script has executed

               N C
         ------- ----------
               1 A
               2 BB

         The second row has been taken from the table's segment.

    !!! END OF IMPORTANT NOTE

- Server side PL/SQL blocks, procedures, functions and packages.
  Calling RDBMS procedures from various tools. Debugging PL/SQL
  code. Oracle written standard routines library.

- Client side PL/SQL in SQL*Forms/Menu and CDE tools. Calling
  server procedures from client side PL/SQL routines.

- Tuning of SQL statements and applications.

   SQL statement tuning is normally performed by application
   developers and includes:

     - Data Design

     - Oracle optimizer

     - Indexes and Hashing

     - Row-level locking

     - Sequences

     - Clusters

     - Array processing

     - Stored procedures

     - Database triggers (are not stored as P-code in the
       database; parsing and compiling are required each time.
       You should call DB procedure(s) from the trigger body)

     - Declarative integrity constraints

       Example:

       alter table <Table> add constraint <NAME>
             foreign key (<Key_Column>)
             references <Parent_Table> (<Column>);

     - Discrete transactions

     - Parallel query facilities (PH95 p#160v2)

   Query optimization:

     - In rule-based optimization, the optimizer chooses an
       execution plan based on a ranking of the operations
       involved. In fact, it is a syntax-based approach.

     - In cost-based optimization, the optimizer uses
       statistics collected on the accessed tables, clusters and
       indexes to choose the most efficient execution plan. Oracle
       cannot collect statistics automatically. The DBA or user
       should periodically issue ANALYZE commands for each table,
       cluster and index. This may take substantial time for large
       tables.

     - To switch between optimizer modes at the instance level the
       DBA should use the OPTIMIZER_MODE = { RULE | COST } INIT.ORA
       parameter. Default - COST. If there are no statistics
       available then RULE is used. A user can switch RULE/COST at
       the session level with ALTER SESSION SET OPTIMIZER_GOAL =
       { RULE | CHOOSE | ALL_ROWS | FIRST_ROWS }.

   Collect storage statistics with ANALYZE:

     - The ANALYZE sql-statement has three distinct uses for
       objects (indexes, tables and clusters): (1) collecting
       (computing, estimating and deleting) statistics; (2)
       validating structures and (3) listing chained and
       migrated rows. (The VALIDATE INDEX command has also been
       inherited from ORACLE V6 but you should use ANALYZE
       INDEX instead.)
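
       The three uses can be sketched as follows (table and index
       names are hypothetical; LIST CHAINED ROWS needs the
       CHAINED_ROWS table created by UTLCHAIN.SQL):

         analyze table EMP compute statistics;
         analyze table EMP estimate statistics sample 1000 rows;
         analyze table EMP delete statistics;
         analyze index EMP_PK validate structure;
         analyze table EMP list chained rows;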

     - Statistics can be ESTIMATED or COMPUTED EXACTLY

     - Estimate statistics:

        - much faster than computing

        - sample-based row estimation (never samples more than 1064
          rows).

     - Computed statistics are exact but can take longer than
       estimation (time is on the order of a full table scan).

     - Generated statistics always replace any existing
       statistics in the data dictionary.

     - Data dictionary views ALL_, USER_, DBA_TABLES (_INDEXES,
       _CLUSTERS) contain statistics generated by ANALYZE and
       used by the optimizer. Statistics columns include: NUM_ROWS,
       AVG_SPACE, AVG_ROW_LEN ...

    - Oracle INVALIDATES ANY CURRENTLY parsed SQL statements that
      access any newly analyzed objects.

- Introduction to SQL optimizer. Navigating (selecting access
  paths). Syntax (rule)-based and cost-based optimization.
  "Explain plan" and TKPROF facilities.

- Data integrity. Types of integrity:

    - Nulls

    - Primary and unique keys

    - referential integrity (primary-foreign key value)

    - database triggers

-  Data consistency and concurrency
   Overview of locking mechanism. Row locks and table locks (RX,
   RS, SRX, S, X). Latches.

- Data dictionary views (DBA_, ALL_, USER_) for tracking Oracle
   objects.

            DICT - contains dictionary views description

            V$FIXED_TABLE  - description of V$ views

            ALL_, DBA_, USER_OBJECTS

  - Tables
             ALL_, DBA_, USER_TABLES

  - Clusters

            ALL_, DBA_, USER_CLUSTERS

  - Rollback Segments

            ALL_, DBA_, USER_ROLLBACK_SEGS, V$ROLLNAME,
            V$ROLLSTAT

  - Tablespaces

    DBA_TABLESPACES

  - Segments

    DBA_SEGMENTS

  - Extents

    DBA_EXTENTS

  - Files

    DBA_DATA_FILES, V$FILESTAT,
    V$DATAFILE

  - Triggers

    ALL_, DBA_, USER_TRIGGERS

      Long column called Trigger_Body holds the source code for
      triggers

  - Mutating and constraining tables (Application Developer's
    Guide, chapter 8)

    - A mutating table is a table that is currently being modified
      by an UPDATE, DELETE or INSERT statement, or a table that
      might need to be updated by the effects of a declarative
      DELETE CASCADE referential integrity action.

    - A constraining table is a table that a triggering statement
      might need to read either directly, for a SQL statement, or
      indirectly, for a declarative referential integrity
      constraint.  A table is mutating or constraining only to
      the session that issued the statement in progress.

    A table is NOT mutating or constraining  for STATEMENT
    triggers, but for ROW triggers there are TWO important
    restrictions:

       - the statements of a row trigger CAN NOT READ or MODIFY
         a mutating table of a TRIGGERING statement

       - the statements of a row trigger CAN NOT change the
         PRIMARY, FOREIGN, or UNIQUE key columns of a
         constraining table of the TRIGGERING statement.
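
    A sketch of a row trigger that violates the first restriction
    (table and trigger names are hypothetical):

      create trigger EMP_CNT
      before insert on EMP
      for each row
      declare
        N number;
      begin
        select count(*) into N from EMP;  -- reads the mutating table
      end;
      /
      -- insert into EMP ... now fails with
      -- ORA-04091: table ... is mutating, trigger/function may
      -- not see it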

 - Procedures, functions and packages:

      (Oracle7 Application Developer's Guide, chapter 9)

    ALL_, DBA_, USER_SOURCE - source code (Not USER_sourceS!)

    ALL_, DBA_, USER_OBJECTS - object (procedure, function,
                 package). Status is VALID/INVALID

    ALL_, DBA_, USER_DEPENDENCIES - info about DIRECT
                 dependencies among objects.

    Script UTLDTREE.SQL - gives dependencies among schema
          objects.  Creates DEPTREE_TEMPTAB (Object_id number,
                               referenced_object_id number,
                               Nest_level number,
                               Seq# number);
          When connected Internally, additional dependency info is
          gathered for dependent CURSORS (shared SQL areas).

    Procedure DEPTREE_FILL (Object_Type char, Object_owner char,
               Object_name char) - clears and creates
               dependencies in DEPTREE_TEMPTAB table in user
               schema. All objects that recursively reference the
               specified object are listed

    DEPTREE view - lists dependency information in the
               DEPTREE_TEMPTAB table. Parent object
               (Nested_level=0) and dependent (Nested_Level>0)
               are listed

    IDEPTREE view - the same as DEPTREE but output is in
                    graphical format (dependent objects are
                    indented)

    DEPTREE_SEQ - sequence for uniquely identifying sets of
                  dependency information.

    V$SQLAREA - library cache

    V$SGASTAT - reports the components and sizes of SGA

    DBA_OBJECT_SIZE - allows you to estimate packages'/procedures'
                      space requirements

    The source code is stored in the varchar2 column Text. Each
    row stores a single line of source code.

       Select Text
       from ALL_SOURCE
       where Name = upper('<NAME_OF_PROCEDURE>')
       order by Line

     If there were errors during compilation, check the
     DBA_/USER_ERRORS views (order by Line) or use the
     SHOW ERRORS command.

    ALTER VIEW <view> COMPILE;

    ALTER PROCEDURE <proc> COMPILE;

    ALTER FUNCTION <fn> COMPILE;

    ALTER PACKAGE <pack> COMPILE BODY;

    ALTER PACKAGE <pack> COMPILE PACKAGE;

    Example:

      Create or replace function FUNC1 (I in number,
            J out number, C varchar2 /* IN assumed */,
            D in out varchar2) return number as
            W varchar2(30);
            begin
             . . .
             . . .
            end;

  - Declarative integrity constraints

      ALL_, USER_, DBA_CONSTRAINTS, DBA_CONS_COLUMNS

  - Grants

      (Plenty of them??????)

  - Privileges

      (Plenty of them??????)

  - Roles

      (Plenty of them??????)

  - Auditing

      (Plenty of them??????)

      See: Unobtrusive Audit Trail, PH95p#8

              3. Server Concepts and Administering
                 ---------------------------------

                3.1 Oracle7 Architecture and tuning
                    -------------------------------

- Oracle database.

  - Files: database, redo log (2 minimum), control,
    INIT.ORA.  Mirroring:  redo log & control. README files.

    Always allocate tablespaces' physical datafiles in a STANDARD
    size (i.e. all the files in the database should have the same
    size). Files of equal size are easy to manage (to move between
    physical disk drives).

    All data files should be placed in the SAME directory on each
    disk drive.

    Trace files and the alert file are in the BACKGROUND_DUMP_DEST
    (usually ...\rdbms\trace) directory. The alert file is a
    chronological log of messages and errors. The alert file should
    be periodically checked and removed.

    Trace files are created by Oracle in case of error. They can
    be formatted with appropriate utility and sent to Oracle
    technical support for analysis.

    README files (README.FIRST, README.DOC, readmeunix.doc) are
    useful for the administrator. They should be carefully read
    before installing.

  - Compatibility notes. COMPATIBLE INIT.ORA parameter.

    If Oracle 7.1.3 (and possibly higher) is started up without
    the COMPATIBLE parameter in INIT.ORA it works as 7.0, that is,
    some 7.1 features such as read-only tablespaces CANNOT be used.
    To enable these features the

        COMPATIBLE = 7.1

    parameter should be specified in INIT.ORA.

  - INIT.ORA syntax:

        "\" - continuation character at end of line.

        "#" = comment at beginning of line

        Double quotes " can surround a parameter value
            ("param_value"). Useful if the parameter value
            contains special characters.

        Several parameters delimited by SPACE can be placed on
        a single line, for example:

          PROCESSES = 100 LICENSE_MAX_USERS = 5 OPEN_CURSORS = 10

     How to keep track of INIT.ORA parameters:

     1.  Server administrator's Guide (Appendix A), 1993
     2.  Oracle7 Server Documentation Addendum, 1994
     3.  README.DOC file on your distribution kit.

  - Memory/disk space measurement units in Oracle (be CAREFUL:
    there are NO UNIFORM measurement conventions):

       - megabytes M

       - kilobytes K

       - bytes

       - Oracle blocks. Block=DB_BLOCK_SIZE bytes INIT.ORA

       - operating system blocks (size of trace files ...)

  Related topics:

  - The location of trace files for server (user) processes is
    determined by the USER_DUMP_DEST INIT.ORA parameter. Tracing can
    be set up at the database level (SQL_TRACE=TRUE INIT.ORA par.) or
    at the session level (ALTER SESSION SET SQL_TRACE=TRUE).

  - MAX_DUMP_FILE_SIZE = 500 (OS blocks by default)
    limits the size of ANY trace file, preventing arbitrary
    growth of ALERT.LOG

- Oracle Instance (SGA + background processes). One instance -
  one database (exception - parallel server).
  SGA contains:

      - buffer pools for database blocks (Database Buffer Cache).
        Each buffer holds a single Oracle data block.

      - redo log buffer containing redo log entries,

      - shared pool for SQL statements and dictionary cache
        entries. The size of the shared pool is determined by the
        SHARED_POOL_SIZE (measured in BYTES) init.ora parameter
        (the only such parameter).  The shared pool consists of:

           - library cache holds all SHARED SQL and PL/SQL
             statements. It contains the private and shared SQL
             and PL/SQL areas (context areas for SELECT
             statements).  SQL statements remain in the library
             cache until aged out by an LRU algorithm.

             Library cache consists of shared part and private
             part.  Shared part for each statement contains:

               - Parse tree: A representation of the results of
                  parsing a SQL statement;

               - Execution plan: A roadmap Oracle builds
                  containing the plan of how a statement will be
                  run. It is written after each statement is
                  optimized. Contained in the PGA for single-threaded
                  and in the SGA for multi-threaded servers.

          Private part contains:

               - The persistent portion that occupies space in
                the SGA for the life of every cursor associated
                with SQL statement;

               - The run-time portion that is acquired when a SQL
                statement executes, and is released when the
                statement completes.

        M.Corey in PH95p#5v1 recommends the following:

                +--------------------------------+
                !  Computer     SHARED_POOL_SIZE !
                !  Memory                        !
                +--------------------------------+
                !  16M             4000000       !
                !  32M            12000000       !
                !  64M            30000000       !
                !  128M           60000000       !
                +--------------------------------+

       Rule of thumb: Never allocate more than 60% of your memory
       to SHARED_POOL_SIZE

        There are two additional parameters beginning with 7.1.5:

            SHARED_POOL_RESERVED_SIZE - if NON-ZERO (max is
               50% of SHARED_POOL_SIZE) reserves a
               contiguous portion of the shared pool for PL/SQL
               intensive operations (e.g. compilation of triggers
               and procedures).

            SHARED_POOL_RESERVED_MIN_ALLOC - the minimal size of
               an object which can be allocated from the reserved
               pool.

           - data dictionary cache contains reference info about
             the database, its structures and its users.

      - Background processes:

  - LGWR and DBWR, SMON, PMON, ARCH ....

  - Naming conventions for processes: ORA_<sid>_DBWR,
    ORA_<sid>_LGWR, ...

- User and server processes. A server process parses and executes
  SQL, reads data blocks into the database buffer cache in the SGA,
  and returns results to the user process. Two task and single (one)
  task. Single task - user and server processes are combined into a
  single operating system process.

  Killing session:

  - Sessions. ALTER SYSTEM KILL SESSION '<sid>,<serial#>'
    If an active session cannot be interrupted (due to network
    I/O or rolling back a transaction) the session can't be
    killed until the operation completes. If the session can't be
    killed in 60 seconds it is marked as "KILLED" in V$SESSION.

    Sometimes terms "server process" and "user process" are
    referred as the same things in Oracle documentation, but it's
    true with seldom used one task architecture only!
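
    The <sid> and <serial#> pair for the KILL SESSION command can be
    looked up in V$SESSION; for example (username SCOTT is
    hypothetical):

      select Sid, Serial#, Status from V$SESSION
      where Username = 'SCOTT';

      alter system kill session '<sid>,<serial#>';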

- Processing SQL statements.  (chapter 4 of A.D.G.)

  Parse phase:

      - check whether the statement is already in the shared pool,

      - checks syntax (parsing), compiles to a parse tree (P-code)
        and stores it into a shared SQL area (library cache).  This
        step also includes querying the data dictionary for
        object resolution, security privileges and the most
        effective search path.  The shared pool of
        SHARED_POOL_SIZE (INIT.ORA) bytes includes the library
        cache.  Also a memory structure called a cursor is
        created.  The cursor contains the record of statement
        status information.

  Execute phase:

      - applies parse tree to data buffers;

      - performs physical or logical reads/writes if a sort is
        required due to count(), group by, DISTINCT ....;

  Fetch phase:

      - performs physical or logical reads if required;

      - retrieves rows of data for a SELECT statement;

      Notes:

        SELECT FOR UPDATE does not return data to the user during
        the execute phase, but visits the blocks to take out locks.
        The number of fetches is determined by the number of rows
        satisfying the query conditions and the array size
        specified in the tool being used.

- Shared SQL area and the shared pool. Shared SQL area is part of
  the shared pool in the SGA.

    SQL area content:

      - shared portion - information used to execute specific SQL
        statements:

                   - text of the SQL or PL/SQL statement,

                   - parsed form (P-code) of the SQL or PL/SQL
                     statement,

                   - execution plan for SQL and PL/SQL statement.

      - private portion (???)

    SQL statements are identical when their text strings
    (including case, white spaces and comments), referenced
    objects, types and names of variables are identical.
    The rest of the shared pool content:

      - data dictionary cache containing rows of data dictionary
        information,

      - cursor state (bind variables) - ONLY for multi-threaded
        connections. For single threaded server or SQL*Net V1 this
        info is contained in Program Global Area (PGA).

     SHARED_POOL_SIZE init.ora parameter - number of bytes in the
     shared pool. Memory in the SHARED POOL is allocated
     automatically by Oracle. It is NOT POSSIBLE to allocate
     memory to the SQL area separately from the data dictionary
     cache!

     The default SHARED_POOL_SIZE supplied in the Oracle
     installation INIT.ORA file should be increased, sometimes
     BEFORE installation. Set it to TWICE the suggested default.

     Monitoring of shared pool:  UTLBSTAT.SQL/UTLESTAT.SQL
          statistics report ([LON94], chapter 6).

          The DBMS_SHARED_POOL package helps to monitor the
          shared pool and to KEEP (prevent from aging out)
          some shared pool objects in memory (Oracle Magazine
          May/June 1995, pp. 65-70, paper by Mike Ault)

  Related topics:

     - CURSOR_SPACE_FOR_TIME = FALSE (by default). (see below)

  - SESSION_CACHED_CURSORS = n - 7.1 parameter.

    Oracle uses the shared SQL area to determine if MORE THAN
    THREE parse requests have been issued on a given statement.
    If so, Oracle assumes the session cursor associated with the
    statement should be cached and moves the cursor into the
    SESSION CURSOR CACHE. Subsequent requests to parse that sql
    statement by the same session will then find the cursor in
    the session cursor cache.

    To enable caching of session cursors, you must set the
    INIT.ORA parameter SESSION_CACHED_CURSORS, which specifies the
    maximum number of session cursors kept in the cache. An LRU
    algorithm ages out entries in the session cursor cache to
    make room for new entries when needed.

    The value can also be changed with ALTER SESSION SET
    SESSION_CACHED_CURSORS = n.

    To determine whether the session cache is sufficiently large
    for your instance, you can examine the statistic
    "session cursor cache hits" in the V$SESSTAT view. This
    statistic counts the number of times a parse call found a
    cursor in the session cache. If this statistic is a
    relatively low percentage of the total parse call count for
    the session, you should consider setting
    SESSION_CACHED_CURSORS to a larger value. (Oracle7 Server
    Documentation Addendum pp. 9-10 - 9-11)
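
    A sketch of such a check, joining V$SESSTAT to V$STATNAME
    (<sid> is a placeholder; statistic names may vary slightly
    between versions):

      select N.Name, S.Value
      from V$SESSTAT S, V$STATNAME N
      where S.Statistic# = N.Statistic#
        and N.Name = 'session cursor cache hits'
        and S.Sid = <sid>;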

- Reading data files.

  DB_FILE_MULTIBLOCK_READ_COUNT -
    maximum number of blocks read in one I/O during a sequential
    table scan. Value of this parameter is used by cost based
    optimizer to build access paths.

- A Program Global Area (PGA) is allocated for and contains info
  about each server (user) process. It is allocated when a user
  connects to Oracle and a session is established.
  The PGA consists of stack space (holds session variables and
  arrays) and user session data.

      - PGA is writeable and non-shared,

      - in case of multi-threaded connections all the bind
        variable are in the shared pool.

  Size:

      The PGA's size is OS specific and not dynamic. The PGA is
      allocated at connect time. ???What about variables and arrays
      in the stack??? The following initialization parameters
      affect the size of PGAs: OPEN_LINKS, DB_FILES, LOG_FILES.

- Sort areas - portions of memory in which Oracle sorts data.
  A sort area exists in the memory of a server (user) process.
  A sort area may grow but can't exceed the limit given in the
  INIT.ORA parameter

     SORT_AREA_SIZE=n (bytes).

  During a sort Oracle may perform some tasks that do not
  involve referencing data in the sort area. In such cases Oracle
  may decrease the size of the sort area by writing some data to
  a temporary segment on disk and then deallocating the portion
  of the sort area that contained that data. Such deallocation may
  occur, for example, when Oracle returns control to the
  application after finishing SELECT statement processing. In
  this case Oracle will retain in memory at least:

        SORT_AREA_RETAINED_SIZE=n  (bytes, INIT.ORA par.)

  for subsequent processing. Min. value = 0, max. =
  SORT_AREA_SIZE. If the amount of data does not fit into a sort
  area, then the data is divided into smaller pieces. Each piece
  is then sorted individually and is called a "RUN". After sorting
  all the RUNs Oracle merges them to produce the result.

  Oracle7 allocates memory in the sort area in 8k
  increments. Memory can be reused after the LAST ROW
  is fetched (a good algorithm).  SORT_AREA_RETAINED_SIZE should be
  a very small number (PH95p#31).

  Each time a SQL statement needs memory for sorting data,
  SORT_AREA_SIZE bytes are allocated. After that, memory can be
  deallocated down to SORT_AREA_RETAINED_SIZE and allocated again
  up to SORT_AREA_SIZE. After closing the cursor all the sort area
  memory is deallocated to 0 bytes.

  Temporary segments - working space.  Sorting is needed for
  CREATE INDEX, SELECT ... ORDER BY, SELECT DISTINCT, SELECT ...
  GROUP BY, SELECT ... UNION, SELECT ... INTERSECT, SELECT ...
  MINUS, unindexed joins, and certain correlated subqueries.

  Script to determine sort activity in memory and to disk
  (applications should have been running for TWO or THREE working
  days before):

    >>> Begin of script
     select Name, Value from V$SYSSTAT where name like '%sort%';
     rem  In memory sorts should be greater than disk sorts.
     rem  If more than 25% of sort requests require disk
     rem  space, consider increasing SORT_AREA_SIZE
    >>> End of script

- Database buffer cache. Shared by all processes.
  Block size - DB_BLOCK_SIZE=n (bytes) at database creation time.
  Buffer cache size: DB_BLOCK_BUFFERS=m (Oracle blocks)

  Two lists: dirty list and LRU list.

  LRU list content:

      - free buffers,

      - pinned buffers (currently being accessed),

      - dirty buffers.

  When a server process needs to read a block from disk into
  cache it:

      - searches LRU list,

      - looks for a free buffer and

      - moves dirty buffers to the dirty list

  The server process stops searching when it finds a free buffer
  or it has searched a specified number (DB_BLOCK_MAX_SCAN_CNT) of
  buffers without finding a free buffer.
  When the server process finds a free buffer it moves that buffer
  to the MRU end of the LRU list. If the process scans the
  specified number of buffers without finding a free buffer, it
  then signals DBWR to write dirty buffers to disk.
  DB_BLOCK_BUFFERS - size of the database buffer cache in BLOCKS.

  Related topics:

     DB_BLOCK_LRU_STATISTICS = FALSE(def)/TRUE

     DB_BLOCK_LRU_EXTENDED_STATISTICS = 0..n - disables or
       enables compilation of statistics in the X$KCBRBH table
       during tuning memory allocation (see below and Server
       Adm.Guide, chapter 21)

     V$PERFORMANCE view - different aspects of performance,

              for example, detecting of block swapping.
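
     For example, the classic buffer cache hit ratio can be
     computed from V$SYSSTAT:

       select 1 - phy.Value / (cur.Value + con.Value)
              "Cache hit ratio"
       from V$SYSSTAT cur, V$SYSSTAT con, V$SYSSTAT phy
       where cur.Name = 'db block gets'
         and con.Name = 'consistent gets'
         and phy.Name = 'physical reads';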

- Rollback (undo) segments. Data consistency through
  multi-versioning.  Update operations.

  At least one per instance.

  Logical record in undo segment contains:

      - transaction ID including the SCN (system change number).
        It is assigned as the next sequential number when the
        transaction is committed;

      - file ID;
      - block ID;
      - row number;
      - column number;
      - old data bytes.

  Because updates to undo segment blocks occur in SGA, they are
  protected by entries in the redo log. These entries ensure that
  any uncommitted transactions will be rolled back during INSTANCE
  RECOVERY.

  Update operation and rollback segments:

      1. acquire database blocks into database buffer cache;

      2. acquire rollback blocks into database buffer cache;

      3. place exclusive row locks on rows that are about to
         change;

      4. save rollback data into a rollback segment block;

      5. apply changes to the data blocks.

  Commit makes rollback data inactive. Rollback returns data from
  the rollback segment block to the data block and clears the
  rollback data. Active and inactive rollback segment entries.
  Statement level consistency and SET TRANSACTION READ ONLY.
  Readers don't block writers and writers don't block readers.
  Overwriting of inactive rollback data.  Oracle will
  guarantee that users get a read consistent view of data,
  but NOT THAT THE QUERY WILL COMPLETE.  (ORA-1555: snapshot
  too old: rollback segment too small).
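
  A read-only transaction sees a snapshot of the data as of its
  start; a minimal sketch:

      set transaction read only;
      select ...;   -- all queries see the same snapshot
      select ...;
      commit;       -- ends the read-only transaction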

  Rollback segment space management. INITIAL, NEXT, MINEXTENTS
  and MAXEXTENTS. OPTIMAL storage clause. Shrinking of rollback
  segment:

      "If a transaction needs to continue writing rollback
      information from one extent to another extent in the
      rollback segment, ORACLE compares the current size
      of the rollback segment to the segment's optimal
      size. If the rollback segment is larger then its
      optimal size and the extents IMMEDIATELY FOLLOWING
      the extent just filled are inactive, ORACLE
      deallocates consecutive non-active extents from the
      rollback segment until the total size of the
      rollback segment is equal to or as close to but not
      less then its optimal size." (System Concepts Manual)

  Recommended number of rollback segments. Who knows...

  3 types of rollback segments:

      - private (ROLLBACK_SEGMENTS = (rbs1, rbs2 ...) INIT.ORA
        parameter);

      - public (an instance allocates these segments at
        startup time (System Concepts)). The instance attempts to
        acquire

        CEIL(TRANSACTIONS/TRANSACTIONS_PER_ROLLBACK_SEGMENT)

        rollback segments. Oracle Parallel Server can have ONLY
        public segments.

        TRANSACTIONS_PER_ROLLBACK_SEGMENT does NOT actually limit
        the number of transactions that can use a rollback segment;

      - deferred (contains transaction information for tablespaces
        that go OFFLINE). When a tablespace goes offline so that
        transactions cannot be rolled back immediately, a deferred
        rollback segment is created in the SYSTEM (!) tablespace.
        When the tablespace comes back online, deferred RBS data
        are applied to it and this segment disappears.

  A transaction can be automatically assigned to a rollback
  segment based upon LEAST USED criteria of all rollback
  segments.

  Discrete transactions (DISCRETE_TRANSACTIONS_ENABLED=TRUE
  init.ora. Appl. develop. guide, appendix A) DON'T use rollback
  segments.

  Managing rollback segments:

      - It is not possible to create additional objects in a
        non-SYSTEM tablespace while having only the SYSTEM
        rollback segment. Therefore, before creating the first
        non-SYSTEM tablespace you should create an additional
        rollback segment in the SYSTEM tablespace, create the
        tablespaces (including the tablespace for rollback
        segments), create the needed rollback segments and then
        drop the previously created additional rollback segment
        in the SYSTEM tablespace.

      - SET TRANSACTION USE ROLLBACK SEGMENT rbs;

      - alter rollback segment RBS1 offline;
        drop [public] rollback segment RBS1;
        create [public] rollback segment RBS1 tablespace RBS_T
               storage (initial 200k next 200k minextents 20
               optimal 400k);
        alter rollback segment RBS1 online;

      - ROLLBACK_SEGMENTS = (RBS1, ...) INIT.ORA parameter brings
        private rollback segments online at startup time.
        (!)Important: a heavy transaction-based system should
              have at least two tablespaces for rollback
              segments.  The order of rollback segments in
              ROLLBACK_SEGMENTS INIT.ORA parameter should ensure
              toggling between tablespaces.  E.g. if R1, R2, R3
              reside in one tablespace and R4, R5, R6 - in
              another one then INIT.ORA parameter should be:
              ROLLBACK_SEGMENTS= (R1,R4,R2,R5,R3,R6)
                        rather than
              ROLLBACK_SEGMENTS= (R1,R2,R3,R4,R5,R6)

      - You should always use PRIVATE rollback segments rather
        than PUBLIC ones because they are easier to manage.

  Monitoring rollback segments :

      - select Extents, Blocks from DBA_SEGMENTS
        where Segment_Type = 'ROLLBACK' and Segment_Name = ...
        Gives the number of extents and the actual number of
        blocks in a rollback segment.

      - select N.Name, S.Optsize from V$ROLLNAME N, V$ROLLSTAT S
        where N.Usn = S.Usn and N.Name='...'
        Optimal size monitoring. USN - Undo (Rollback) Segment
        Number.

      - Table DBA_ROLLBACK_SEGS (Tablespace_Name, Initial_Extent,
        Next_Extent, Min_Extents, Status, Instance_Num) for
        monitoring current status.

      - Space really allocated on rollback segments:
        select Segment_Name, Bytes, Extents from
        SYS.DBA_SEGMENTS where Segment_Type = 'ROLLBACK'
        or Segment_Type = 'DEFERRED ROLLBACK';

      - monitoring dynamic extensions and shrinking:
          dynamic extensions and shrinks during time interval:
          UTLBSTAT.SQL and UTLESTAT.SQL, SQL*DBA, V$ROLLSTAT (see
          below.)
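
      - per-segment extend/shrink counters can be read from
        V$ROLLSTAT (a sketch; the Extends, Shrinks, Wraps and
        Aveshrink columns are as in the 7.x view - check your
        release):

      >>> Begin of script
          rem How often each online rollback segment has
          rem extended and shrunk since instance startup
          select N.Name, S.Extends, S.Shrinks, S.Wraps,
                 S.Aveshrink
          from V$ROLLNAME N, V$ROLLSTAT S
          where N.Usn = S.Usn
          order by N.Name;
      >>> End of script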

      - rollback segment distribution per transactions/users:

      >>> Begin of script
          col Id1 format 99999 hea 'RS|ID' Word_w
          col Rollback_Seg format a9 hea 'RS|NAME' Word_w
          col Oracle_pid format 99999 hea 'ORACLE|PID' Word_w
          col System_pid format a8 hea 'SYSTEM|PID' Word_w
          col Transaction format a8  hea 'Transac|tion' Word_w
          col Term        format a6  hea 'Term|inal' Word_w
          col Osusr       format a8  hea 'OS|User' Word_w
          col Usr       format a8  hea 'User' Word_w
          rem
          select trunc(L.Id1/65536) Id1, R.Name Rollback_seg,
                 P.Pid Oracle_pid,  P.Spid System_pid,
                 nvl (P.username, 'NO') Transaction,
                 P.Terminal Term, S.Username Usr, S.Osuser Osusr
          from V$LOCK L, V$PROCESS P, V$ROLLNAME R, V$SESSION S
          where L.Sid = P.Pid(+)
            and trunc(L.Id1(+)/65536) = R.Usn
            and L.Type(+) = 'TX'
            and L.Sid = S.Sid(+)
            and L.Lmode(+) = 6
          order by R.Name;
      >>> End of script

  Choosing optimal number and extent size of rollback segments
  ([LON94] DBA Handbook, chapter 7)

  Related topics:
   CLEANUP_ROLLBACK_ENTRIES INIT.ORA parameter - prevents long
   transactions from freezing out shorter ones.

   - Questions and answers

     Q. What does it mean when a rollback segment shows
        "NEEDS RECOVERY"?

     A. A rollback segment with status "NEEDS RECOVERY" has
        failed to roll back an uncommitted transaction in its
        transaction table. The most common reasons for failing
        to roll back a transaction are that the object is in an
        offline datafile or tablespace, or that the table or
        rollback segment is corrupt.

     Q: How does Oracle determine which rollback segment to use?
        Is it different between v6 and Oracle7?

     A: Oracle assigns rollback segments based on the following
        rules:

       1. always assign to the rbs which has the least number of
          active transactions;

       2. if two or more rbs have the same "least number of
          active transactions", then assign to the one which
          comes after the last one used. This ensures the undo
          is kept for a longer time.

       NOTE:
             if a transaction is outside the system tablespace,
             the system rollback segment is not counted in this
             round-robin method.

- DBWR process. Writes all changed buffers to data files.

  Uses an LRU algorithm to keep the most recently used blocks in
  memory.

  DBWR defers writes to database files for I/O optimization.

  DBWR will write dirty buffers to disk when:

      - the dirty list reaches a THRESHOLD LENGTH. The server
        process detects this and signals DBWR to write. The
        THRESHOLD LENGTH is defined to be ONE HALF of the value
        of the DB_BLOCK_WRITE_BATCH = n (blocks) INIT.ORA
        parameter (???I failed to find this parameter in the
        7.1.3 V$PARAMETER table???).  In this case a single
        MULTI-BLOCK write is performed.  DB_BLOCK_WRITE_BATCH is
        the number of blocks written in a multi-block write. If
        the dirty list does not have DB_BLOCK_WRITE_BATCH buffers
        when DBWR is signalled, DBWR searches the LRU list for
        additional dirty buffers;

      - a server process scans a specified number of buffers
        (DB_BLOCK_MAX_SCAN_CNT = n (blocks) INIT.ORA parameter)
        on the LRU list without finding a free buffer.
        A multi-block write is performed in this case also;

      - a time-out occurs (every 3 seconds). If DBWR has not been
        signalled to write during the specified time (timeout),
        it signals ITSELF to write. DBWR searches a specified
        (2*DB_BLOCK_MAX_SCAN_CNT) number of buffers on the LRU
        list and writes any dirty buffers found to disk. The
        next timeout - a new set of buffers. DBWR signals
        LGWR to flush the redo log buffer BEFORE it writes.
        The number of buffers searched by DBWR at each timeout is
        TWICE the value of DB_BLOCK_MAX_SCAN_CNT;

      - a checkpoint occurs. LGWR signals DBWR to write. LGWR
        specifies WHICH BUFFERS TO WRITE to disk.

   It is possible to run multiple DBWR processes. Rule: one DBWR
   process per Oracle data disk drive (DATABASE_WRITERS=n). It is
   recommended to run 2 DBWRs per disk drive used by
   each Oracle instance (PH95p#5v1).

   Related topics:

      - DB_BLOCK_CHECKPOINT_BATCH = DB_BLOCK_WRITE_BATCH/4 (def)
        Max number of blocks DBWR will write in one batch during
        checkpoints.

      - DB_FILE_SIMULTANEOUS_WRITES = 4..24
        If the OS supports more than one write per device.
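
      - A rough picture of DBWR activity can be read from
        V$SYSSTAT (a sketch; the statistic names are from 7.x
        and may differ between releases):

      >>> Begin of script
          rem DBWR write activity since instance startup
          select Name, Value from V$SYSSTAT
          where Name in ('DBWR timeouts', 'DBWR checkpoints',
                         'DBWR buffers scanned',
                         'free buffer waits');
      >>> End of script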

- Server configurations. Multi-threaded versus dedicated servers.

- Log transactions. LGWR and ARCH (archive redo log
  information) processes.

  Before data is written to the log file on disk, it is first
  recorded in the redo log buffer in the SGA.

  Redo log buffer contains:

      - records indicating what has changed in a row;

      - records indicating what has been recorded in a rollback
        segment;

      - records indicating that a transaction has been committed;

      - records indicating that a checkpoint has been performed;

      - records indicating that a distributed transaction has
        prepared.

- Redo log files and redo log buffer. LGWR process.

  The redo log buffer is circular - while LGWR writes one part
  of it to the redo log, server processes can write redo entries
  into another part.  Its length in BYTES is LOG_BUFFER
  (INIT.ORA).  Redo entries (see above) are used to re-apply
  changes made to the database when recovery is necessary.
  The ONLY time that the redo log file is READ is during instance
  or media RECOVERY. On determining the optimal value for
  LOG_BUFFER see section "Determine whether user processes wait
  for space in the redo log buffer" below.

  Update operation and the redo log buffer:

      1. read database blocks into the database buffer cache;

      2. place exclusive row locks on rows that are about to
         change;

      3. acquire a transaction entry IN THE DATA BLOCK and assign
         the transaction to it (space for the transaction entry
         should exist in the data block);

      4. read rollback blocks into the database buffer cache;

      5. save rollback data (rollback part of the transaction
         entry) into a rollback segment block;

      6. record the before image (re-formatted rollback data) in
         the redo log buffer;

      7. apply changes to the data block (the data part of the
         transaction entry);

      8. record after image (new data part of the transaction
         entry) in redo log buffer;

    Redo log entries from several transactions will be
    interleaved.

  LGWR writes redo log buffer entries to the redo log file when:

      - a commit occurs;

      - the redo log buffer is ONE THIRD full;

      - the DBWR needs to clean the buffer blocks for checkpoint;

      - a timeout occurs (every 3 seconds);

      - a distributed transaction prepares.
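
  Whether user processes ever have to wait for LGWR can be
  estimated from V$SYSSTAT (a sketch; the statistic name is from
  7.x):

      >>> Begin of script
          rem 'redo log space requests' counts how often a user
          rem process had to wait for space in the redo log
          rem buffer; it should stay close to 0, otherwise
          rem consider increasing LOG_BUFFER
          select Name, Value from V$SYSSTAT
          where Name = 'redo log space requests';
      >>> End of script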

  Notes:

      - there is only one redo log per instance;

      - commit is not confirmed until the transaction has been
        recorded in the redo log file;

      - concurrent commits may be piggybacked (a so-called group
        commit: all commit records that accumulate in the redo
        log buffer while a write to the redo log is in progress
        are written in one disk write) for better performance;

      - the redo log block size is equal to the OS block size.

      - the DBWR always signals the LGWR to flush the buffer
        before it writes blocks back to the database, to protect
        all changes;

      - the maximum possible number of redo log groups is
        determined by the MAXLOGFILES parameter used in CREATE
        DATABASE, the max. number of members per group - by
        MAXLOGMEMBERS in CREATE DATABASE. The LOG_FILES INIT.ORA
        parameter may limit the number of online redo logs;

      - to add a redo log in an environment without groups: ALTER
        DATABASE ADD LOGFILE ('lf1','lf2') SIZE nnnK;

        to add new group: ALTER DATABASE ADD LOGFILE GROUP 5
                          ('lf1','lf2') SIZE nnnK;

        to add member in group:  ALTER DATABASE ADD LOGFILE
                                 MEMBER 'lf3' TO GROUP 5;

        to rename all types of files: ALTER DATABASE RENAME FILE
        'f1','f2' TO 'nf1', 'nf2';

        to drop group: ALTER DATABASE DROP LOGFILE GROUP 5;

        to drop member:ALTER DATABASE DROP LOGFILE MEMBER 'lf3';

- Database commits. Commit operations. COMMIT features.

  Commit steps:

      - a user issues commit;

      - a commit record is placed in the redo log buffer;

      - LGWR acquires all latches on the redo log buffer, allowing
        current writes to complete;

      - LGWR flushes the redo log buffers to current log file
        using multi-block writes if possible;

      - the user is notified that the transaction has been
        committed;

   Commit Features:

      - LGWR permanently records transactions while allowing DBWR
        to defer writes and reduce I/O time;

      - piggybacks redo log records from multiple transactions
        requesting to commit AT THE SAME TIME into single write;

      - on average, less than one synchronous write is required
        per transaction (thanks to group commits);

      - the size of a transaction does not affect the amount of
        time needed for the actual commit operation;

      - the write is multi-block, if possible;

      - LGWR writes entire operating system blocks, which may not
        always be entirely full, to the log file;

- Redo log files. Redo log groups. Minimum 2 redo log files.
  If the redo log files are MIRRORED, the same redo log
  information is written to multiple online redo log files. Each
  member (file) of a group should be located on different disks to
  guard against failure. All members of a group are written to at
  the same time.

    Redo log files (or group's members) are written in a circular
    fashion.

    All members of a group contain the same information.
    Group members are updated simultaneously.

    Each group should contain the same number of members as other
    groups.

- Log switches. A log switch (Oracle switching from one redo
  log group to another) occurs when:

      - LGWR has filled one log file group;

      - forced by the DBA when the current redo log needs to be
        archived, dropped or moved to a new disk location (ALTER
        SYSTEM SWITCH LOGFILE command);

      - the database is shut down;

  Notes:

  - A checkpoint automatically occurs at a log switch.

  - At a log switch the current redo log is assigned a LOG
    SEQUENCE NUMBER that identifies the information stored in that
    redo log and is also used for synchronization. This number is
    also stored in the control file and the header block of each
    file.

  - Processing can continue as long as at least one member of a
    group is available. If a member is found to be damaged or
    unavailable, messages are written to the LGWR TRACE file and to
    the ALERT file.

  - The default installation creates two redo log groups with one
    file per log.
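
  - The current log groups, their sequence numbers and statuses
    can be listed from the V$LOG and V$LOGFILE views (a sketch
    against the 7.x view columns):

    >>> Begin of script
        rem Which group is CURRENT and which are INACTIVE
        rem or waiting to be archived
        select Group#, Sequence#, Bytes, Members, Status
        from V$LOG;
        rem Members (files) of each group
        select Group#, Member from V$LOGFILE;
    >>> End of script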

- Checkpoints.

  DBWR writes all dirty buffers from database buffer cache to
  disk, guaranteeing that all data blocks MODIFIED SINCE THE
  PREVIOUS CHECKPOINT are actually written to the database files
  on disk. The database files are made current as of a given time
  and the checkpoint is recorded in the control file(s).

  [COR95 p.134] says that a checkpoint tells Oracle to
  FLUSH(!!!) the SGA to ensure system integrity, so that after a
  checkpoint all the data acquired in the SGA by that moment
  (hot spots) are LOST; hence redo log files should be AS BIG AS
  POSSIBLE to minimize SGA flushing, although, if the system
  were to crash, it might then take the database a lot LONGER TO
  RECOVER. For God's sake, in 7.x this is NOT CORRECT. Neither a
  checkpoint nor a log switch causes SGA flushing! I checked it.

  Each checkpoint should occur only on a log switch. To achieve
  this set INIT.ORA LOG_CHECKPOINT_TIMEOUT = 0 and
  LOG_CHECKPOINT_INTERVAL to a size greater than the redo log
  file size.

  Checkpoint occurs:

      - at every log switch;

      - a specified number of seconds after the last database
        checkpoint  (LOG_CHECKPOINT_TIMEOUT INIT.ORA parameter);

      - when a predetermined number of redo log blocks have been
        written to disk since the last checkpoint
        (LOG_CHECKPOINT_INTERVAL measured in OS blocks INIT.ORA
        parameter);

      - at instance shutdown;

      - when forced by DBA with ALTER SYSTEM CHECKPOINT
        command (without a LOG SWITCH and hence faster than an
        ordinary checkpoint with log switching);

      - when a tablespace is taken offline (ALTER TABLESPACE
        OFFLINE command).

   Notes:

   - checkpoints DO NOT HALT ACTIVITY, NOR ARE CURRENT
     TRANSACTIONS AFFECTED;

   - During checkpoint and after log switch, LGWR will update the
     headers of database files and control files (SCN, Log Seq.
     Number) unless the checkpoint process (CKPT) has been
     started.

   - Checkpoints ensure that instance recovery during a warm
     restart at startup time need only apply changes made since
     the last checkpoint.

   - checkpoints allow an online redo log file to be reused by
     guaranteeing that all changes stored in the redo log file are
     also written to the appropriate datafile.

- CKPT process. OPTIONAL.

  Notes:

  - CKPT updates the headers of data and control files after a
    checkpoint has been completed, offloading this task from
    LGWR and improving performance for databases with many
    files.

  - The CHECKPOINT_PROCESS = TRUE INIT.ORA parameter forces
    Oracle to start CKPT at database startup. Corey in PH95p#5v.1
    recommends setting it to TRUE.

  - More frequent checkpoints will reduce the time necessary for
    recovering from instance failure, at the expense of
    PERFORMANCE.

  - If you decide to achieve maximal performance at the expense
    of additional disk storage and longer instance recovery, you
    should ensure that each checkpoint occurs only on a log
    switch.  To achieve this set INIT.ORA
    LOG_CHECKPOINT_TIMEOUT = 0 and LOG_CHECKPOINT_INTERVAL to a
    size greater than the redo log file size.

- Archiving. ARCHIVELOG and NOARCHIVELOG modes.
  The optional archiver process (ARCH) copies online redo log
  files to a designated storage device (tape or disk) once they
  become full, for possible media failure recovery.
  ARCH operates only when a log switch occurs and either
  automatic archiving is enabled or archiving is manually
  requested.  The log sequence number is a part of the archived
  redo log.

  Related topics:

   - LOG_ARCHIVE_BUFFERS=n
     LOG_ARCHIVE_BUFFER_SIZE=m in OS (redo log) blocks

   - LOG_ARCHIVE_DEST=<path>
       (Support of tape archiving depends on the O/S.)
        To override this use ARCHIVE LOG START 'filespec'
     LOG_ARCHIVE_FORMAT = <string> (%s - log sequence number,
                   %t - thread (instance) number).

   - LOG_ARCHIVE_START = TRUE/FALSE
     TRUE - automatic archiving, FALSE - manual.
     ARCHIVE LOG START (or STOP) overrides this parameter.
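
   - A typical sequence to switch a database into ARCHIVELOG
     mode with automatic archiving (a sketch to be run from
     sqldba; adapt to your environment):

     >>> Begin of script
         rem Database must be mounted but not open
         startup mount
         alter database ARCHIVELOG;
         rem Start automatic archiving for this instance
         archive log start
         alter database open;
         rem Verify the result
         archive log list
     >>> End of script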

   - Common problem: Archivelog destination full (see [COR95],
     chapter 6 pp. 158-160)

     If there is not enough room on disk to copy an online redo
     log to LOG_ARCHIVE_DEST, then an ORA-00257 error arises and
     it is not possible to proceed with normal work. The
     following reasons can cause the archive directory to
     suddenly fill up during normal operating conditions:

       - The previous night's backup routine, which is supposed
         to put archived redo logs out to tape and then erase
         them, for some reason aborted the last time it ran -
         the step that was supposed to erase them never ran.

       - An application  has deleted a large number of rows from
         a very large table, and the redo log files written by
         that transaction alone created a very large number of
         archived redo logs.

       - If the archive log destination drive shares space with
         other Oracle data files, you may have inadvertently
         drastically reduced the available space by adding a
         datafile to a tablespace that resides on that drive.

    This leads into the next few good rules:

       - you should monitor the available disk space on the
         directory that contains your archived redo logs. If its
         utilization climbs over 90%, it requires IMMEDIATE
         INTERVENTION;

       - dedicate enough space in your archived log destination
         to hold at least two full days of archived redo logs;

       - Archived redo logs should be backed up every day and
         deleted afterwards.  DO NOT RELY ON manual purging of
         the archived redo log destination.

    [COR95] tells a very good story on pp 159-160!

 - Transactions.

   - Transaction start:
      - at session start;
      - at end of previous transaction.

   - Transaction end:

       - transaction committed:

          - COMMIT SQL statement;

          - DDL statement (CREATE, DROP, ALTER, GRANT,...) that
            causes an implicit commit;

          - normal exit from user program with valid LOGOUT;

       - transaction aborted:

          - ROLLBACK SQL statement;

          - user-requested termination (with CONTROL+C for
            example);

          - abnormal exit from user program without logout from
            Oracle;

          - processor failure;

          - disk failure.

     By default, if a statement fails Oracle rolls back JUST
     THAT statement, rather than the entire transaction.

- Number of Database Files.

  The maximal number of datafiles that can be created is the
  lowest number specified by any of these three items:

  1. OS limits. If the operating system allows a maximum of n
     files per process then n - m Oracle datafiles can be
     created, where m is the sum of control files, redo log
     files, alert.log and other log files. For each OS the
     concurrent open file limit needs adjusting by means of the
     underlying OS. (See chapter 6 "Show Stoppers" pp. 139-142
     of [COR95]).

  2. Oracle limit MAXDATAFILES. This parameter is set at
     database creation time. Its value affects the size of the
     instance CONTROL FILE. It is possible to increase
     MAXDATAFILES without re-creating the database by rebuilding
     the control file.  To do this you should:

          alter database backup controlfile to trace;

          The SQL script is placed in the directory specified by
          the init.ora USER_DUMP_DEST parameter. Extract the
          CREATE CONTROLFILE statement from the trace file and
          change the needed values (see the script below, where
          sqldba comments are added to the edited control file
          statement extracted from the trace).

          Shut down the database and run the SQLDBA script from
          the trace:

          #  The following commands will create a new control
          #  file and use it to OPEN the database.
          #  No data other than log history will be lost.
          #  Additional logs may be required for media recovery
          #  of offline data files in case of (1) any of the
          #  datafiles are restored backups or (2) the last
          #  shutdown was not NORMAL or IMMEDIATE.
          #  Use this script only if the current versions of ALL
          #  ONLINE LOGS are available.
          startup NOMOUNT
          create CONTROLFILE reuse database <db_name>
                    NORESETLOGS ARCHIVELOG
                    MAXDATAFILES 30
                    MAXLOGFILES 20
                    MAXLOGMEMBERS 4
                    MAXLOGHISTORY 100
               logfile
                    group 1 (
                     'G1_logfilename1',
                     'G1_logfilename2',
                     'G1_logfilename3'
                     )  size  2M,
                    group 2 (
                     'G2_logfilename1',
                     'G2_logfilename2',
                     'G2_logfilename3'
                     )  size  2M,
               datafile
                  'data_filename1' size 30M,
                  'data_filename2' size 120M,
                  'data_filename3' size 30M,
                  'data_filename4' size 300M;
          # Recovery is required if (1) any of the datafiles are
          # restored backups or if (2) the last shutdown was not
          # NORMAL or IMMEDIATE
          recover DATABASE;
          #
          # All logs need archiving and a log switch is needed:
          alter system ARCHIVE LOG ALL;
          # Database can now be opened normally
          alter database OPEN;

  3. DB_FILES=n  init.ora
    The maximum number of database files that can be opened at
    run time for this database. Should be set to a value close
    to MAXDATAFILES. If you have n datafiles and attempt to add
    one more datafile you get an ORA-01547 error message.
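
    To see how close you are to this limit, compare the current
    number of datafiles with DB_FILES (a sketch):

    >>> Begin of script
        rem Current number of datafiles
        select count(*) "Datafiles" from SYS.DBA_DATA_FILES;
        rem Configured limit
        select Value from V$PARAMETER where Name = 'db_files';
    >>> End of script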

- Recreating the database.

    Learn how to create database and practice before you are
    forced to do so during real-life emergency.

- Savepoints.  Very convenient feature.

      ...

      SAVEPOINT savepoint_name

      ...

      ROLLBACK [TO savepoint_name]

  The maximum number of savepoints per transaction is determined
  by the SAVEPOINTS = n INIT.ORA parameter.
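
  A minimal illustration (the EMP and EMP_COPY tables are
  hypothetical examples):

      >>> Begin of script
          insert into EMP_COPY select * from EMP;
          savepoint after_copy;
          delete from EMP_COPY where Sal is null;
          rem Oops - undo only the delete, keep the insert
          rollback to after_copy;
          commit;
      >>> End of script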

- Process monitor (PMON) and system monitor (SMON) background
  processes.

   PMON:

      - cleans up abnormally terminated connections;

      - rolls back uncommitted transactions;

      - releases locks held by the terminated process and
        frees SGA resources allocated to the failed process;

      - detects deadlocks and automatically resolves them by
        rolling back a transaction.

   SMON:

      - performs automatic instance recovery;

      - maintains the control file (???);

      - reclaims temporary segment space;

      - performs space reorganization (makes one extent out
        of two or more physically contiguous free extents; they
        can be found in the FET$ data dictionary table);

   Important notes:

     - PMON, SMON, DBWR and LGWR are all mandatory processes for
       an Oracle instance to run.

     - PMON and SMON, unlike DBWR and LGWR, CAN'T be controlled
       by modifying INIT.ORA parameters. It is NOT POSSIBLE to
       determine when they will wake up.

- SNPn, RECO and LCKn background processes.

  - SNPn processes control the automatic refreshing of snapshots
    of remote tables. The number of SNP processes n is
    determined by the SNAPSHOT_REFRESH_PROCESSES=n parameter.

  - The recoverer process, RECO, resolves failures involving
    distributed transactions. It handles failures that have
    occurred in distributed transactions by continually trying
    to contact the in-doubt nodes.

  - Lock processes (LCKn) perform inter-instance locking in
    a parallel server system (with parallel server option)
    see PH95 p#159v2.

  SNPn and RECO will only be present if the distributed option is
  being used.

- Parallel query processing. See PH95p#160v2.

- Control file(s) - small binary file(s) that describe the
  structure of the database. The INIT.ORA parameter
  CONTROL_FILES=(...,...)  must list all the control files.
  Oracle automatically mirrors writes to all the listed control
  files.  Control file content and properties:

      - all necessary database files and log files are identified
        in the control file;

      - the name of the database is stored in the control file;

      - the control file IS REQUIRED to open and access the
        database. It must be available for writing whenever
        database is open;

      - synchronization information needed for recovery is stored
        inside the control file;

      - the controlfile's name is specified during CREATE
        DATABASE;

      - CREATE CONTROLFILE command should be issued when:

        a) all the current control files are lost (for
           subsequent recovery).  It's possible to create new
           control file based on info contained in remaining
           non-damaged files;

        b) when a change to some parameter specified during
           CREATE DATABASE is needed (for example the database
           name);

- Tuning memory allocation. (See above about SGA and PGA).

  - Before tuning Oracle consider the O/S characteristics:

      - Reduce paging (swapping a part of a process onto disk to
        make room for other processes) and swapping (swapping an
        entire process onto disk).

      - Fit the SGA into main memory.

        The SGA should be as large as possible. On the other
        hand, the SGA should NEVER TAKE MORE THAN 50% of
        available memory. The problem is how to determine when
        the SGA size is set too high. It is possible only to
        determine that the buffer cache (DB_BLOCK_BUFFERS value)
        is too high. See the appropriate script in section
        "Tuning of the buffer cache" below.

        - Typically, in a well tuned system SHARED_POOL_SIZE +
          DB_BLOCK_BUFFERS should occupy about 90% of the total
          SGA size.
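
        - The actual SGA component sizes can be checked with the
          SHOW SGA command in sqldba, or queried directly (a
          sketch; V$SGASTAT columns are as in 7.x):

          >>> Begin of script
              rem Total SGA breakdown
              select * from V$SGA;
              rem Detailed breakdown including shared pool
              rem and buffer cache components
              select Name, Bytes from V$SGASTAT;
          >>> End of script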

      - allocate enough memory to individual users

  - Network characteristics should be considered also.
    This is a very difficult job.

  - Tuning the size of redo log files.

    Each log switch causes a checkpoint. Hence redo logs should
    be very large.

    Each checkpoint should occur only on a log switch. To
    achieve this set INIT.ORA LOG_CHECKPOINT_TIMEOUT = 0 and
    LOG_CHECKPOINT_INTERVAL to a size greater than the redo log
    file size.

    Views that contain redo log info:

          V$LOG and V$LOGFILE

  - Tuning of the library cache (SHARED_POOL_SIZE).

    The goal is to PARSE ONCE and EXECUTE MANY TIMES.

      - Sharing SQL statements.

        Application tuning rule #1: there must be a
                     CHARACTER-FOR-CHARACTER match between the
                     statement being examined and one already in
                     the shared pool.

        Application tuning rule #2: the objects being referenced
                     must be exactly the same as those objects
                     in a statement that has passed the
                     comparison in Rule #1. There is one
                     exception. Let two users reference an
                     object O of a third user through private
                     synonyms both named O_SYN (e.g. select *
                     from O_SYN). Even though each user has its
                     own private synonym O_SYN that refers to
                     the SAME object O in the third user's
                     schema, these individual private synonyms
                     are database objects themselves, that is,
                     they are DIFFERENT objects, and Oracle does
                     NOT share the corresponding SQL statements
                     (select * from O_SYN) from the two users:
                     objects represented by private synonyms
                     with the same name owned by different users
                     are considered to be DIFFERENT!  Public
                     synonyms and qualified objects
                     USER_NAME.TABLE_NAME are considered to be
                     the same. From this point of view private
                     synonyms are very BAD, because such SQL
                     statements can NOT be shared between
                     processes!
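
        For instance, the following statements are all treated
        as DIFFERENT by the library cache, although they return
        the same rows (EMP is an arbitrary example table):

        >>> Begin of script
            select * from EMP;
            SELECT * FROM EMP;
            select * from emp;
            select  *  from EMP;
        >>> End of script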

        Note: if an object referenced by a SQL statement in the
              shared pool is modified, the statement is flagged
              as invalid. The next time a statement that is the
              same as the invalidated one is passed to Oracle,
              the old statement will be replaced by the new one,
              since the underlying object has been modified.

    Compare executions (Pins column) and cache misses (Reloads
    column) from V$LIBRARYCACHE table:

    >>> Begin of script
      set feedback off
      col R format 9999.9999 hea 'Ratio %'
      select sum(Pins) "Executions",
             sum(Reloads) "Cache misses while executing",
             sum(Reloads)/sum(Pins) * 100 R
      from V$LIBRARYCACHE;
      rem
      rem If the ratio is MORE THAN 1% enlarge the library cache
      rem (SHARED_POOL_SIZE init.ora).
    >>> End of script

  - Monitoring and keeping objects in library cache
    (SHARED_POOL_SIZE) with DBMS_SHARED_POOL package.

    The shared pool holds shared SQL and PL/SQL statements
    (referred to as the library cache), the data dictionary cache,
    and information on sessions against the database.

    The DBA can monitor the shared pool through CONNECT INTERNAL
    via sqldba, or by querying the V$SQLAREA dictionary view.

    The DBA may need to revisit the size of the shared pool on
    receiving an ORA-4031 error (out of shared memory when
    allocating NUM bytes), raised when Oracle fails to find more
    space in the shared pool.

    Oracle attempts to parse a statement and place it into
    the library cache. If there is not enough room in the cache,
    then Oracle makes room for it by removing an existing entry.
    A RELOAD (see tuning the library cache) means the removed
    entry was re-requested, hence such entries were parsed more
    than once.  RELOADs should never happen in a well tuned
    system.

    The DBMS_SHARED_POOL package provides you with additional
    means for managing the library cache. It is possible to
    force Oracle to hold some commonly used library cache
    entries (procedures, packages, views ...) in memory. Oracle
    can NOT remove such held entries from memory to make room
    for other entries.

    >>> Begin of script
      rem Monitoring
      rem
      rem   this procedure shows shared pool objects that
      rem   are larger than 10 kilobytes
      set serveroutput on size 4000
      execute sys.DBMS_SHARED_POOL.sizes(10);
    >>> End of script

    >>> Begin of script
      rem Keeping in memory
      rem
      rem   To retain package sys.STANDARD in memory:
      execute sys.DBMS_SHARED_POOL.keep('sys.STANDARD');
    >>> End of script

  - Improving performance when there are never library cache
    misses for a given session, via the CURSOR_SPACE_FOR_TIME
    init.ora parameter.

    If CURSOR_SPACE_FOR_TIME = TRUE then the following
    algorithm (requiring enough memory installed on your
    machine) is used:

          If a cursor in an application was opened and then
          closed, Oracle DOES NOT de-allocate the shared SQL
          areas for this cursor as long as AT LEAST ONE CURSOR
          OPENED by this application remains open. This prevents
          Oracle from re-checking the library cache area for
          identical statements for the given application and
          therefore saves time.

    If CURSOR_SPACE_FOR_TIME = FALSE then the SQL statements
    remain in the library cache until aged out by usual LRU
    algorithm.

    In other words, the CURSOR_SPACE_FOR_TIME init.ora parameter
    affects when a statement in shared SQL can be deallocated
    from the pool. When it is set to FALSE (the default), Oracle
    may deallocate space held by a statement in the shared pool
    even if application cursors using that statement are still
    open. If enough memory is available that the shared pool
    can be sized to hold all application cursors, you may
    consider setting this parameter to TRUE. A small amount of
    time is saved during statement execution when the parameter
    is set to TRUE, because Oracle does not have to search the
    pool to see if the statement is already there.
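
    As a sketch, the corresponding init.ora fragment (the
    SHARED_POOL_SIZE value is illustrative only; it must be
    large enough to hold all application cursors):

    >>> Begin of init.ora fragment
      # Keep shared SQL areas for closed cursors pinned while
      # the application still has any cursor open:
      CURSOR_SPACE_FOR_TIME = TRUE
      # Illustrative size only - size it for your workload:
      SHARED_POOL_SIZE = 9000000
    >>> End of init.ora fragment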

  - Tuning of the data dictionary cache (SHARED_POOL_SIZE).

     In Oracle version 6 there were DC_ parameters which allowed
     fine tuning of the data dictionary cache. Unfortunately, in
     Oracle7 there is ONLY ONE init.ora parameter -
     SHARED_POOL_SIZE.

     V$ROWCACHE table columns:

        Parameter - identifies a particular data dictionary item

        Gets      - total number of requests for information on
                   an item

        Getmisses - number of requests that result in cache
                   misses

         >>> Begin of script
          set feedback off
          select sum(Gets) "Data Dictionary Gets",
              sum(Getmisses) "Data Dict Cache get Misses",
              sum(Getmisses)/sum(Gets)*100 "Ratio %"
          from V$ROWCACHE;
          rem
          rem       NOTES:
          rem
          rem You should increase SHARED_POOL_SIZE if the ratio
          rem is greater than 10-15%
          rem
          rem You may decrease SHARED_POOL_SIZE if the ratio is
          rem considerably less than 10-15%
         >>> End of script

         Pay special attention to the DC_ parameters:

         >>> Begin of script
          set echo off feedback off termout on
          set verify off
          col Parameter format A30 heading "Cache|entry"
          col Pm format 990.00 heading "Percentage|miss"
          col Gets format 999,999,990 hea "Gets"
          col Getmisses format 999,999,990 hea "Misses"
          select unique Parameter,
                 Gets,    /* Read from memory */
                 Getmisses,  /* Read from disk */
                 Getmisses / (Gets+Getmisses)*100 Pm
          from V$ROWCACHE
          where Gets+ Getmisses <> 0;
          set echo on feedback on termout on
          rem
          rem The percentage for the entries Dc_table_grants,
          rem Dc_user_grants, Dc_users should be < 5% each.
          rem
          rem You should increase SHARED_POOL_SIZE if the
          rem percentage is greater than 5%
          rem
         >>> End of script

  - Tuning of the buffer cache (DB_BLOCK_BUFFERS init.ora).

    DB_BLOCK_BUFFERS should be set as high as possible, but
    keep in mind the possibility of swapping.

    - How to determine that DB_BLOCK_BUFFERS value is set too
      high and swapping or paging degrade the performance.

      >>> Begin of script

       rem  We can determine if database blocks are swapped out
       rem  by examining the status of the block headers via the
       rem  X$BH (block headers) table. Status "Free" means that
       rem  a block has never been used. There are two reasons
       rem  for the presence of "Free" blocks: (1) the number of
       rem  users in your system is too small and they cannot
       rem  use the pool fully, or (2) blocks marked "Free" are
       rem  swapped out of main memory. If (2) is the case,
       rem  reduce DB_BLOCK_BUFFERS.
       rem
       select decode (State, 0, 'Free',
                             1, 'Read and Modified',
                             2, 'Read and Not Modified',
                             3, 'Currently Being Read',
                                'Other' ),
               Count(*)
       from X$BH
       group by decode (State, 0, 'Free',
                             1, 'Read and Modified',
                             2, 'Read and Not Modified',
                             3, 'Currently Being Read',
                                'Other' )
       ;
       rem
       rem    The larger the number of blocks with status 'Free'
       rem    the more probability that these blocks are swapped
       rem    out of main memory. In case of substantial number
       rem    of 'Free' blocks decrease DB_BLOCK_BUFFERS value.
      >>> End of script

    -  Monitoring Hit Ratio of buffer pool (DB_BLOCK_BUFFERS)

        V$SYSSTAT table. Columns: Name (name of statistic), Value.

      Names of the statistics:

      "db block gets", "consistent gets" - sum of these
                   statistics is the total number of requests for
                   data, including requests satisfied by access
                   to buffers in memory.

      "physical reads" - total number of requests for data
                         resulting in access to data files on
                         disks.

       Example of the SQL*Plus monitoring script calculating hit
       ratio:
                                    physical reads
        Hit Ratio =  100% - ------------------------------- *100%
                            db block gets + consistent gets

       >>> Begin of script
       set echo off feedback off termout on
       set verify off
       col DBG for 999999999999 hea 'db block gets' new_value DBGV
       col CG for 999999999999 hea 'consistent gets' new_value CGV
       col PR for 999999999999 hea 'physical reads' new_value  PRV
       select Value DBG from V$SYSSTAT
                    where Name='db block gets';
       select Value CG from V$SYSSTAT
                    where Name='consistent gets';
       select Value PR from V$SYSSTAT
                    where Name='physical reads';
       select 100 - &PRV/(&DBGV+&CGV)*100 "Hit Ratio" from DUAL;
       set echo on feedback on termout on
       rem
       rem   NOTE:
       rem   If the hit ratio is less than 60-70% then increase
       rem   DB_BLOCK_BUFFERS init.ora parameter.
       >>> End of script

    - Estimating the performance gain that would result from
      changing the size of the buffer cache. Do NOT USE
      DB_BLOCK_LRU_STATISTICS during normal operations because of
      the negative impact on performance.

     +------------------------------------------------------+
     |  All the scripts below MUST be run from SYS account. |
     +------------------------------------------------------+

      - Estimating the performance gains for a Larger Buffer
        Cache:

           Set DB_BLOCK_LRU_EXTENDED_STATISTICS=n, where n is
           the number of buffers to add.
           Statistics are stored in X$KCBRBH with columns:

              Indx - one less than the number of buffers that
                     would potentially be added to the cache
                   Max(Indx) = DB_BLOCK_LRU_EXTENDED_STATISTICS-1

              Count - the number of additional cache hits that
                      would be obtained by adding additional
                      buffer number INDX+1 to the cache.

            - Determining the new hit ratio if we add
              &Buffers_To_Add buffers

                            physical reads - add cache hits
        Hit Ratio =  100% - ------------------------------- *100%
                            db block gets + consistent gets

              >>> Begin of script
                  set feedback off termout on echo off
                  set verify off
                  col DBG for 999999999999 hea 'db block gets'-
                           new_value DBGV
                  col CG for 999999999999 hea 'consistent gets'-
                              new_value CGV
                  col PR for 999999999999 hea 'physical reads'-
                              new_value  PRV
                  col ACH for 9999999999999 hea -
                       'Additional Cache Hits' new_value ACHV
                  accept Buffers_To_Add char -
                        prompt "Number of buffers to add: "
                  prompt
                  prompt Determining the new hit ratio -
                  in case of adding &Buffers_To_Add buffers
                  select Value DBG from V$SYSSTAT
                               where Name='db block gets';
                  select Value CG from V$SYSSTAT
                               where Name='consistent gets';
                  select Value PR from V$SYSSTAT
                               where Name='physical reads';
                  rem &Buffers_To_Add should be <=
                  rem      DB_BLOCK_LRU_EXTENDED_STATISTICS
                  select sum(Count) ACH from SYS.X$KCBRBH
                             where Indx < &Buffers_To_Add;
                  select &Buffers_to_Add "Adding Buffers",
                        100 - (&PRV-&ACHV)/(&DBGV+&CGV)*100
                        "New Hit Ratio %" from DUAL;
                  Undef buffers_to_add
                  set echo on feedback on termout on
              >>> End of script

              - Group the statistics from X$KCBRBH table in
                intervals of &Interval buffers:

               >>> Begin of script
               set feedback off termout on echo off
               set verify off
                rem Interval - is the interval of additional
                rem          buffers to be added to the cache
                rem Additional Buffer Cache Hits - add. hits for
                rem          this interval
                  accept Interval char -
                    prompt "Enter number of buffers in interval:"
                  prompt
                  prompt Additional hits per interval
                select &Interval*trunc(Indx/&&Interval)+1
                 || ' TO '
                 || &Interval*(trunc(Indx/&Interval)+1) "Interval",
                 sum(Count) "Additional Buffer Cache Hits"
                from SYS.X$KCBRBH
                group by trunc(Indx/&&Interval);
                undef Interval
                set echo on feedback on termout on
                rem  Read the output from the bottom up. Add the
                rem  sum of the previous values to the value of
                rem  the current row
               >>> End of script

      - Estimating the performance gains for a Smaller Buffer
        Cache:

           Set init.ora DB_BLOCK_LRU_STATISTICS = TRUE.
           Statistics are stored in X$KCBCBH table with columns:

              Indx - one less than the potential number of
                     buffers in the cache

              Count - the number of cache hits attributable
                      to buffer number Indx+1

           - Determining the new hit ratio if we decrease the
             size of the buffer cache up to &Reduced_size. In
             this case we will get "additional cache misses".
             The new hit ratio should be calculated as:

                            physical reads + add.cache misses
        Hit Ratio =  100% - ---------------------------------*100%
                            db block gets + consistent gets

              >>> Begin of script
                  set feedback off termout on echo off
                  set verify off
                  col DBG for 999999999999 hea 'db block gets'-
                           new_value DBGV
                  col CG for 999999999999 hea 'consistent gets'-
                              new_value CGV
                  col PR for 999999999999 hea 'physical reads'-
                              new_value  PRV
                  col ACM for 9999999999999 hea -
                       'Additional Cache Misses' new_value ACHV
                  accept Reduced_size char -
                        prompt "Enter new reduced buffer size: "
                  prompt
                  prompt Determining the new hit ratio -
                  in case of reducing cache to &Reduced_size
                  select Value DBG from V$SYSSTAT
                               where Name='db block gets';
                  select Value CG from V$SYSSTAT
                               where Name='consistent gets';
                  select Value PR from V$SYSSTAT
                               where Name='physical reads';
                  rem &Reduced_size should be <
                  rem      DB_BLOCK_BUFFERS
                  select sum(Count) ACM from SYS.X$KCBCBH
                             where Indx >= &Reduced_size;
                  select &Reduced_size "Reduced buffer size",
                        100 - (&PRV+&ACHV)/(&DBGV+&CGV)*100
                        "New Hit Ratio %" from DUAL;
                  set echo on feedback on termout on
                  >>> End of script

              - Group the statistics from X$KCBCBH table in
                intervals of &Interval buffers:

                >>> Begin of script
                rem Interval - is the interval of buffers to be
                rem   removed from cache
                rem Buffer cache misses - misses per interval
                  accept Interval char -
                    prompt "Enter number of buffers in interval:"
                  prompt
                  prompt Misses per reducing interval
                select &Interval*trunc(Indx/&&Interval)+1
                 || ' TO '
                 || &Interval*(trunc(Indx/&Interval)+1) "Interval",
                 sum(Count) "Buffer Cache Misses"
                from SYS.X$KCBCBH
                group by trunc(Indx/&&Interval);
                undef Interval
                >>> End of script

  - After re-sizing Oracle memory structures, re-evaluate the
    performance of the library, data dictionary and buffer
    caches.

      If memory for one structure has been reduced, consider
      increasing the memory allocated to another structure.
      Tune the operating system again to ensure paging and
      swapping have not become excessive.

- Tuning I/O - very important.

  Processes related to tuning I/O:

    - DBWR

    - LGWR

    - Server processes

  Issues affecting performance:

    - Disk contention

    - Number of I/O

    - Dynamic space management (new extents allocation)

   Note:

     Tuning memory generally minimizes the amount of I/O
     performed by the database. Make I/O more efficient by
     further tuning.

  - Reduce disk contention.

    Oracle I/O can be determined from statistics in the
    V$FILESTAT table with columns:

        . . . . . .

        PHYRDS  - number of reads from each database file.

        PHYWRTS - number of writes to each database file.

        . . . . . .

    >>> Begin of script
      rem Determining the number of reads and writes to each
      rem database file
      rem
           select Name, Phyrds, Phywrts
           from V$DATAFILE DF, V$FILESTAT FS
           where DF.File# = FS.File#;
    >>> End of script

    Use your operating system facilities to determine the
    number of reads and writes to EACH NON-DATABASE FILE.

    The total I/O for each disk is the total number of reads
    and writes to all files on the disk.

  - Distribute I/O (the following recommendations are valid if you
    have MORE THAN 3-5 disk drives!):

    - Separate data files and redo log files on different disks:

      - place frequently accessed data files on separate disks.

      - place each set of redo log files on a separate disk with
        little activity.

    - Separate (stripe) large table data on different disks:

      >>> Begin of script
          create tablespace STRIPEDTABLESPACE datafile
            '<file_on_disk_1>' size 500K,
            '<file_on_disk_2>' size 500K,
            '<file_on_disk_3>' size 500K,
            '<file_on_disk_4>' size 500K,
            '<file_on_disk_5>' size 500K;

          Better solution:

          create table STRIPEDTABLE (
             Col_1 number(5), Col_2 char(3), . . . )
             tablespace STRIPEDTABLESPACE storage
               (  initial 495k
                  minextents 1
                  pctincrease 0);

          alter table STRIPEDTABLE allocate extent
            (size 495K datafile '<file_on_disk_2>');

            . . .
          rem
          rem     NOTES:
          rem Each extent of the striped table STRIPEDTABLE
          rem should be stored in a separate datafile.
          rem
          rem Small overhead (5k in this case) should be added
          rem for file headers.
          rem
          rem As the table grows another datafile should be added.
          rem
      >>> End of script

    - Separate tables and indexes on different disks.

    - Reduce disk I/O not related to the Oracle server.

    NOTE:

      Any disks operating at or near FULL CAPACITY are potential
      sites for disk contention.

  - Allocate space in data blocks:

    - Size of Oracle block DB_BLOCK_SIZE init.ora parameter

    - Data block space is used when a row is inserted or updated

    - Data block space is freed when rows are deleted or updated

    - Migrated rows (which moved from one data block to another)
      or chained rows (which span more than one data block)
      DEGRADE performance.

      Collecting info about chained and migrating rows and
      attempting to make them "usual" rows:

      >>> Begin of script
       set echo off feedback off termout off
       set heading off verify off
       set pagesize 0 linesize 120 recsep off
       set tab on trimout on newpage 0
       rem
       rem The name of this file: CHKMIGR.SQL
       rem
       rem This script requires an argument <table_name>
       rem giving the name of the table to analyze
       rem You should replace <ora_rdbms> by real rdbms
       rem directory
       rem
       rem     The first argument on command line must be a
       rem     table name  ([USER.]TABLE). Wildcards (%,_) are
       rem     accepted. If USER portion of a table name is
       rem     not specified the current username is assumed
       rem
       rem              Note:
       rem       I used command HOST DEL to delete file from the
       rem       current directory.
       rem       On unix system you should use HOST rm command
       rem       instead
       rem
       rem  Let's parse the parameter against user name
       rem
            col Lf Newline
            col T format a30 hea 'Table Name(s)' New_value Table_name
            col U format a30 hea 'User Name' New_value User_name
            col N New_value L
            select instr ('&1','.') N from DUAL;
            spool CHKMIGR.TMP
            select
             decode (&L, 0,
             'select User U, Upper(''&1'') T from DUAL;',
             'select upper(substr (''&1'', 1, &L - 1)) U,
                     upper(substr (''&1'', &L + 1)) T
                     from DUAL;') Lf
             from DUAL;
            spool off
            start CHKMIGR.TMP
            host del CHKMIGR.TMP
       rem Create table CHAINED_ROWS to store collecting info:
       rem  start <ora_rdbms>:utlchain.sql:
            drop table  CHAINED_ROWS;
            create table CHAINED_ROWS (
              owner_name    varchar2(30),
              table_name    varchar2(30),
              cluster_name  varchar2(30),
              head_rowid    rowid,
              timestamp     date
            );
       rem Store info about chained rows into table CHAINED_ROWS:
         analyze table &1 list chained rows
            into CHAINED_ROWS;
       rem Get ROWIDs of chained and migrated rows:
         col C format 99999999 hea "Chained|rows" new_value CRC
         select count (Head_rowid) C from CHAINED_ROWS
         where Table_name = upper('&table_name');
       rem
       rem Let's check chained rows counter CRC against 0 and
       rem proceed if CRC > 0
       rem
           spool chained.tmp
           select
              'set termout on' Lf,
              '-' Lf,
              'prompt There are no chained/migrating rows' Lf,
              'prompt  in table &1' Lf,
              '-' Lf,
               'Exit' Lf
           from DUAL
           where &CRC = 0;
       rem
           spool off
           start chained.tmp
           host del chained.tmp
       rem
       rem      There are some chained rows, so we now should:
           set termout on
           prompt There are &CRC chained rows in table &1
           set termout off
       rem
       rem  1. save migrating rows from the table &table_name
       rem     into an intermediate table.
       rem
            drop table WORKING_CHAINED_ROWS;
            create table WORKING_CHAINED_ROWS as
            select * from &1
            where Rowid in (select Head_rowid from CHAINED_ROWS);
       rem
       rem
       rem  2. Delete all the migrating rows from the table
       rem     &Table_name (this table may be a master of other
       rem     table and we have to disable all the constraints
       rem     and triggers before that).
       rem
         rem   To disable and then enable all the disabled
         rem   referential constraints we dynamically create two
         rem   scripts:  DISRCONS.TMP and ENARCONS.TMP. The first
         rem   one contains "alter table DISABLE constraint
         rem   <cons_name>" for all the enabled constraints and
         rem   the second one contains "alter table ENABLE
         rem   constraint <cons_name>" for all the previously
         rem   disabled constraints.
         rem
               set termout off
               spool DISRCONS.TMP
               select unique
                       'alter table '|| Y.Table_Name
                    || ' DISABLE constraint '
                    || X.R_Constraint_Name || ';'
               from DBA_CONSTRAINTS X, DBA_CONS_COLUMNS Y
               where     X.R_constraint_name = Y.Constraint_name
                     and X.Owner like '&User_name'
                     and X.R_Owner = Y.Owner
                     and Y.Table_Name like '&Table_name'
                     and X.Constraint_Type = 'R'
                     and X.Status= 'ENABLED';
               spool off
         rem
               spool ENARCONS.TMP
               select unique
                       'alter table '|| Y.Table_Name
                    || ' ENABLE constraint '
                    || X.R_Constraint_Name || ';'
               from DBA_CONSTRAINTS X, DBA_CONS_COLUMNS Y
               where     X.R_constraint_name = Y.Constraint_name
                     and X.Owner like '&User_name'
                     and X.R_Owner = Y.Owner
                     and Y.Table_Name like '&Table_name'
                     and X.Constraint_Type = 'R'
                     and X.Status= 'ENABLED';
                    spool off
         rem
         rem   Disable referential constraints and delete the file
               start DISRCONS.TMP
               host del DISRCONS.TMP
         rem Disable all the triggers for the table:
              Alter table &1 DISABLE all triggers;
         rem   and delete rows from the table:
               delete  &1  where
                  ROWID in (select Head_rowid from CHAINED_ROWS);
         rem
       rem
       rem  3. Insert all the deleted migrating rows back into
       rem     the original table (INSERT should insert the rows
       rem     as usual non-migrating rows).
       rem
            insert into &1
            select * from WORKING_CHAINED_ROWS;
            commit;
       rem
       rem  4. Drop the intermediate table and re-enabling
       rem     constraints and triggers for the original table
       rem
            drop table WORKING_CHAINED_ROWS;
            start ENARCONS.TMP
            host del ENARCONS.TMP
       rem
            Alter table &1 ENABLE all triggers;
       rem
       rem  5. Delete the records from the CHAINED_ROWS table;
       rem
            truncate table CHAINED_ROWS;
       rem
       rem  6. Analyze the table against migrating or chained rows
       rem     again
       rem
            analyze table &1 list chained rows
                into CHAINED_ROWS;
            select count (Head_rowid) C from CHAINED_ROWS
            where Table_name = upper('&table_name');
            set termout off
            drop table CHAINED_ROWS;
            spool Fincrows.tmp
            select
             'set termout on' lf,
              '-' Lf,
             'prompt &CRC chained rows remain in table &1' lf,
             'prompt If migrating rows still exist then:' lf,
             'prompt  1. You may have very large PCTFREE.' lf,
             'prompt  .  and you can try to decrease PCTFREE' lf,
             'prompt  .  and  export/drop/re-create/import' lf,
             'prompt  2. You may need larger DB_BLOCK_SIZE' lf,
             'prompt  .        (a lot of work...)' lf,
             'set termout off' lf
            from DUAL
            where &CRC > 0;
            select
             'set termout on' lf,
              '-' Lf,
             'prompt Chained rows problem for &1 FIXED!!!' lf,
             'set termout off' lf
            from DUAL
            where &CRC = 0;
            spool off
            start Fincrows.tmp
            host del Fincrows.tmp
            exit;
      >>> End of script

    - select valid values for PCTFREE and PCTUSED

      - Choosing PCTFREE:

        - a low value for PCTFREE can improve the performance of
          full table scans, because there are fewer blocks to
          scan.  Choose a low value for tables that are only
          queried and never modified.

        - a high value for PCTFREE reduces the likelihood of
          migration.

      - Choosing PCTUSED:

        - A low value for PCTUSED reduces the likelihood of
          migration as blocks tend to have more free space to
          accept long updates

        - A high value for PCTUSED stores data in a more
          space-efficient manner, but it increases the likelihood
          of migrating rows. Blocks are added  to the free lists
          more frequently, which requires more processing time to
          manage the free lists.

          NOTE:
            To improve performance by allocating data block
            storage, it is necessary to know which database
            operations are performed MOST OFTEN on EACH table.
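
        As a sketch, the choices above expressed at table
        creation time (table names, columns and values are
        hypothetical):

        >>> Begin of script
          rem Read-mostly table: low PCTFREE packs rows
          rem densely, high PCTUSED keeps blocks well filled.
          create table LOOKUP_CODES (
            Code   number(4),
            Descr  char(40))
            pctfree 5 pctused 90;
          rem
          rem Heavily updated table: high PCTFREE leaves room
          rem for row growth and so reduces migration.
          create table ACCOUNTS (
            Acc_No  number(10),
            Note    char(200))
            pctfree 30 pctused 40;
        >>> End of script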

  - Avoid dynamic space management.

    Dynamic extension occurs when an object exceeds the space
    allocated in its initial extents. Oracle must then
    dynamically create more extents for that object.

    Dynamic extension:

      - causes Recursive Calls to be issued, which

      - reduce performance

     Situations which generate recursive Calls:

      - dynamic extension

      - misses on the data dictionary cache

      - firing of database triggers (for Oracle < 7.3)

      - execution of SQL statements within stored procedures,
        functions, packages and anonymous PL/SQL blocks.

      - enforcement of referential integrity constraints.

     Monitoring the number of recursive calls:

     >>> Begin of script
      select Name, Value
      from V$SYSSTAT
      where Name = 'recursive calls';
      rem
      rem  if recursive calls are caused by dynamic extension (to
      rem  determine this you should examine the NUMBER OF
      rem  EXTENTS in existing database tables) then allocate
      rem  larger extents.
     >>> End of script
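
    A sketch of allocating larger extents for a frequently
    extending table (the table name BIG_TAB and the sizes are
    hypothetical):

    >>> Begin of script
      rem Check how many extents existing table segments have:
      select Segment_Name, Extents, Max_Extents
      from DBA_SEGMENTS
      where Segment_Type = 'TABLE';
      rem
      rem Enlarge future extents of the hypothetical table
      rem BIG_TAB so dynamic extension happens less often:
      alter table BIG_TAB
        storage (next 1M pctincrease 0);
    >>> End of script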

  - Tune contention

    Contention occurs when multiple processes attempt to access
    the same resource simultaneously, and thus must wait to
    access the resource.

    Possible contention problems:

       - Contention for rollback segments

       - Contention for redo log buffer latches

       - Contention for multi-threaded server (MTS) processes

       - Process contention.

    -  Reduce Rollback Segment Contention

       Block contention statistics are in the V$WAITSTAT table.
       Column Class contains the names of the statistics:

         - 'system undo header' - number of waits for buffers
                   containing header blocks of SYSTEM rollback
                   segment.

         - 'system undo block' - number of waits for buffers
                   containing blocks other than header blocks of
                   the SYSTEM rollback segment.

        - 'undo header' - number of waits for buffers containing
                   header blocks of rollback segments other than
                   the SYSTEM rollback segment.

        - 'undo block'  - number of waits for buffers containing
                   blocks other than header blocks of rollback
                   segments other than the SYSTEM rollback
                   segment

    Determine contention with the following script (it should be
    run AFTER SUBSTANTIAL TIME SINCE STARTUP):

     >>> Begin of script
       set echo off verify off
       column TW format 99999999999999 heading 'Total Waits'-
            New_Value TWV
       column Class format a18 heading 'Statistic''s Name'
       column Count format 99999999999 hea 'Count'
       column PT format 99999.99 hea '% of tot'
       rem Get Total Waits into TWV variable:
       select sum(nvl(Value,0)) TW
       from V$SYSSTAT
       where Name in ('db block gets', 'consistent gets');
       rem
       select Class, Count, nvl(Count,0)/&TWV*100 PT
       from V$WAITSTAT
       where Class in ('system undo header',
                       'system undo block',
                       'undo header',
                       'undo block');
       set echo on
       rem
       rem   "% of total" should not be GREATER THAN 1% for ANY
       rem   CLASS. Otherwise you should INCREASE the number of
       rem   rollback segments.
       rem
     >>> End of script

  - Reduce Redo Log buffer contention.

   - Determine whether user processes wait for space in the redo
     log buffer by the following script:

     >>> Begin of script
       select Name, Value
       from V$SYSSTAT
       where Name = 'redo log space requests';

       rem The Value should be near 0. If this Value increments
       rem consistently, INCREASE THE SIZE OF LOG BUFFER BY
       rem INCREASING THE VALUE OF THE LOG_BUFFER INIT.ORA
       rem parameter.
     >>> End of script

  - Regulate access to the redo log buffer with latches.
      NORMALLY THIS IS NOT A PROBLEM and should not be tuned!

     -  Latches:

        Latches are the Oracle method of establishing process
        ownership of objects that the database needs to use.
        A latch is a very efficient internal Oracle lock.

         Redo Allocation Latch - controls allocation of space for
               redo entries in the redo log buffer. All processes
               that write to the redo log buffer must obtain
               this latch.  There is ONLY ONE redo allocation
               latch. After acquiring this latch the server
               process may copy the entry into the buffer while
               holding the Redo Allocation Latch. After copying,
               the server process releases the latch. This
               process is called "copying on the redo allocation
               latch". A process may copy on the redo allocation
               latch if the redo entry is smaller than a
               threshold size (LOG_SMALL_ENTRY_MAX_SIZE in
               bytes). If it is greater, Oracle uses:

         Redo Copy Latch (for multiple-CPU machines ONLY) - if
               the redo entry is too large to copy on the redo
               allocation latch, the server process must obtain a
               redo copy latch before copying the entry into the
               buffer. While holding a redo copy latch, the
               server process copies the entry into the buffer
               and then releases the redo copy latch. If your
               computer has multiple CPUs, your redo log buffer
               can have multiple redo copy latches. This allows
               multiple processes to copy entries to the redo log
               buffer concurrently. The number of redo copy
               latches is set by LOG_SIMULTANEOUS_COPIES (which
               defaults to CPU_COUNT, which in turn defaults to
               the actual number of processors). On
               multi-processor machines, LOG_SIMULTANEOUS_COPIES
               should be set to TWICE the number of CPUs. This
               will reduce any potential contention for the redo
               copy latch.
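The latch-selection rule described above can be sketched as a small model (an illustration only, not Oracle's actual code; the parameter names mirror the INIT.ORA parameters discussed above):

```python
# Simplified model of which latch a server process copies its
# redo entry under. Illustration of the rule above, not Oracle code.

def pick_redo_latch(entry_size, log_small_entry_max_size, cpu_count):
    """Return the latch used to copy a redo entry of entry_size bytes."""
    if entry_size <= log_small_entry_max_size:
        # Small entries are copied while holding the single
        # redo allocation latch ("copying on the allocation latch").
        return "redo allocation latch"
    if cpu_count > 1:
        # Large entries on multi-CPU machines use one of the
        # LOG_SIMULTANEOUS_COPIES redo copy latches.
        return "redo copy latch"
    # On a single-CPU machine there are no redo copy latches.
    return "redo allocation latch"
```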

         The objective is to have the HIT_RATIOs as high as
         possible: this means that when a latch is requested, it
         is available. The SLEEPS  should be  as low as possible:
         this means a latch was requested and could not be
         supplied. The process that requested the latch went into
         a sleep state until the needed resource was available.
         Sleeps should be avoided.

         If the process can't get the requested latch, rather
         than sleep, it TIMES OUT. The goal is to have
         NOWAIT_HIT_RATIO  as close to 1 as possible.

         Important: ACCESS TO THE REDO LOG BUFFER IS REGULATED BY
         LATCHES of two types: (1)redo copy latch and (2) redo
         allocation latch.

          As an update occurs, space is allocated in the redo log
          buffer using the REDO ALLOCATION LATCH. There is only
          one redo allocation latch; hence, only one user can
          allocate space in the redo log buffer at a time. The
          most information that can be copied on the redo
          allocation latch at a time is determined by the
          init.ora parameter LOG_SMALL_ENTRY_MAX_SIZE. If the
          amount of information to be copied is greater than
          LOG_SMALL_ENTRY_MAX_SIZE, then Oracle uses a REDO COPY
          LATCH instead of the redo allocation latch. While
          holding the redo copy latch, the USER PROCESS fills the
          redo log buffer with its information; when the process
          is done, it releases the redo copy latch.

          If your computer has multiple CPUs, you are allowed
          to have one redo copy latch per CPU. The number of
          latches is set through the LOG_SIMULTANEOUS_COPIES
          INIT.ORA parameter (default is the number of CPUs).

          If your hit ratio for the redo allocation latch (from
          REPORT.TXT) falls below 85%, you have latch contention.
          In this case make the parameter
          LOG_SMALL_ENTRY_MAX_SIZE smaller. By making this value
          smaller you will cause more activity to happen on the
          redo copy latch.

         If you are having redo copy latch contention, then
         increase the init.ora parameter
         LOG_SIMULTANEOUS_COPIES.

         Another way to reduce contention on the redo log latches
         is to tell Oracle to prebuild the redo entries. Many
         times, an Oracle user's redo activity is made up of many
         small pieces. It can be much more efficient to instruct
         the database to put all the pieces together BEFORE
         requesting the redo copy latch. It can be done through
         LOG_ENTRY_PREBUILD_THRESHOLD init.ora parameter (default
         is 0).

  - Examining of redo log activity.

       Each row of V$LATCH table contains statistics for a
       different type of latch:

         - willing to wait (WTW) - if the latch requested with
                this type of request is not available, the
                requesting process waits a short time, requests
                the latch again, and continues this way until the
                latch becomes available.

         - immediate (IMM)- if the latch requested with this type
                of request is not available, the requesting
                process does not wait and continues processing.

         Oracle decides INTERNALLY whether to issue a WTW or an
         IMM request.

         Monitor the statistics for the redo allocation latch and
         the redo copy latch with this script:

         >>> Begin of script
           rem MONITOR LATCH CONTENTION
           set echo off feedback off termout on
           set verify off
           col Name format a18
           col Gets format 999999999
           col Misses format 999999999
           col M_TO_GETS format 999.99 hea 'Rat_1|%'
           col Immediate_gets format 999999999 -
                hea  'Immediate|gets'
           col Immediate_Misses format 999999999 -
                hea  'Immediate|Misses'
           col M_TO_GM format 999.99 hea 'Rat_2|%'
           col Sleeps format 999999
           select LN.Name,
                  Gets, Misses,
                  decode(Gets,0,Null,Misses/Gets*100) M_TO_GETS,
                  Immediate_gets, Immediate_misses,
                  Immediate_misses/
                  (Immediate_gets+Immediate_misses)*100 M_TO_GM,
                  Sleeps
                  from V$LATCH L, V$LATCHNAME LN
                  where LN.Name in ('redo allocation',
                                    'redo copy')
                        and LN.Latch# = L.Latch#;
           set echo on feedback on termout on
           rem
           rem If Rat_1 > 1% or Rat_2 > 1% then latch contention
           rem may affect performance and you should try to
           rem reduce it.
           rem  See Server adm. Guide p. 23-13 "Reducing Latch
           rem  contention".
        >>> End of script
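The two ratios the script reports amount to the following arithmetic (a sketch; column names follow V$LATCH, sample values in the usage note are invented):

```python
# Miss ratios for a latch, as computed by the monitoring script above.

def willing_to_wait_miss_ratio(gets, misses):
    """Rat_1: misses as a percentage of willing-to-wait gets."""
    if gets == 0:
        return None  # mirrors decode(Gets,0,Null,...) in the script
    return misses / gets * 100

def immediate_miss_ratio(immediate_gets, immediate_misses):
    """Rat_2: immediate misses as a percentage of all immediate requests."""
    return immediate_misses / (immediate_gets + immediate_misses) * 100

# If either ratio exceeds 1%, latch contention may affect performance.
```

For example, 5 misses against 1000 gets gives a Rat_1 of 0.5%, below the 1% threshold.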

  - Reduce Multi-Threaded Server (MTS) contention

        This script is taken from Steve Bobrowski's paper
        (OM 7-8.1995 p.85).

        >>> Begin of script
          rem LISTING 1: Use the GATHER_MTS_STATISTICS
          rem procedure to monitor the configuration of an
          rem Oracle server's MTS.
          Rem MTS Statistic Gathering
          Rem Author: Animated Learning V2.0 1995
          Rem
          Rem
          Rem The following script creates two tables and
          Rem a stored procedure that one can use to gather
          Rem statistics about an Oracle server's MTS
          Rem configuration.
          Rem
          Rem *************************************
          Rem The following script must be run as SYS
          Rem *************************************
          Rem
          Rem The MTS_DISPATCHER_STATS table stores data points
          Rem for MTS dispatchers.
          Rem
          drop table MTS_DISPATCHER_STATS;
          create table MTS_DISPATCHER_STATS (
              Timepoint DATE, -- Time of data point
              Disp_name VARCHAR2 (10), --name of dispatcher
              Disp_prot VARCHAR2 (10), -- corresponding network
              Cumu_busy REAL, -- cumulative busy time
              Cumu_idle REAL, -- cumulative idle time
              Intv_load REAL -- interval load
          );
          Rem
          Rem The MTS_SERVER_STATS table stores data points for
          Rem MTS shared servers.
          Rem
          drop table MTS_SERVER_STATS;
          create table MTS_SERVER_STATS (
              Timepoint DATE, -- time of data point
              Server_ct INTEGER -- number of shared servers
          );
          create or replace procedure GATHER_MTS_STATISTICS
                  (Intervals in INTEGER) AS
          -- --------
          -- OVERVIEW
          --
          -- The GATHER_MTS_STATISTICS procedure gathers
          -- statistics for both dispatchers and shared servers
          -- in an MTS configuration. Run this procedure from a
          -- session that uses a DEDICATED server connection to
          -- avoid skewing the statistics gathered.
          --
          --              ---------------
          --              INPUT PARAMETER:
          -- Intervals - the total number of 15-minute-sampling
          --      intervals to gather statistics for an MTS.
          --      For example, a value of 32 would gather 32
          --      data points over an 8-hour timespan.
          --
          --
          --              --------
          --              VARIABLES:
          --
          Prev_busy REAL; -- previous busy count
          Prev_idle REAL; -- previous idle count
          Intv_busy REAL; -- interval busy count
          Intv_idle REAL; -- interval idle count
          Load REAL;      -- interval load
          --
          -- -------
          -- CURSORS
          --
           cursor Server is  -- cursor for shared server lookups
              select Count(*) Servers from V$SHARED_SERVER;
           cursor Dispatcher is -- cursor for dispatcher lookups
              select Name, Network, Busy, Idle
              from V$DISPATCHER;
          --
          -- ------------
          -- PROGRAM BODY
          --
          begin
           -- loop to gather data points for the specified
           -- number of intervals
           for Counter in 1 .. Intervals loop
               -- open cursor FOR loop to gather count of
               -- shared servers for a data point
               for Status in Server loop     --  Each loop inserts
                 insert into MTS_SERVER_STATS -- only one record
                   values (Sysdate, Status.Servers);
               end loop;
               -- open cursor for loop to gather dispatcher
               -- load information for data point
               for status in Dispatcher loop
               -- special if case for first data points to avoid
               -- divide by zero errors when calculating loads
                if Counter = 1 then
                 insert into MTS_DISPATCHER_STATS
                  values(Sysdate, status.name, status.network,
                   status.Busy, Status.Idle, NULL);
                else
                -- get previous cumulative busy and idle
                -- information in preparation of calculating
                -- interval data
                 select Cumu_busy, Cumu_idle
                   into Prev_busy, Prev_idle
                  from MTS_DISPATCHER_STATS
                 where Disp_name = Status.Name
                 and Timepoint = (select max(Timepoint)
                         from MTS_DISPATCHER_STATS
                          where Disp_name = Status.Name);
                -- calculate interval busy and idle by
                -- subtracting previous data from current data
                Intv_busy := Status.Busy - Prev_busy;
                Intv_idle := Status.Idle - Prev_idle;
                -- special case where busy time is negative or
                -- zero to avoid divide by zero errors
                if intv_busy <= 0 then
                     intv_busy := 0.001; -- Very low number
                end if;
                 Load := Intv_busy / (Intv_busy + Intv_idle) * 100;
                  insert into MTS_DISPATCHER_STATS
                   values ( Sysdate, Status.Name, Status.Network,
                            Status.Busy, Status.Idle, Load);
                end if;
               end loop;
               commit work;
               DBMS_LOCK.SLEEP(900); -- sleep for 900 seconds =
                                     --   15 minutes
           end loop;
          end GATHER_MTS_STATISTICS;
         /
        >>> End of script

     This procedure should be executed using a dedicated server
     connection to prevent skewing the test results. To do this,
     set USE_DEDICATED_SERVER = ON in the sqlnet.ora file or
     SERVER=DEDICATED in the CONNECT_DATA clause of the service
     name in tnsnames.ora.

     - How to tune  using this script:

       - Set MTS_SERVERS and MTS_DISPATCHERS to reasonable
         initial values.

       - set MTS_MAX_SERVERS and MTS_MAX_DISPATCHERS  so they do
         NOT limit the number of shared server and dispatcher
         processes your Oracle server can start during times of
         heavy demand. Remember that Oracle server AUTOMATICALLY
         starts and stops shared servers in order to adjust to
         changing processing demands placed on an MTS. You can
         also use ALTER SYSTEM command to manually change the
         number of dispatchers per protocol.

      - you need to gather statistics in the two tables with
        EXECUTE GATHER_MTS_STATISTICS(n);

      - Analysis queries:

         For servers

         select * from MTS_SERVER_STATS;

           Timepoint               Server_ct
           ---------               ---------
           27-NOV-95                  3
           27-NOV-95                  5
           27-NOV-95                  2
           27-NOV-95                  1
            . . . . . . . . .

         select max(Server_ct), avg(Server_ct)
         from MTS_SERVER_STATS;

         Output represents the time stamp and the number of
         shared servers at that moment. Keep in mind that the
         MINIMUM number of shared servers (MTS_SERVERS) should be
         fairly close to the AVERAGE number of shared servers to
         avoid unnecessary shared server process
         allocation/deallocation. MTS_MAX_SERVERS shouldn't be so
         low that it limits the number of shared servers needed
         during times of peak demand.

         For dispatchers:

         select * from MTS_DISPATCHER_STATS;

         select Disp_name, max(Intv_load)
         from MTS_DISPATCHER_STATS
         group by Disp_name;

         Simply analyze the load for each dispatcher and define
         what an excessive load is (let's say 60% to 70%). When
         you find an excessive load for a dispatcher of
         particular network protocol, consider starting one or
         more additional dispatchers.
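The interval load that GATHER_MTS_STATISTICS stores can be reproduced with the same arithmetic (a sketch of the PL/SQL calculation above; the sample numbers in the test are invented):

```python
def dispatcher_interval_load(busy, idle, prev_busy, prev_idle):
    """Percentage of the interval a dispatcher spent busy, from
    cumulative busy/idle counters (as in the procedure above)."""
    intv_busy = busy - prev_busy
    intv_idle = idle - prev_idle
    if intv_busy <= 0:
        intv_busy = 0.001  # avoid divide-by-zero, as in the PL/SQL
    return intv_busy / (intv_busy + intv_idle) * 100
```

A sustained load above the 60% to 70% range for one protocol's dispatchers is the signal to start additional dispatchers.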

  - Reduce process contention.

    ALTERING PROCESS PRIORITIES MAY DEGRADE PERFORMANCE.

       - assigning different priorities to Oracle processes
         (background and user) may cause the operating system to
         grant uneven processing time, and thus exacerbate the
         effect of contention.

       - all user processes must have the default operating
         system priorities (on VAX/VMS).

  - Tuning free list contention (when a number of users
    simultaneously insert data into the same table and index).
    When tables are created, you can instruct Oracle how much
    information to keep in memory for blocks that are available
    for creating records. FREE LIST is the number of block IDs
    that Oracle will maintain in memory that are candidates for
    record insertion. After the segment header there is one block
    for EACH free list group specified in the storage clause of
    the CREATE TABLE/CREATE  INDEX statement. Each block contains
    information on the free lists in that particular free list
    group. This and Oracle block classes are presented in
    [PH95p#159v2/page 2].

    Transaction space per block is affected by the INITRANS,
    MAXTRANS, and PCTFREE create table and create index
    parameters, for both DATA space and INDEX space.

    The INITRANS, MAXTRANS, and PCTFREE create table parameters
    control the amount of space that keeps track of transactions
    against the table. The INITRANS parameter sets aside 23 bytes
    for each transaction slot in each data block that resides in
    a table. The space left over for expansion of existing
    records of the block is called PCTFREE space. If your
    applications need to support concurrent access to a table by
    more than one user, you should increase the INITRANS value.
    Oracle will allocate space in a data block for transactions
    in these 23-byte slices, up to the number of slots delineated
    by MAXTRANS. If MAXTRANS is set too low, free list BLOCK
    CONTENTION occurs - transaction space is limited in data and
    index blocks!  When a new transaction needs a 23-byte slot in
    a data block and other transactions already occupy all the
    MAXTRANS slots, the new transaction will wait for a previous
    transaction to terminate.

    Once a 23-byte slot is allocated in the block (above the
    INITRANS value), it is NOT DE-ALLOCATED from the block header
    space, for both data and index blocks.
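The block-header overhead these slots consume is simple arithmetic (a sketch assuming the 23-byte slot size quoted above):

```python
TRANSACTION_SLOT_BYTES = 23  # per-slot size quoted above

def transaction_slot_overhead(allocated_slots):
    """Bytes of a data block's header consumed by transaction slots.
    Slots grow from INITRANS up to MAXTRANS and, once allocated,
    are never de-allocated."""
    return allocated_slots * TRANSACTION_SLOT_BYTES
```

For example, a block that has grown to 4 concurrent transaction slots permanently gives up 92 bytes of its space.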

    You may detect free list block contention with the following
    script:

       >>> Begin of script
       set echo off feedback off verify off
       column TW format 99999999999999 heading 'Total Waits'-
           New_Value TWV
       column Class format a18 heading 'Statistic''s Name'
       column Count format 99999999999 hea 'Count'
       column PT format 99999.99 hea '% of tot'
       rem Get Total Waits into TWV variable:
       select sum(Value) TW
       from V$SYSSTAT
       where Name in ('db block gets', 'consistent gets');
       rem
       select Class, Count, nvl(Count,0)/&TWV PT
       from V$WAITSTAT
       where Class = 'free list';
       set echo on
       rem
       rem   "% of total" should not be GREATER THAN 1%. If it
       rem   is higher, consider adding more free lists.
       rem
       rem   FREELISTS should be equal to the number of users
       rem   which concurrently INSERT data into the table.
       rem   You should detect the needed table by analyzing
       rem   ALL the SUSPECTED TABLES.
       rem
     >>> End of script

     This script doesn't report the names of the hot tables that
     contribute most to free list contention. To eliminate this
     contention the DBA should increase INITRANS and MAXTRANS for
     these hot tables. Usually, these tables are known for each
     application. Hence, it is important for the DBA to KNOW the
     APPLICATIONS.

     The FREELISTS entry for a table should accommodate the
     number of concurrent processes that will be inserting data
     into the table.

     To change FREELISTS parameter for a table you need to do the
     following:

       1. Export the data from the table with grants and indexes.

       2. Drop the table.

       3. Recreate table with increased FREELISTS parameter.

       4. Import the data back into the table with IGNORE=Y

  - How to select candidate columns for indexing:

    - Low selectivity (the percentage of rows that share the
      same column value).

    - The only candidates for indexing are columns that are
      mentioned in the WHERE and AND clauses of a SELECT.

    - An index is NOT USED when the column appears inside a
      function (Floor, To_Number, ...).

    - A composite index will ONLY be used to satisfy a query
      when its leftmost column is mentioned in WHERE or AND.

    - Oracle provides two SQL*Plus scripts to assess the
      CANDIDATE columns for indexing:

      @OTLOIDXS.sql <table_name> <column_name>
      @OTLSIDXS.sql <table_name> <column_name>

      These scripts must be run one after another and they
      create several tables (their names begin with INDEX$).

      These scripts provide output on the screen (if you want
      to direct output to a file, use redirection (>) or insert
      "spool <file>" and "spool off" commands in the second
      script body). The output contains useful information for
      assessing the underlying column as a candidate for an
      index:

         - Various statistics (number of rows in the table,
           average number of rows per key, max and min number
           of rows per key, total distinct keys)

         - BADNESS - number of equal values in this column
                     per key value (selectivity).

         - KEY_COUNT - the number of keys with the same
                       BADNESS

         - ROW_PERCENT - percentage of rows with the same
                         key value in this column

         - KEY_PERCENT - percentage of column values with the
                         same key value.

      If BADNESS is low then this column is a good candidate
      for an index.

  - Tune checkpoints.

    Checkpoint frequency can affect performance.

     Enabling the checkpoint process (by CHECKPOINT_PROCESS=TRUE):

         - reduces run-time performance impact of checkpoints.

         - CKPT updates the data file headers when checkpoint
           occurs, leaving LGWR free to write redo entries.

      Two parameters affect checkpoint frequency:

       LOG_CHECKPOINT_INTERVAL - measured in OPERATING SYSTEM
                 BLOCKS - the number of newly filled redo log
                 file blocks that are needed to trigger a
                 checkpoint. If this value is larger than the
                 size of the redo logs, then a checkpoint only
                 occurs when there is a log switch.

                 The larger this interval, the longer the
                 startup time (the time needed to recover after
                 a crash).

                 On the other hand, a high frequency of
                 checkpoints may degrade performance. Too
                 frequent checkpoints diminish the benefit of
                 delayed database file writes done in batch
                 rather than per-transaction mode.

       LOG_CHECKPOINT_TIMEOUT - seconds since the last checkpoint

       If performance is your concern, set LOG_CHECKPOINT_TIMEOUT
       to 0 (default) and LOG_CHECKPOINT_INTERVAL to a size
       greater than the physical redo log file - the checkpoint
       will then occur only on log switching.
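The rule these two parameters implement can be sketched as a small model (an illustration of the behavior described above, not Oracle's actual code):

```python
def checkpoint_due(blocks_written, seconds_since_last,
                   log_checkpoint_interval, log_checkpoint_timeout,
                   log_switch):
    """Return True if a checkpoint should fire under the rules above.
    A log switch always checkpoints; the interval counts newly
    filled redo log blocks; a timeout of 0 disables time-based
    checkpoints."""
    if log_switch:
        return True
    if blocks_written >= log_checkpoint_interval:
        return True
    if log_checkpoint_timeout > 0 and \
            seconds_since_last >= log_checkpoint_timeout:
        return True
    return False
```

With the performance-oriented settings above (timeout 0, interval larger than the log), only the `log_switch` branch can ever fire.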

  - Monitoring and tuning using UTLBstat.SQL and UTLEstat.SQL.
    See also [COR95].

     UTLBstat runs at the beginning of the time interval and
     UTLEstat at the end. You should carefully choose this time
     interval.

     UTLBstat gathers the initial performance statistics in 9
     special tables at the beginning of your observation period.

     UTLEstat:

        - gathers the final performance statistics in 7 special
          tables at the end of your observation period.

        - does comparison between UTLEstat and UTLBstat tables
          and stores the results into 7 difference tables

        - generate ASCII file called REPORT.TXT

     UTLBstat should not be run immediately after the database
     has been started because none of the system caches are
     loaded and this would not be a realistic picture of a
     running database.

     The database must NOT be shut down between UTLBstat and
     UTLEstat.

     The correct time slice must be chosen, for example, the peak
     processing time.

     INIT.ORA must contain TIMED_STATISTICS = TRUE to run
     UTLBstat/UTLEstat.

     What is good about these utilities is that there is no
     performance hit on the database except for the few minutes
     needed to run them. All these programs do is gather a
     snapshot of the system at the start and a snapshot at the
     end.

     - Interpret the output.

       - Library cache.

         You should minimize RELOADS (see "Tuning of the library
         cache"). Also pay attention to the PINHITRATIO and
         GETHITRATIO columns. Any value < 80% is NOT acceptable.
         The solution is to increase the SHARED_POOL_SIZE
         parameter.

       - System statistics

         The info shown in the UTLEstat output lists total system
         statistics for the time period, broken down by
         transaction average.

         Analyzing report UTLEstat system stat you should
         calculate the hit ratio:

          LOGICAL READS = CONSISTENT GETS + DB BLOCK GETS
          HIT RATIO = (LOGICAL READS - PHYSICAL READS) / LOGICAL READS

         CONSISTENT GETS, DB_BLOCK_GETS and others should be
         taken from REPORT.TXT (output of UTLEstat).

          The HIT RATIO must be greater than 80%. Otherwise,
          increase the DB_BLOCK_BUFFERS INIT.ORA parameter.
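In code, the calculation looks like this (a sketch; the inputs come from the REPORT.TXT system statistics):

```python
def buffer_cache_hit_ratio(consistent_gets, db_block_gets,
                           physical_reads):
    """Buffer cache hit ratio from the formula above."""
    logical_reads = consistent_gets + db_block_gets
    return (logical_reads - physical_reads) / logical_reads

# A value below 0.80 suggests increasing DB_BLOCK_BUFFERS.
```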

       - Events.

         The UTLEstat output shows the system events. You should
         analyze the following:

          - Buffer busy waits ratio (waits for resources).
            Logical reads are calculated from the system
            statistics in the previous section; BUFFER BUSY WAITS
            is the last line in the "Events" (this section)
            output:

            BUFFER BUSY WAITS RATIO=BUFFER BUSY WAITS/LOGICAL READS
                         where
            LOGICAL READS = CONSISTENT GETS + DB BLOCK GETS

            BUFFER BUSY WAITS RATIO should be < 4%. Otherwise, you
            should examine the V$WAITSTAT table to determine what
            kind of waits you encountered (from SYS account):

            >>> Begin of script
              select Class, sum(Count) "Total Waits"
              from SYS.V$WAITSTAT
              where Class in ('undo segment header',
                              'undo block',
                              'data block')
              group by Class;

              rem If the majority of waits concern rollback
              rem segment activity ('undo segment header', 'undo
              rem block'), you should increase the NUMBER OF
              rem (private) ROLLBACK SEGMENTS. If the majority of
              rem waits concern 'data block', you should increase
              rem the FREELISTS parameter (to 2, for example) on
              rem heavily inserted tables.
              rem
            >>> End of script
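The ratio test above amounts to the following (a sketch of the formula given earlier in this subsection):

```python
def buffer_busy_waits_ratio(buffer_busy_waits, consistent_gets,
                            db_block_gets):
    """BUFFER BUSY WAITS / LOGICAL READS, where LOGICAL READS =
    CONSISTENT GETS + DB BLOCK GETS, as defined above."""
    logical_reads = consistent_gets + db_block_gets
    return buffer_busy_waits / logical_reads

# Above 4% (0.04): query V$WAITSTAT to see which class is waiting.
```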

          - Sorts.

            UTLEstat system statistics (see "System statistics"
            above)  gives very good statistics on your sort
            usage.

            The "sorts (disk)" row tells you how many times you
            had to sort to disk - sorts that could not be handled
            within SORT_AREA_SIZE.

            The "sorts (memory)" row tells you how many times you
            were able to complete the sort using just memory.

            90% or more of sorts should be done in memory. If the
            figure is below 90%, increase the SORT_AREA_SIZE
            init.ora parameter. The sort area is allocated per
            user and taken from available memory, not from the
            SGA.  The drawback is that EACH user will have a
            large sort area, even if he or she does not need one.
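The in-memory sort percentage is computed from the two rows discussed above (a sketch):

```python
def memory_sort_percent(sorts_memory, sorts_disk):
    """Percentage of sorts completed in memory, from the
    "sorts (memory)" and "sorts (disk)" rows of REPORT.TXT."""
    total = sorts_memory + sorts_disk
    return sorts_memory / total * 100

# Below 90%: consider increasing SORT_AREA_SIZE
# (allocated per user, not in the SGA).
```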

       - Chaining (see also "Migrated rows and chained rows"
         below)

         The row beginning with "table fetch continued row" (see
         "System statistics" output above) gives the number of
         chained and migrated rows. You should try to eliminate
         all the migrated and chained rows (if possible, of
         course).

       - Enqueue waits (see "System statistics" output above)

         Enqueues are locks. When you see waits appearing this
         means that Oracle needed another lock but couldn't
         obtain it. To prevent the waiting increase the
         ENQUEUE_RESOURCES init.ora parameter.
         ENQUEUE_RESOURCES represents the number of resources
         that can be locked by lock manager.

       - DBWR process  (see "System statistics" output above)

         "DBWR buffer scanned" row from the statistics gives an
         excellent overview of the volume of information that has
         gone through the DBWR process. The "DBWR checkpoints"
         row gives an excellent indication of how many
         checkpoints have been done.

         If it is the bottleneck, increase value  of DB_WRITERS
         init.ora parameter (if your OS supports this facility).
         Recommended setting is 2 DBWR per database file.

         You can also start the checkpoint process (CKPT).
         Remember that checkpoint process only modifies the
         headers of database files.

         The other INIT.ORA parameter that has a great effect on
         DBWR performance is DB_BLOCK_WRITE_BATCH. The larger
         the value, the less often the database process is
         signalled to write. Adjusting this parameter affects how
         efficiently DBWR performs.

       -  Application effectiveness (see "System statistics"
          output above).

          By looking at the "table scans (short table)" and
          "table scans (long table)" rows of the statistics
          output, you can determine how many full table scans
          your database is doing. (A full table scan is a scan
          performed without an index.) The NON-INDEX LOOKUP RATIO
          is introduced to determine whether full table scans
          seriously degrade performance.

          NON-INDEX LOOKUP RATIO =  table scans (long table) /
                              ( table scans (long table) +
                              table scans (short table) )

         If the NON-INDEX LOOKUP RATIO exceeds 40% during the
         daytime, when many interactive users are logged on, you
         should revise your index structure (add some indexes
         where appropriate).
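The ratio reads, in code (a sketch of the formula above; inputs are the two table-scan rows of REPORT.TXT):

```python
def non_index_lookup_ratio(long_table_scans, short_table_scans):
    """Fraction of full table scans that hit long tables, per the
    formula above. A high value suggests missing indexes."""
    return long_table_scans / (long_table_scans + short_table_scans)
```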

       - Average write queue length ("summed dirty queue length"
         in the "System statistics" output above).

         This is the average length of the dirty buffer write
         queue. It should be compared to the DB_BLOCK_WRITE_BATCH
         init.ora parameter. If the length of this queue is
         larger than the DB_BLOCK_WRITE_BATCH value, increase the
         value of DB_BLOCK_WRITE_BATCH.

         You should also make a quick inspection of the I/O
         balance of your disk drives. You may detect that one
         disk drive is doing most of the work.

       - Tablespaces and file I/O.

         This section of REPORT.TXT gives info about physical
         reads and writes to tablespaces and the files that make
         them up.

         Based on this report you may distribute database files
         properly over available disk drives.

         Don't overlook OS files that can be on the same disks.

       - Latches

         This section of REPORT.TXT contains info about latch
         statistics ("Latch request output"). Latches are the
         Oracle method of establishing process ownership of
         objects that the database needs to use. A latch is a
         very efficient internal Oracle lock.

         Pay attention to the HIT_RATIO and SLEEPS columns.
         The objective is to have the HIT_RATIOs as high as
         possible: this means that when a latch is requested, it
         is available. The SLEEPS  should be  as low as possible:
         this means a latch was requested and could not be
         supplied. The process that requested the latch went into
         a sleep state until the needed resource was available.
         Sleeps should be avoided.

         The section of REPORT.TXT "Latch request output -
         latches that will not wait" contains info on latches
         that processes are not allowed to wait for. If the
         process can't get the requested latch, rather than
         sleep, it TIMES OUT. The goal is to have
         NOWAIT_HIT_RATIO column in this report as close to 1 as
         possible.

         Important: ACCESS TO THE REDO LOG BUFFER IS REGULATED BY
         LATCHES of two types: (1)redo copy latch and (2) redo
         allocation latch.

         As an update occurs, space is allocated in the redo log
         buffer using the REDO ALLOCATION LATCH. There is only
         one redo allocation latch. Hence, only one user can
         allocate space in the redo log buffer at a time. The
         most information that can be copied while holding the
         redo allocation latch is determined by the init.ora
         parameter LOG_SMALL_ENTRY_MAX_SIZE. If the amount of
         information to be copied is greater than
         LOG_SMALL_ENTRY_MAX_SIZE, then Oracle uses a REDO COPY
         LATCH instead of the redo allocation latch. While
         holding the redo copy latch, the USER PROCESS fills the
         redo log buffer with its information. When the process
         is done, it releases the redo copy latch.

         If your computer has multiple CPUs, then you are allowed
         to have one redo copy latch per CPU. The number of
         latches is set through the LOG_SIMULTANEOUS_COPIES
         INIT.ORA parameter (default is the number of CPUs).

         If the hit ratio of the redo allocation latch (from
         REPORT.TXT) falls below 85%, then you have latch
         contention. In this case make the parameter
         LOG_SMALL_ENTRY_MAX_SIZE smaller. By making this value
         smaller you will cause more activity to happen on the
         redo copy latches.

         Another way to reduce contention on the redo log latches
         is to tell Oracle to prebuild the redo entries. Many
         times, an Oracle user's redo activity is made up of many
         small pieces. It can be much more efficient to instruct
         the database to put all the pieces together BEFORE
         requesting the redo copy latch. This is done through the
         LOG_ENTRY_PREBUILD_THRESHOLD init.ora parameter (default
         is 0).
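
         The choice between the two latch types can be sketched
         in a few lines of Python (a simplified illustration of
         the rule just described, not actual Oracle internals):

```python
def redo_latch_for(entry_size, log_small_entry_max_size):
    # Entries up to LOG_SMALL_ENTRY_MAX_SIZE are copied while
    # holding the single redo allocation latch; larger entries
    # are copied under one of the per-CPU redo copy latches.
    if entry_size > log_small_entry_max_size:
        return "redo copy latch"
    return "redo allocation latch"
```

         Lowering LOG_SMALL_ENTRY_MAX_SIZE thus shifts copy work
         off the single allocation latch onto the per-CPU copy
         latches.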

       -  Rollback segment information.

          This section of REPORT.TXT gives info about the
          efficiency of rollback segments.

          If the
          Rollback Wait Ratio = TRANS_TBL_WAITS/TRANS_TBL_GETS*100
          IS GREATER THAN 4%, you should create additional
          private rollback segments.
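
          As a quick arithmetic check of this rule (a Python
          sketch using the REPORT.TXT column names; the sample
          numbers are invented):

```python
def rollback_wait_ratio(trans_tbl_waits, trans_tbl_gets):
    # Rollback Wait Ratio = TRANS_TBL_WAITS / TRANS_TBL_GETS * 100
    return trans_tbl_waits / trans_tbl_gets * 100

# Over 4% suggests creating additional private rollback segments.
assert rollback_wait_ratio(50, 1000) == 5.0   # 5%   - add segments
assert rollback_wait_ratio(5, 1000) == 0.5    # 0.5% - fine
```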

       -  INIT.ORA section (for info purposes only)

       -  Data dictionary statistics.

          This is an important part of REPORT.TXT.

          The GET_REQS column shows how many times a particular
          dictionary type was requested.

          The GET_MISS column shows the number of times the
          requested item was not found in the cache; if it is not
          in the cache, then Oracle must go out and get it.

          The CUR_USAGE column is the number of entries in the
          cache that are being used.

          The objective is to have GET_MISS and SCAN_MISS as LOW
          as possible.

          If the miss number is over 10% of gets then increase
          SHARED_POOL_SIZE in init.ora.

       -  Statistics Gather Times

          The last section of REPORT.TXT gives the start
          (UTLBstat) and stop (UTLEstat) dates and times.

       - Set TIMED_STATISTICS to FALSE along with other
         necessary modifications to INIT.ORA after running
         UTLBstat/UTLEstat and analyzing the output.

- DML_LOCKS init.ora parameter

    DML_LOCKS is the maximum number of locks that can be placed
    on all tables by all users at one time.

    If this parameter is not set HIGH ENOUGH - processing STOPS
    and an Oracle error is issued.

    Increase this parameter at startup time. Better TOO MANY than
    NOT ENOUGH. Set DML_LOCKS to the number of CONCURRENT USERS
    TIMES 10.

- Managing instance

  - Managing internal errors

    ORA-00600: internal error code, arguments [2103],[900],[]...

    See paper by Dennis K. Murray OM 7-8.1995 p.57

  - Database connection security (Oracle7 Server documentation
    addendum, pp.5-1 - 5-14):

    - encrypted password for connections to database servers

    - improved CONNECT INTERNAL

    - ORA_ENCRYPT_LOGIN (true/false) and DBLINK_ENCRYPT_LOGIN
      (true/false)

    - REMOTE_LOGIN_PASSWORDFILE INIT.ORA parameter.

    - CONNECT / as sysoper

    - Maintaining a password file

  - Monitoring processes:

      - SQL*DBA (Server Manager) PROCESS and SESSION menu items;

      - CATBLOCK.SQL + UTLLOCKT.SQL scripts

      - Dynamic performance tables and views:

        V$DATAFILE - contains data-file information from the
                      control file

        V$VERSION - version numbers of core library.

        V$SGA     - summary info on the SGA.

        V$SGASTAT - summary info on the SGA.

        V$FIXED_TABLE - shows all fixed tables, views and
                        derived tables

        V$TYPE_SIZE - sizes of database components

        V$INSTANCE - state of the current instance

        V$PARAMETER - INIT.ORA parameters.
        ---------

        V$DATABASE - database info from the control file

        V$DBFILE   - all data files making up the database

        V$DATAFILE - datafile info from controlfile

        V$BGPROCESS - the background processes

        V$SESSION - info for each current session

        V$PROCESS - info on currently active processes

        V$TRANSACTION - info about transactions

        V$OPEN_CURSORS - opened and parsed cursors for all users

        V$SQLTEXT - text of SQL belonging to shared SQL in SGA

        V$ROLLNAME - names for all rollback segments

        V$RESOURCE  - info about resources

        V$DB_OBJECT_CACHE - DB objects cached in library cache

        V$LOG     - log file info from control file

        V$LOGFILE - info about redo log files status

        ---------

        V$SESSTAT - current stat. values for each current sess.

        V$SESS_IO - I/O statistics for each user session.

        V$STATNAME - decoded stat.names for stats. from V$SESSTAT

        V$SYSSTAT - current system-wide value for each
                    statistic in V$SESSTAT

        V$SQLAREA -stat. on the shared cursor cache(row<=>cursor)

        V$LIBRARYCACHE - statistics on library cache

        V$DB_OBJECT_CACHE - DB objects cached in library cache

        V$ROLLSTAT - stat. for all ONLINE rollback segs.

        V$ROLLNAME - names for all rollback segments

        V$ROWCACHE - statistics for data dict.activity
                     (1 row <=> 1 cache)

        V$FILESTAT - info about file read/write statistics

        V$WAITSTAT - block contention statistics.
                     TIMED_STATISTICS=TRUE

        ---------

        V$ACCESS - currently locked objects and the sessions that
                   are accessing them

        V$LOCK  - locks and resources. Does not include DDL locks

        V$SESSION_WAIT - resources or events that an active
                         session is waiting for

        V$LATCH   - info about each type of latch (1:1 to V$LATCHNAME).

        V$LATCHNAME   - the decoded latch names for V$LATCH

        V$LATCHHOLDER - info about latch holder.

        ---------

        V$LOG     - log file info from control file

        V$ARCHIVE - info on archive log for each thread in the
                   database

        V$LOGFILE - info about redo log files status

        V$LOG_HISTORY - archived log names for all logs in the
                        log history

        V$RECOVERY_LOG - archived logs needed to complete media
                        recovery (derived from V$LOG_HISTORY)

        V$BACKUP - backup status of all online datafiles

        V$RECOVER_FILE - status of files needing media recovery

        ---------
        V$CIRCUIT - virtual circuits (user connections through
                    dispatchers and servers)

        V$QUEUE   -  info on the multi-thread message queue

        V$DISPATCHER - info on dispatcher processes

        V$SHARED_SERVER - info on the shared server processes

        ---------

        V$THREAD  - thread (instance) info from controlfile

        ---------

        V$ENABLEDPRIVS - which privileges are enabled

        ---------

        V$REQDIST - histogram of request times

        ---------

        V$LICENSE - license limits

        ---------

        V$LOADSTAT - SQL*Loader direct path loading statistics

        V$LOADCSTAT

        ---------

        V$NLS_PARAMETERS

        V$NLS_VALID_VALUES

        SYS.PROPS$

        ---------

        V$TIMER   - current time in hundredths of a second

        ---------

                    3.2 Manage Database storage
                        -----------------------

- Logical and physical structure of database and their
  relationships. Tablespaces and OS files. Tables.

  - Data structures:

    Database - tablespaces - segments (files) - extents - blocks.

    Tablespaces:

      - each tablespace contains one or more operating system
        files;

      - each tablespace contains zero or more segments, each of
        them consists of one or more extents consisting of
        Oracle blocks.

      - tablespaces can go offline or online (except SYSTEM).
        Any tablespace containing active rollback segments can
        not be taken offline. The SYSTEM tablespace must always
        be online because the data dictionary must always be
        available.

      - each object is in only one tablespace.

     - useful commands:

        drop tablespace <tablespace> [including contents]
                                    [cascade constraints];
            cascade constraints - removes referential integrity
                                 constraints that refer to
                                 primary keys in the dropped
                                 tablespace.

        alter tablespace <name> rename
           datafile '<file1>' to '<file2>';

        alter tablespace <name> offline [normal|temporary|immediate]
            normal - performs checkpoint for all datafiles in
                     this tablespace
            temporary - performs checkpoint for all
                     online datafiles in the tablespace;
            immediate - does not ensure that tablespace files are
                     available and does not perform checkpoint;
              TEMPORARY AND IMMEDIATE ONLY IN ARCHIVELOG MODE!!!

  - When Oracle compacts data in a block:

   Oracle will compact a data block when an Insert or Update
   statement attempts to use a block that contains enough free
   space to hold a new row piece, yet that space is fragmented so
   that the row piece cannot be inserted in a contiguous section
   of the block.  Oracle does not dynamically compress space at
   the tablespace level (dropping and recreating a tablespace and
   then importing objects back to de-fragment a tablespace does
   not count as dynamic compression).

- Typical tablespace layout: SYSTEM, USER_DATA(+USER_INDX), TEMP,
  RBS, TOOLS.  Assigning default and temporary tablespace to
  users, distribution of tablespace across disks.

- Proper table and index sizing ([COR95] p.57).

  - Size in blocks required to hold table:

    greatest (4, ceil (ROW_COUNT /
              ((round (((AVAIL_IN_BLOCK - (initrans * 23)) *
                 ((100 - PCT_FREE) / 100)) / ADJ_ROW_SIZE)))) * 2)

                   where

        AVAIL_IN_BLOCK - actual number of bytes available in
                         block (1958 if DB_BLOCK_SIZE=2048)

        23 - number of bytes used in each initrans value

        PCT_FREE - percentage free specified for table

        ADJ_ROW_SIZE - estimated adjusted row size of each row in
                       the table

        ROW_COUNT - estimated number of rows for the table

        initrans - initial number of transactions for the table
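
    The formula can be tried out with a short Python sketch (the
    23-byte INITRANS cost, the 1958-byte figure and the trailing
    doubling are taken as given from the formula above; the
    example numbers are invented):

```python
import math

def table_blocks(row_count, adj_row_size, pct_free, initrans=1,
                 avail_in_block=1958):   # 1958 if DB_BLOCK_SIZE=2048
    # Rows that fit in one block after INITRANS overhead and PCTFREE.
    rows_per_block = round((avail_in_block - initrans * 23)
                           * (100 - pct_free) / 100 / adj_row_size)
    return max(4, math.ceil(row_count / rows_per_block) * 2)

# 10,000 rows of ~100 bytes with PCTFREE 10:
# round(1935 * 0.9 / 100) = 17 rows per block -> 1178 blocks
```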

  - Size in blocks required to hold index:

    greatest (4, 1.01 * ((ROW_COUNT /
          ((floor((( DB_BLOCK_SIZE - BLOCK_OVERHEAD -
            (initrans * 23)) *
            (1 - (PCT_FREE)/100.))) /
            ((10+uniqueness) + number_col_index +
            total_col_length))))) * 2 )

                where

        ROW_COUNT - estimated number of rows for the table

        DB_BLOCK_SIZE - database block size

        BLOCK_OVERHEAD - inventory space in each block (113 if
                        DB_BLOCK_SIZE = 2048)

        initrans - initial number of transactions for the table

        PCT_FREE - percentage free specified for index

        uniqueness - 1 if index is unique, 0 if not

        number_col_index - number of columns in the index

        total_col_length - estimated length of the index entry
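
    And the same kind of Python sketch for the index formula
    (constants again taken as given from the formula above;
    sample values invented):

```python
import math

def index_blocks(row_count, number_col_index, total_col_length,
                 pct_free, uniqueness=1, initrans=2,
                 db_block_size=2048, block_overhead=113):
    # Usable bytes per index block after overhead, INITRANS, PCTFREE.
    usable = math.floor((db_block_size - block_overhead
                         - initrans * 23) * (1 - pct_free / 100))
    # Bytes per index entry as estimated by the formula above.
    entry_size = (10 + uniqueness) + number_col_index + total_col_length
    return max(4, 1.01 * (row_count / (usable / entry_size)) * 2)
```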

- Monitoring space allocated to tables and indexes ([COR95]
  pp.59-62).

  - Space for tables

      To populate the statistics columns in DBA_TABLES the
      following command should be used:

        analyze table <owner.table_name> compute statistics;

      The number of empty blocks in the table's extents can then
      be extracted by:

        select Empty_blocks from SYS.Dba_tables
        where Owner = '<owner>' and Table_name = '<table>';

      To determine the percentage of blocks allocated, used and
      empty run the following script:

      >>> Begin of script
         select
            A.Blocks "Total Blocks Allocated",
            ((A.Blocks - B.Empty_Blocks) / A.Blocks) * 100
                             "% Blocks Used",
            (B.Empty_blocks / A.Blocks) * 100 "% Blocks Empty"
         from SYS.DBA_SEGMENTS A, SYS.DBA_TABLES B
         where
           A.Owner = B.Owner and A.Segment_name = B.Table_name
           and B.Owner = '<table owner>'
           and B.Table_name = '<tablename>';
      >>> End of script

  - Space for indexes

    To analyze and validate an index:

      analyze index <index> validate structure;

      This command can also detect an invalid index format, a bad
      index structure, or index entries that do not match row
      data.

      This command also populates the INDEX_STATS and
      INDEX_HISTOGRAM tables. The INDEX_STATS table should be
      used to determine the deleted space in the index. If the
      deleted space is too big the index should be dropped and
      re-created.

      - Free space in indexes.

       Oracle will NOT REUSE index space in blocks whose
       corresponding data rows have been deleted!!! This space is
       referred to as "dead entries".

       You should monitor each index by:

          1. validate index <index_name>

          2. Examine INDEX_STATS dictionary view:

             select Lf_rows, /* Leaf rows */
                    Lf_rows_len, /* Length of leafs */
                    Del_lf_rows, /* Deleted leaf rows */
                    Del_lf_rows_len /* Length */
                    from INDEX_STATS
                    where name = '<index_name>';

          3. If Del_lf_rows_len / Lf_rows_len * 100 > 20%
             then drop and re-create the index to reclaim unused
             space.
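
       The 20% rule in step 3, written out as a check over the
       INDEX_STATS columns (Python sketch; the sample figures
       are invented):

```python
def index_needs_rebuild(del_lf_rows_len, lf_rows_len, threshold=20.0):
    # Percentage of leaf space held by deleted ("dead") entries.
    dead_pct = del_lf_rows_len / lf_rows_len * 100
    # Over the threshold: drop and re-create the index.
    return dead_pct > threshold
```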

-  Minimizing fragmentation and disk contention.

   Extent fragmentation:

      - data dictionary segments have zero fragmentation
        propensity and never fragment free space;

      - application data segments have zero fragmentation
        propensity if properly designed. Keep production and test
        systems separately;

      - rollback segments usually have zero fragmentation
        propensity. It depends on the number of extents above the
        OPTIMAL size;

      - temporary segments have high fragmentation propensity
        because they are dropped by SMON as soon as the sort they
        are associated with has completed. SMON regularly
        compacts contiguous entries in the free extent table
        (FET$).

   Monitoring of fragmentation:

      - via DBA_FREE_SPACE, DBA_SEGMENTS, DBA_EXTENTS tables.

      - third party tools (PLATINUM).

   To minimize contention:

      separate redo logs(!) onto a dedicated disk;

      separate data from indexes, ...

   You should separate groups of segments representing objects
   with different behavioral characteristics among different
   tablespaces. It's a very difficult task!  Separate groups by:

      - separating segments with different backups needs;

      - separating segments with different day-to-day uses;

      - separating segments with different life spans;

- Storage objects: TABLES, CLUSTERS and INDEXES. Segments,
  extents, blocks. Allocation of extents in tablespace. Maximum
  number of extents per segment. Extent space utilization
  parameters:  INITIAL, NEXT, MINEXTENTS, MAXEXTENTS,
  PCTINCREASE.

  WARNING on cluster usage:
  -------
  Transactions on clustered tables are 6-8 TIMES SLOWER than the
  same transactions on unclustered tables.  Transactions on
  clustered tables generate an ORDER OF MAGNITUDE more rollback
  and redo information than the same transactions on unclustered
  tables.

- Extent space utilization.

     - A table needs more space, but more extents than
       MAXEXTENTS cannot be allocated. If you have not yet
       reached the maximum extents for a table, and wish to
       increase the MAXEXTENTS parameter, use:

         alter table <table_name> storage (maxextents 240);

     - Maximum possible number of extents per segment for
       versions below 7.3 depends on block size of database
       (version 7.3 has no limits):

         The first block of each segment contains an extent map
         entry for each extent.  The maximum number of extents
         allowed is therefore a function of the database block
         size and the size of each extent map entry, and can be
         calculated as:

             MAXEXTENT = ((DB_BLOCK_SIZE - 24) / 2 - 44) / 8

         This is a common extent mapping table:

         ORACLE BLOCK SIZE         GREATEST MAXEXTENT VALUE
             512                               25
             1K                                57
             2K                                121
             4K                                249
             8K                                505
            10K                                633

         Thus, a specified MAXEXTENTS value may not be honored
         exactly, because MAXEXTENTS cannot exceed the GREATEST
         MAXEXTENT VALUE.
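
         The mapping table follows directly from the formula; a
         few lines of Python reproduce it (truncating to a whole
         number of extents):

```python
def greatest_maxextents(db_block_size):
    # MAXEXTENT = ((DB_BLOCK_SIZE - 24) / 2 - 44) / 8, truncated.
    return int(((db_block_size - 24) / 2 - 44) / 8)

for size in (512, 1024, 2048, 4096, 8192, 10240):
    print(size, greatest_maxextents(size))
# -> 25, 57, 121, 249, 505 and 633, as in the table above
```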

     - Monitoring of extent allocation.

        What is the max. number of extents that MY_TABLE can
        occupy?:
             select Maxextents from SYS.DBA_TABLES
             where Table_name='MY_TABLE' and owner='ITSME';

        How many extents does it have now?:
             select count(*) from SYS.DBA_EXTENTS
             where Segment_name='MY_TABLE'
               and Owner='ITSME';

        The DBA must track extent allocation and fix situations
        where tables have more than 10 extents allocated to them.

        The DBA should monitor expansion of tables in the
        database and plan for table reorganization before running
        out of extents.

        If the platform permits it, set your Oracle block size to
        4K (4,096 bytes) or even 8K (8,192). The DBA should know
        well how to re-create the database. However, for some
        OLTP applications a 2K block will give better results.

     - Runaway size (PCTINCREASE)

       Oracle uses 50% as the default for PCTINCREASE. The size
       of extents can grow EXPONENTIALLY.

       Always use PCTINCREASE 0 (alter table <tablename> storage
       (pctincrease 0)) if you have no special ideas. For main
       tables use manual adjusting of the extents by means of
       alter table <tablename> allocate extent rather than
       adjusting the STORAGE parameters for the table.
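
       The exponential growth is easy to demonstrate (a Python
       sketch; sizes in blocks, ignoring the multiple-of-5
       rounding covered in the v7.1 allocation rules below):

```python
def extent_sizes(initial, next_, pctincrease, n_extents):
    # The first two extents come from INITIAL and NEXT; each later
    # extent grows by PCTINCREASE percent over the previous one.
    sizes = [initial, next_]
    while len(sizes) < n_extents:
        sizes.append(int(sizes[-1] * (1 + pctincrease / 100)))
    return sizes

# With the 50% default the 10th extent is ~25 times the 2nd:
# extent_sizes(10, 10, 50, 10)
#   -> [10, 10, 15, 22, 33, 49, 73, 109, 163, 244]
# With PCTINCREASE 0 every extent stays the same size.
```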

- INITIAL, NEXT ... parameters for temporary tables must be
  specified at the tablespace level.

- Database blocks. Block layout.

- Control space usage. Block space utilization parameters
  (PCTUSED, PCTFREE, INITRANS, MAXTRANS). Row chaining and row
  migration. Effects of changing storage parameters.

  - Quotas for users on tablespaces:

    Q.: Assume that a user's temporary tablespace is SYSTEM; he
        has quota on two tablespaces USERS and DATA but not on
        SYSTEM.  Will he be able to do on-disk sorts ? If yes,
        explain how.  If no, explain the simplest way to "fix"
        this.

    A.: Yes, he will be able to do on-disk sorts since the
        temporary segments created for this are owned by SYS.

- Allocating extents in Oracle v7.1.x

   -  Without ALTERing PCTINCREASE:

      V71_RULE1: ORACLE v7.1.x will translate the specified
                 extent size to a number of Oracle blocks and
                 round it up to the next multiple of 5.

      V71_RULE2: It will then try to allocate the number of
                 blocks resulting from V71_RULE1

      V71_RULE3: If it can't allocate the number of blocks
                 resulting from V71_RULE1, it will try to
                 allocate the exact number of blocks as
                 requested.

      V71_RULE4: If it can't allocate the number of blocks
                 resulting from V71_RULE1 and V71_RULE3, an error
                 ORA-1547 will be issued

      V71_RULE5: If V71_RULE2 or V71_RULE3 succeed, Oracle v7.1
                 checks if <5 blocks are left free in the
                 tablespace, and allocates them too in this case.

   - Allocating extents when altering PCTINCREASE.

     The algorithm for calculating the size of the new extent to
     be allocated in rdbms v7 simply uses the actual allocated
     size of the previous extent:

     CALC_SIZE(ext#0) = round_to_5(INITIAL)

     CALC_SIZE(ext#1) = round_to_5(NEXT)

     CALC_SIZE(ext#n) = round_to_5(PHYS_SIZE(ext#n-1)*(1.0
                        + PCTINCREASE/100))
         where round_to_5(X) rounds X up to the next multiple
               of 5 Oracle blocks

                  and

         PHYS_SIZE(ext#m) = select Blocks from DBA_EXTENTS
                          where Owner = Table_owner
                            and Segment_name = Table_name
                            and Extent_id = m
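
     These rules can be sketched in Python (extent sizes in
     Oracle blocks; round_to_5 rounds UP to the next multiple of
     5, per V71_RULE1):

```python
import math

def round_to_5(blocks):
    # Round up to the next multiple of 5 Oracle blocks.
    return math.ceil(blocks / 5) * 5

def calc_next_extent(prev_phys_blocks, pctincrease):
    # CALC_SIZE(ext#n): grows from the ACTUAL allocated size of
    # the previous extent (PHYS_SIZE), not from the STORAGE clause.
    return round_to_5(prev_phys_blocks * (1 + pctincrease / 100))
```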

- Extent fragmentation and block fragmentation.

  - Monitoring rows

    Give SQL queries for the following -

      a) The actual number of rows beginning in each block of a
         table:

         >>> BEGIN OF SCRIPT
           Select substr(rowid,1,8)||substr(rowid,15,4) "BLOCK",
                count(*) "ROWS"
           from <table_name>
           group by substr(rowid,1,8)||substr(rowid,15,4);
         >>> END OF SCRIPT

      b) The number of blocks that actually contain data (as
         opposed to the number of blocks allocated for the
         table.):

         >>> BEGIN OF SCRIPT
           Select count
               (distinct(substr(rowid,1,8)||substr(rowid,15,4)))
           from <table_name>;
         >>> END OF SCRIPT

     You can also monitor these via the ANALYZE TABLE statement,
     which provides all the necessary information in a database
     table.

- monitoring views for storage parameters (dba_tablespaces,
   dba_tables, user_tables,...)

   - Managing datafiles

   Q: How do you tell what the MAXDATAFILES parameter was set to
      upon database creation, in V6 and in Oracle7?

   A: Within V6, you can connect as 'sys' and do the following
      query to see the max datafiles parameter the database was
      created under:

      select count(*) from x$kcfio;

   Within Oracle7, you can dump the control file to a trace file
   and see what the create control file command has for
   maxdatafiles. To dump the control file use:

   ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

   To avoid recreating the database, always set the MAXDATAFILES
   parameter to a very high number. The default is too low.

                      3.3 Create a database
                          -----------------

- Creation of database.

The steps required to create a database:

      1. Organize the contents of the database.

      2. Design the structure of the database to reduce
         contentions and fragmentation. You should select
         right places for data, control and redo log files.

      3. Prepare the operating system environment for the
         database creation and decide on the unique instance
         name. The instance name is determined by the environment
         variable ORACLE_SID on UNIX boxes (the logical ORA_SID
         on some other platforms). SID stands for system
         identifier. For UNIX:
         $ ORACLE_SID=<instance_name>; export ORACLE_SID
         $ echo $ORACLE_SID
         Notes:
           - for some platforms like NetWare or OS/2 only
             one Oracle instance per server machine is allowed.
             For those platforms terms "instance"  and "SID" do
             not exist;
           - database name (DB_NAME INIT.ORA param.) and SID must
             have the SAME VALUE for non-parallel databases.

      4. Copy and edit the parameter file INIT.ORA. The actual
         name of this file is usually INIT<instance_name>.ORA.

         You should prepare:

         -  INIT.ORA parameters:

            DB_NAME and, optionally, DB_DOMAIN

            CONTROL_FILES

            DB_BLOCK_SIZE

            DB_BLOCK_BUFFERS (may be changed at any time)

            PROCESSES        (------- "" --------)

            DB_BLOCK_SIZE - size of the database block. If you
                            have no reason to modify it, leave it
                            at the default.  CREATE DATABASE
                            parameter

            DB_NAME       - CREATE DATABASE parameter

            DB_BLOCK_BUFFERS - usually the default value is
                               insufficient to run all the
                               installation scripts

            SHARED_POOL_SIZE - see discussion above

            DML_LOCKS        - number of simultaneous system-wide
                               locks (max number of DMLs per
                               transaction multiplied by max
                               number of transactions in the
                               system)

            OPEN_CURSORS     - the more the better if you use
                               Forms

            LOG_CHECKPOINT_INTERVAL - see checkpoints and log
                                      switch discussion

            LOG_CHECKPOINT_TIMEOUT - ---- "" ---------------

            SEQUENCE_CACHE_ENTRIES - the total number of sequence
                                     numbers that are cached in
                                     the SGA at one time.  If a
                                     sequence is created with the
                                     NOCACHE option, setting this
                                     parameter will have no
                                     effect on the sequence

            PROCESSES    - the number of processes (including
                           background proc) allowed at one time

            SESSIONS   - the number of user and system sessions
                         allowed at one time. Default value is
                         PROCESSES * 1.1.  Unless your
                         applications create concurrent recursive
                         sessions the default should be
                         sufficient.

            SORT_AREA_SIZE - will affect the performance of
                             select statements (see appropriate
                             section)

            ENQUEUE_RESOURCES - represents the number of
                                resources that can be locked by
                                lock manager

            It is convenient, especially in a parallel server
            environment, to use the IFILE=<filename> INIT.ORA
            parameter, which includes the file <filename>.

         - monitoring current INIT.ORA parameters:

           - SHOW PARAMETER command from SQL*DBA

           - examine V$PARAMETER tables

      5. Start the instance (CONNECT INTERNAL and STARTUP NOMOUNT
         command from SQL*DBA).

         Examples: $ sqldba mode=line (or lmode=y)

                     connect internal

                     startup nomount

      6. Execute the CREATE DATABASE statement. This involves:

         - creating and/or initializing the control file and redo
           log files;

         - don't forget the CHARACTER SET <charset> parameter if
           the character set differs from US7ASCII. Valid
           character sets for Russian are CL8ISO8859P5, RU8PC866,
           CL8MSWINDOW31 (CL8MSWIN1251 - new name), RU8PC855,
           CL8MACCYRILLIC, CL8MACCYRILLICS, RU8BESTA for ASCII
           based platforms and CL8EBCDIC1025 and CL8BS2000 for
           EBCDIC based ones.

         - creating new data files or erasing data that existed
           in previous datafiles;

         - creating the data dictionary containing:

           - all the necessary dictionary tables in user schema
             SYS (script SQL.BSQ)

           - users: SYS (password change_on_install) and SYSTEM
             (manager);

         - populating the data dictionary just created: views,
           public synonyms and grants (CATALOG.SQL), procedural
           scripts (CATPROC.SQL). You can do this manually, or
           CREATE DATABASE will do it for you via the INIT.ORA
           parameter:

           INIT_SQL_FILES=
             (SQL.BSQ,... CATPROC.SQL, CATALOG.SQL,...,...).
                for  RDBMS < 7.1.3

           INIT_SQL_FILES is no longer supported for RDBMSs
               beginning with 7.1.3

           There may be a problem in some releases of Oracle with
           the CATPROC.SQL script, which creates the data
           dictionary catalog views that are necessary for
           procedural objects and snapshots.  Attempting to run
           this script by itself will result in errors.
           CATPROC.SQL is dependent on two other scripts:
           STANDARD.SQL (PL/SQL packages for the procedural
           option) and DBMSSTDX.SQL (extensions to the STANDARD
           package). The CATPROC.SQL script must be run after
           STANDARD.SQL and before DBMSSTDX.SQL. Rely upon the
           INSTALLER to create new instances and databases.

           In any case files crdb<instance_name>.sql and
           crdb2<instance_name>.sql (UNIX) or BUILD_DB.SQL
           (NetWare) generated by installer may be used.

           Other scripts which may need to be included in the
           list ($ORACLE_HOME/rdbms/admin):

               - CATALOG6.SQL - set up V6 compatible dictionary
                 views

               - EXPVEW5.SQL - EXP/IMP V5 views

               - CATEXP6.SQL - EXP/IMP V6 views

               - UTLXPLAN.SQL - PLAN_TABLE for EXPLAIN PLAN

               - UTLMONTR.SQL - grants access to the dynamic
                                performance tables to PUBLIC
               - UTLBSTAT.SQL - creates a set of statistics
                                tables at the start of
                                performance tests

               - DBMSLOCK.SQL - package allowing to create user
                                defined locks.

               - DBMSALTR.SQL - creates DBMS alerts procedures
                                which allow applications to be
                                notified whenever values of
                                interest in the database change.

               - DBMSPIPE.SQL - Database pipes. Allows sessions
                                within the same instances to
                                communicate with each other.

               - DBMSOTPT.SQL - allows users to output messages
                                from procedures and triggers.

               - DBMSDESC.SQL - allows developers to describe
                                the arguments of a stored
                                procedure.

               - DBMSMAIL.SQL &  UTLMAIL.SQL- creates packages
                                needed to send mail via
                                Oracle*Mail.

               - PUPBLD.SQL - for the product user profile (in
                   the SQL*Plus directory)

               - CATSTAT.SQL - executes ANALYZE command on data
                               dictionary for cost-based
                               optimization

               - UTLCHAIN.SQL - creates table to collect chained
                                rows after ANALYZE command

          These files are located in $ORACLE_HOME/dbs and
          $ORACLE_HOME/rdbms/admin on UNIX boxes.

         Notes:

         -  if the database name is specified in the CREATE
            DATABASE statement, it must be the same as in the
            INIT.ORA file.

         -  Usually an initial database is created as a part of
            the installation procedure by the installer. In this
            case the installer issues CREATE DATABASE and builds
            the new database during installation (UNIX and other
            multiuser platforms).  However, on some platforms,
            for example NetWare or OS/2, the installer does NOT
            create a new "fresh" database; instead it copies
            pre-created data files for the SYSTEM tablespace,
            control and redo log files from the distribution
            media. In this case the DBA can't control some
            parameters, for example, the CHARSET parameter. You
            had better (1) create a new database after
            installation or (2) log in as SYS and issue:

            update SYS.PROPS$ set Value$ = '<charset_needed>'
                   where Name = 'NLS_CHARACTERSET';

            and restart the instance.

            Be careful! If you specify wrong charset_needed, then NO
            WAY to restart database!
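
            Before updating SYS.PROPS$ it is worth checking the
            current value first (the property name is
            NLS_CHARACTERSET). A quick check, run as SYS:

            select Value$ from SYS.PROPS$
                   where Name = 'NLS_CHARACTERSET';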

      7. Ensure the safety of the database by creating
         multiplexed redo log files and control files.

      8. Define the data dictionary and views to monitor
         database.

      9. Define and create tablespaces and rollback segments.

     10. Create user's schema objects.

          Example of BUILDALL.SQL file:

     >>> Begin of SQL*DBA script
       rem You should use  SQLDBA MODE=LINE
       rem
       rem  This script is used in Personal Oracle environment
       rem  and supposes that:
       rem
       rem   directory %rdbms71% contains INIT.ORA file.
       rem     Environment variable %rdbms71% from
       rem     \windows\ORACLE.INI references
       rem     directory \ORAWIN\rdbms71. INIT.ORA file
       rem     is located there and  that's why
       rem     PFILE parameter is not presented in
       rem     STARTUP NOMOUNT command below.
       rem   directory \orawin\dbs contains database,
       rem     redo log files and control file (value of
       rem     CONTROL_FILES parameter in INIT.ORA).
       rem     Environment %oracle_home% points to \orawin
       rem   directory %oracle_home%\admin\ contains all the
       rem        needed system scripts.
       rem
       rem This file must be run from the directory containing
       rem the initialization file.
       rem
       rem Instance startup (background processes + loading SGA)

       startup nomount

       rem Let's create log file
       set echo on
       spool %rdbms71%\trace\build.log
       rem Create database

       create database oracle
         controlfile reuse
         logfile '%oracle_home%\dbs\wglog1.ora' size 200K reuse,
                 '%oracle_home%\dbs\wglog2.ora' size 200K reuse
         datafile '%oracle_home%\dbs\wgsys.ora' size 10M reuse
         maxdatafiles 25
         maxlogfiles 21
         maxlogmembers 3
       /*  maxloghistory 200 - for parallel server (max # of
                 archived redo logs) */
          character set RU8PC866;

       rem   After creation database is MOUNTED and OPENED and
       rem   now we can run the necessary installation scripts:
       rem Two users exist now:
       rem system/manager
       rem sys/change_on_install

       rem For V7 catalog
       @%rdbms71%\admin\catalog

       rem For V6 catalog
       @%rdbms71%\admin\catalog6
       rem For procedural option
       @%rdbms71%\admin\catproc
       rem for snapshots
       @%rdbms71%\admin\catsnap
       @%rdbms71%\admin\dbmssnap
       rem Some additional views

         rem Export/import tables/views
         @%rdbms71%\admin\catexp6

         rem Views for Oracle locks
         @%rdbms71%\admin\catblock

       rem  And now we will create synonyms of dictionary views
       rem  for the SYSTEM account. All the DBAs should have
       rem  these synonyms.

       connect system/manager
       @%rdbms71%\admin\catdbsyn
       rem execute V7PUP.SQL for oracle tools
       rem To prevent message "product user profile is not loaded"
       rem upon SQL*Plus startup
       @%oracle_home%\dbs\v7pup.sql

       rem Create rollback segs
       rem     Let's create a temporary rollback segment in the
       rem     SYSTEM tablespace, to allow tablespaces other
       rem     than SYSTEM to be created:

       create rollback segment TEMP;
       rem     and make it active:
       alter rollback segment TEMP online;
       rem
       rem Let's create tablespace for rollback segments
       create tablespace RBS datafile '%oracle_home%\dbs\rbs.ora'
                                     size 18M;
       rem       Create two private rollback segments in RBS
       rem       tablespace and activate them (online):
       create rollback segment RBS1 tablespace RBS storage
              (initial 100k next 100k minextents 60 optimal 6M);
       alter rollback segment RBS1 online;

       create rollback segment RBS2 tablespace RBS storage
              (initial 100k next 100k minextents 60 optimal 6M);
       alter rollback segment RBS2 online;

       rem       Don't forget to insert into INIT.ORA the line:
       rem       ROLLBACK_SEGMENTS = (RBS1, RBS2)
       rem       This line brings these two rollback segments
       rem       ONLINE at startup time, just as the
       rem       ALTER ROLLBACK SEGMENT <name> ONLINE; command
       rem       does.
       rem
       rem Now we drop temporary rollback segment created before
       alter rollback segment temp offline;
       drop rollback segment temp;
       rem
       rem Let's create a temporary tablespace. Pay special
       rem  attention to the STORAGE clause.
       create tablespace TEMP datafile '%oracle_home%\dbs\temp.ora'
                                                   size 20M;
       alter tablespace TEMP default storage
                    (initial 200k next 300k pctincrease 50);
       rem
       rem Tablespace for Oracle tools (forms, browser ...)
       create tablespace TOOLS datafile
             '%oracle_home%\dbs\tools.ora' size 30M;

       rem  Tablespace for users
       create tablespace USERS datafile
                       '%oracle_home%\dbs\users.ora' size 30M;
       rem
       rem Change SYSTEM's default and temporary tablespace
       Alter user SYSTEM default tablespace TOOLS
                         temporary tablespace temp;
       rem
       rem Change the password of the SYS account
       alter user SYS identified by SYSTEM;

       rem And shutdown database
       connect internal
       shutdown
       rem Close log file
       spool off
     >>> End of SQL*DBA script

- Altering database

  Useful commands:

    ALTER DATABASE OPEN RESETLOGS - resets the current log
            sequence number to 1 and invalidates all the redo
            entries in the online and archived redo log files.

- Tablespaces: creation, dropping, altering, enlarging, taking
               tablespace online/offline, renaming files in
               tablespace.

- Rollback segments: creation, altering, dropping, taking
  online/offline. Types of rollback segments. Using extent space
  utilization parameters for rollback segments. OPTIMAL storage
  parameter. Contention for rollback segments. Optimal number
  of extents in rollback segment. Optimal extent size of
  rollback segment.

              3.4 Start up and shutdown an instance
                  --------------------------------

- Startup and shutdown steps: nomount, mount, open. INIT.ORA file.

 - THREE STARTUP STAGES

 a) Startup Nomount

    The initSID.ora file is read.

    The control files are determined at this time.

    The sizes of the SGA and PGA are determined.

    The instance is started.

    The SGA and background processes are initialized.

 b) Startup Mount

    Opens the control files defining the database.

    Verifies that the control file is correct.

    Obtains a global "mount_db" instance lock for the database.

    Determines if this is the first instance mounting the
    database.

 c) Startup Open

    Opens the "online" or "openable" database files defined in
    the control file.

    Obtains a global "startup" instance lock on the database.

    Determines if this is the first instance opening this
    database.

    Verifies and opens the online redo logs.

    Instance locks are obtained for each "openable" data file.

  If this is the first "startup" instance, crash recovery is
  performed on all database files using the current logs of all
  instances.

  Another instance can not start up during this time. If crash
  recovery is not possible because it needs older logs, then
  media recovery is needed and an error is signaled.
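
  The three stages above can be stepped through manually in
  SQL*DBA; a minimal sketch (the instance is taken from the
  current environment):

     connect internal
     startup nomount
     alter database mount;
     alter database open;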

 - SHUTDOWNS:

     - NORMAL

         a) stops access to the database

         b) waits until all users exit

         c) purges the cache and redo log, forcing dirty
            buffers to the DB files

         d) drops file locks

         e) updates file headers; ongoing transactions are
            allowed to complete

         f) closes the thread

         g) drops the database instance lock.

     The control file and the database files are synchronized.

    - IMMEDIATE

         a) stops access to the database

         b) cancels current calls and rolls them back

         c) purges the cache and redo log, forcing dirty
            buffers to the DB files

         d) drops file locks

         e) updates file headers

         f) closes the thread

         g) drops the database instance lock

    -  ABORT

         a) stops access to the database

         b) drops file locks

         c) drops the instance lock
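
     In SQL*DBA the three modes correspond to the following
     commands (one would issue only one of them, of course):

        connect internal
        rem NORMAL (the default) - waits for users to exit:
        shutdown
        rem cancels current calls and rolls them back:
        shutdown immediate
        rem no cleanup; crash recovery runs on next startup:
        shutdown abort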

- SQL*DBA or Server Manager.

- Altering the database. Changing parameters when database is
  online.

- Restricted sessions.

  You may want to restrict access to the database during special
  processing periods, for example to let high-profile jobs have
  the machine to themselves. Many reasons may require restricted
  access.

  In Oracle7 you may issue command:

    alter system enable restricted session;

               and

    alter system disable restricted session;

  DON'T forget to take database out of restricted mode to allow
  other normal users to work.

  If database access was already unrestricted and you issue the
  alter system disable restricted session; command, then NO HARM
  is done.
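
  An instance can also be opened directly in restricted mode, so
  that only users holding the RESTRICTED SESSION privilege can
  connect. A minimal SQL*DBA sketch:

     connect internal
     startup restrict
     rem ... maintenance work ...
     alter system disable restricted session;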

             3.5 Prepare to install Oracle
                 -------------------------

- Oracle options: parallel server, parallel query, distributed,
  replication

- Oracle directory structure.

- Pre-installation tasks (See "Create a database" 3.3 above)

- Execute installation scripts.

- Checking all installation logs.

- Post-installation tasks

        3.6 Introduction to backup and recovery mechanism
            ---------------------------------------------

- physical and logical backup/recovery

3.6.1 Physical backup/recovery
      ------------------------

- Archive log mode. Types of failures (user, statement, user
  process, instance (automatic), media).

- Physical backup methods: offline (cold), online (hot). Full and
  partial backups.

- Recovery without archiving. Recovery steps.

- Structures required for recovery.

- Operating with archiving.

- Archive modes overview.

- Display archive status.

- Specify and change archive modes.

- Enable and disable archive processing.
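
  Switching a database into ARCHIVELOG mode and displaying
  archive status can be sketched as follows (SQL*DBA; automatic
  archiving additionally needs LOG_ARCHIVE_START = TRUE and
  LOG_ARCHIVE_DEST set in INIT.ORA):

     connect internal
     startup mount
     alter database archivelog;
     archive log list
     alter database open;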

- Online partial database backup.

- Offline partial tablespace backup.

- Recover all transactions after media failure. Recover online
  redo log and control file.

- Incomplete recovery.
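
  After a media failure, complete recovery roughly follows these
  steps; an UNTIL clause turns it into incomplete (point-in-time)
  recovery. A sketch (the timestamp is illustrative):

     connect internal
     startup mount
     rem restore the damaged data files from backup first, then:
     recover database;
     alter database open;

     rem incomplete recovery variant:
     rem   recover database until time '1995-06-01:12:00:00';
     rem   alter database open resetlogs;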

3.6.2 Logical backup/recovery & space compaction
      ------------------------------------------

- export/import.

- Export modes. Possible export scenarios.

- Import modes.

- Questions and answers

  Q: What should you be aware of with a full import from one
     database to another? Is it different in Oracle7?

  A: Be careful with export/import from one database into
     another on the same machine when running V6: pre-create all
     of the tablespaces with the same names, regardless of
     whether you want them or not. With a full import all the
     tablespaces are created and the datafiles are reused. The
     export file contains the exact paths to the datafiles. For
     example, although I am importing from my test database into
     my production database, the production tablespace files
     will be reinitialized. The production database will crash
     at the next file access, and the only way to recover is to
     go to a backup.

     This is how the create tablespace statement looks in import:

      . importing user SYSTEM
        "CREATE TABLESPACE "TEST" DATAFILE
        'DISK$TEMP:[VMSSPT.VMSSPT1.DB_VMSSPT3]TES"
         "T.DBS' SIZE 2097152 REUSE DEFAULT STORAGE (INITIAL
          10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 99 PCTINCREASE
          50) ONLINE"

      A possible workaround is to rename the dbs files of all
      your tablespaces, precreate ALL the tablespaces, or simply
      don't issue a full import.

      It's natural to think that all you have to do is export
      from one database and import into the other; however, this
      can cause problems.

      This doesn't apply to the Oracle7 import utility: it has
      the DESTROY option added for this issue. DESTROY=N tells
      import that if the file exists, it should not be destroyed;
      instead an error is given on the create tablespace
      statement. DESTROY=Y will reuse the tablespace file.
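
      Command lines for a full export and a cautious full import
      using this option might look like this (the dump file name
      is illustrative):

       exp system/manager full=y file=full.dmp
       imp system/manager full=y file=full.dmp destroy=n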

             3.7 Create and Manage Database Users
                 --------------------------------

- Controlling user access.

  - Monitoring user sessions

    - V$SESSION view

    - Monitoring idle time

      >>>> Begin of script

      rem****************************************************
      rem****************************************************
      rem              IDLE_TIME MONITOR SCRIPT
      rem****************************************************
      rem****************************************************
      rem
      rem
      rem   This script provides an easy way for dba to
      rem   monitor session idle times. The output of
      rem   the script shows the sid for each session
      rem   running against the database, the last time
      rem   this session was active (including date and
      rem   time), the current time and the amount of
      rem   time (in seconds as well as minutes) elapsed
      rem   since the session became inactive.  This
      rem   script is written to be run from the sys
      rem   account in sqlplus.
      rem
      column sid format 999
      column last format a22 heading "Last non-idle time"
      column curr format a22 heading "Current time"
      column secs format 99999999.999 -
             heading "idle-time |(seconds)"
      column mins format 999999.99999 -
             heading "idle-time |(minutes)"
      select sid,
             to_char((sysdate - (hsecs - value)/(100*60*60*24)),
                'dd-mon-yy hh:mi:ss') last,
             to_char(sysdate, 'dd-mon-yy hh:mi:ss') curr,
             (hsecs - value)/(100) secs,
             (hsecs - value)/(100*60) mins
             from V$TIMER, V$SESSTAT
             where statistic# =
                   (select statistic# from V$STATNAME
                    where Name = 'process last non-idle time');

      >>>> End of script

- Managing database security for users.

- Creating users. Altering users. Dropping an existing user.
  Monitoring users. Killing user's session.
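
  The user life cycle above can be sketched in SQL (user,
  password and tablespace names are illustrative):

     create user dima identified by secret
            default tablespace USERS
            temporary tablespace TEMP
            quota 10M on USERS;
     alter user dima identified by new_secret;
     rem to kill a session, look up its SID and SERIAL# first:
     select SID, Serial# from V$SESSION where Username = 'DIMA';
     alter system kill session '<sid>,<serial#>';
     drop user dima cascade;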

- Controlling database system resources. Managing profiles.
  Viewing profile information.
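
  A resource profile might be sketched as follows (names and
  limits are illustrative; RESOURCE_LIMIT = TRUE must be set in
  INIT.ORA for the limits to be enforced):

     create profile CLERK limit
            sessions_per_user  2
            idle_time          30
            connect_time       480;
     alter user dima profile CLERK;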

             3.8 Ensure database integrity
                 -------------------------

- Managing constraints.

   Q: What is a query that gives a DBA the primary/foreign key
      relationships of tables within the database?

   A:
     >>> Begin of script
     column For_owner format a10
     column For_table format a10
     column Pri_owner format a10
     column Pri_table format a10
     column For_col format a15
     column Pri_col format a15
     rem
     select A.Owner For_owner, A.Table_name For_table,
            C.Column_name For_col, B.Owner Pri_owner,
            B.Table_name Pri_table, D.column_name Pri_col
     from DBA_CONSTRAINTS A, DBA_CONSTRAINTS B,
          DBA_CONS_COLUMNS C, DBA_CONS_COLUMNS D
     where A.R_constraint_name = B.constraint_name
     and  A.Constraint_type = 'R'
     and  B.Constraint_type = 'P'
     and  A.R_owner         = B.owner
     and  A.Constraint_name = C.Constraint_name
     and  B.Constraint_name = D.Constraint_name
     and  A.Owner           = C.Owner
     and  A.Table_name      = C.Table_name
     and  B.Owner           = D.Owner
     and  B.Table_name      = D.Table_name;
     >>> End of script

- Data dictionary views for constraints.
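
  Disabling and re-enabling a constraint can be sketched as
  follows (table and constraint names are illustrative):

     alter table EMP add constraint EMP_DEPT_FK
           foreign key (DEPTNO) references DEPT (DEPTNO);
     alter table EMP disable constraint EMP_DEPT_FK;
     rem enabling fails if existing rows violate the constraint:
     alter table EMP enable constraint EMP_DEPT_FK;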

             3.9 Overview of database privileges
                 --------------------------------

- System privileges and object privileges.

  !!! Important note:

         If user A has a CREATE ANY <object> privilege (e.g.
         CREATE ANY TABLE) and creates an object in user B's
         schema (for example, CREATE TABLE B.T (C char(1));),
         then the owner of the created object is user B rather
         than A (in our example the owner of table T is B even
         though the table was created by user A). This rule
         extends to all types of objects.

   !!! End of important note

- Grant and revoke privileges.

- Managing roles.

  There is a very CONFUSING thing in Oracle 7.0, 7.1, ...

  Three pre-defined roles exist:
    CONNECT
    RESOURCE
    DBA.

  These roles were designed for compatibility with Oracle
  v6. Several privileges are assigned to them.  But there is one
  EXCEPTION:

   Statements:

       grant RESOURCE to <Object>;

                 and

       grant DBA to <Object>;

    implicitly execute the statement:

       grant UNLIMITED TABLESPACE to <Object>;

    Similarly, the statements:

       revoke RESOURCE from <Object>;

          and

       revoke DBA from <Object>;

        implicitly execute

        revoke UNLIMITED TABLESPACE from <Object>;

       If a user has the UNLIMITED TABLESPACE privilege he can
       write to any tablespace, IGNORING ANY QUOTAS.

       Granting the CONNECT role does not cause any extra
       privileges!
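
       If a user needs the RESOURCE privileges but should still
       be bound by quotas, one workaround (the user name is
       illustrative) is to revoke the implicit grant and set
       quotas explicitly:

          grant RESOURCE to dima;
          revoke UNLIMITED TABLESPACE from dima;
          alter user dima quota 5M on USERS quota 0 on SYSTEM;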

               3.10 Networked Oracle
                    ----------------

- SQL*Net v1 and SQL*Net v2. Client-server and server-server
  connections. Multi- and single-threading. Process-per-user
  (single-thread) and multi-thread architecture.
  Listener or server process.

  Important Notes:

   1. SQL*Net V1 on one side can ONLY work with V1 on other side
      and V2 can interact with V2 only. V1 <-> V2 and V2 <-> V1
      combinations are NOT ACCEPTABLE.

   2. Oracle has VERY CONFUSING terminology in SQL*Net V1!
      There is a process that listens for connection requests
      from clients. This process is called the "SERVER process"
      in SQL*Net V1 and the "LISTENER process" in SQL*Net V2.
      The valid name for this process is LISTENER, of course,
      because the server (again, sometimes called user) process
      is responsible for interaction with the user machine in a
      single-thread environment. The "shared server" is the
      SQL*Net V2 counterpart in a multi-thread environment.

  - SQL*Net v1 connect string:

   usnm/pwd@PROTOCOL_PREFIX:SERVER_NAME:INSTANCE_NAME[,BUFFER_SIZE]

     where INSTANCE_NAME is system identifier (SID) and
     BUFFER_SIZE is the size of the context area used to
     pass data between SQL*NET and communication protocol. For
     TCP/IP default buffer size is 4096.
     For some platforms, like NetWare or OS/2, which do not
     support multiple instances on the same machine, the connect
     string looks like:

           usnm/pwd@PROTOCOL_PREFIX:LISTENER_NAME

     where LISTENER_NAME - name of the listener process specified
     when listener starts.

     examples:

              sqlplus scott/tiger@t:vabank:test

              sqlforms dima/dima@x:ora7

  It is also possible to set DEFAULT connection and ALIASES on
  client side.

    Default connections are provided with the environment
    variable TWO_TASK (UNIX), or the LOCAL parameter in
    ORACLE.INI or CONFIG.ORA for Windows, DOS, OS/2 and NetWare.

      Example for UNIX client:

      $ TWO_TASK=T:hq:loc; export TWO_TASK

      $ sqlplus scott/tiger      (will try to connect to hq:loc)

    ALIASES can be specified in the /etc/sqlnet file on UNIX
    platforms, or via the REMOTE and DB_NAMES parameters on
    desktop platforms.

     UNIX example: /etc/sqlnet may contain the line:

             hq    t:hq:loc

     then you could enter
      sqlplus scott/tiger@hq

  - SQL*Net V2 (see also APPENDIX A).

    The Transparent Network Substrate (TNS) resolves
    server-server connectivity issues. The Multiprotocol
    Interchange and TNS allow SQL*Net v2 connections to be made
    independent of the communication protocol and operating
    system.

    - Connect descriptors:
      the server and instance portions of an object's Fully
      Qualified Object Name (FQON) are identified by CONNECT
      DESCRIPTOR:

          (DESCRIPTION =
                (ADDRESS =
                   (PROTOCOL = TCP)
                   (HOST = VABANK)
                   (PORT = 1521))  /* the port for V1 is in the
                                      /etc/services file
                                      (orasrv 1525/tcp) */
                (CONNECT_DATA =
                   (SID = test)))  /* SID - instance name */

      ADDRESS part of this descriptor is PROTOCOL DEPENDENT. For
      DECnet and IPX/SPX it will be different.

      To connect you should issue:

        sqlplus scott/tiger@(DESCRIPTION=(ADDRESS=(PROTOCOL=...

    - Service names. TNSNAMES.ORA file in TNS_ADMIN
      (oracle_home/network/admin).

      It's tedious to type in a connect descriptor each time a
      tool is invoked. You can assign a so-called SERVICE NAME
      (or ALIAS) to a connect descriptor, for example:

       HQ=(DESCRIPTION =          /* HQ - service name (alias) */
                (ADDRESS =
                   (PROTOCOL = TCP)
                   (HOST = loc)
                   (PORT = 1521))
                (CONNECT_DATA =
                   (SID = test)))  /* SID - instance name */

      Now to connect you should only type:

             sqlplus scott/tiger@HQ

      Note: Service name is CASE INSENSITIVE

      The default connection mechanisms (TWO_TASK variable,
      LOCAL=) are the SAME as those described above for SQL*Net
      V1.

      Service names are stored in the file TNSNAMES.ORA. This
      file should be located on each client and server on the
      network and should contain aliases for all servers
      available from this computer.

      An additional file, SQLNET.ORA, may be created to specify
      additional diagnostics beyond the defaults provided.
      SQLNET.ORA is used on both the client and the server
      (during server-server communications only) side.

        - Interprocess communications (IPC) OM 5-6.1995 p.87

          When the client and server portions of an Oracle
          application run on the same computer, they can
          communicate efficiently without a network by IPC.
          Remote clients, however, automatically attempt IPC to
          a TNS listener as well. This feature can be disabled
          by inserting
            AUTOMATIC_IPC = OFF
          in the client version of SQLNET.ORA

        - Detecting dead connections (OM 5-6.1995 p.87)

          SQL*Net 2.1 has a new feature that allows it to detect
          broken client connections and have the server
          automatically terminate them. This feature works by
          regularly sending a probe packet through a connection
          to determine its status. The interval for sending
          probe packets can be set by the SQLNET.EXPIRE_TIME
          parameter in the SQLNET.ORA on the client and server
          sides. For example,
           SQLNET.EXPIRE_TIME = 10
          sets the interval to 10 minutes.

             Important note:
                If this interval is set to a low value, the
                frequent probes sent to all clients generate
                lots of network traffic! Oracle recommends a
                10-minute interval.

    - Listener.

      Only ONE listener is needed per machine, regardless of the
      number of instances.

      Each server MUST contain a LISTENER.ORA file, which lists
      the names and addresses of all the TNS "listener"
      processes on the machine and the instances they support.
      The listener process receives connections from SQL*Net v2
      clients.  LISTENER.ORA contains:

        - a header section

        - an interprocess call (IPC) address definition section

        - instance definitions

        - operational parameters
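
      A minimal LISTENER.ORA covering these sections might look
      like this (a sketch; the host, SID, port and path are
      illustrative):

       LISTENER =
         (ADDRESS_LIST =
           (ADDRESS = (PROTOCOL = IPC)(KEY = test))
           (ADDRESS = (PROTOCOL = TCP)(HOST = vabank)(PORT = 1521)))

       SID_LIST_LISTENER =
         (SID_LIST =
           (SID_DESC =
             (SID_NAME = test)
             (ORACLE_HOME = /orasw/v7016)))

      The listener is then controlled with the lsnrctl utility:
      lsnrctl start, lsnrctl status, lsnrctl stop.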

    - Supporting the SQL*Net V2 configuration tool.

      TNSNAMES.ORA, LISTENER.ORA and the other files should be
      automatically generated via the SQL*Net v2 configuration
      tool. In order to use this tool, an account NET_CONT (with
      full DBA privileges) must be created. The script
      NCSSHEMA.SQL, located in the $ORACLE_HOME/network/config
      directory, creates all the necessary tables, which should
      be filled with data by the user via special forms.

      The configuration tool is called net_conf.

    - Multi Protocol interchange.

      A network community is a set of SERVERS that communicate
      with each other via a single protocol.

      MPIs may be physically configured so that multiple access
      paths are available between servers. The MPI will select
      the most appropriate path based on path availability and
      network load.

      Each MPI is comprised of three components:

      -- connection managers, which manage a Listener process to
         detect connection requests;

      -- a Navigator which chooses the best possible path through
         the TNS network;

      -- the Interchange Control Utility.

      The NETWORK COMMUNITY in which server exists should be
      added to the connect descriptors for its databases:

       hq=(description =
                (address =
                   (COMMUNITY=TCP.HQ.COMPANY)
                   (protocol=tcp)
                   (host=loc)
                   (port=1521))
           (connect_data =
                   (sid= test)))

       TNSNAMES.ORA is automatically generated via Configuration
       Tool. Other files are also generated:

         TNSNAV.ORA - describes the communities of each MPI on
                      the network
         TNSNET.ORA - contains an overview of the layout of the
                      network for ALL MPIs. Lists all the
                      communities on the network and the
                      relative costs of traversing them. These
                      costs reflect the throughput capability of
                      the community. They are used to choose
                      which of multiple available access paths
                      should be used.
         INTCHG.ORA - contains parameters that control the
                      behavior of each MPI.

    - Oracle Names
      comes with SQL*Net v2.1. It uses the Distributed Option to
      manage the distribution of the network configuration files.

- Tuning SQL*Net.

  - the use of distributed objects, such as snapshots, to
    replicate static data to remote database;

  - the use of DB procedures to reduce the amount of data sent
    across the network;

  - the use of SQL*Net V2 whenever applicable. V2 sends fewer
    network packets;

  - providing enough MPIs;

  - using homogeneous servers to eliminate the need for MPIs.

- Networking in UNIX

   - Identification of hosts in the UNIX TCP/IP protocol for
     BOTH SQL*Net V1 and SQL*Net V2

     A host is a server that is capable of communicating with
     another server via network. Each host maintains a list of
     hosts with which it can communicate in file:

                    /etc/hosts

     Sample of /etc/hosts
                  128.0.0.1 vabank VABANK
                  128.0.0.2 fors
                  128.0.0.3 docum archive

     The first field is an internet address, the second field a
     host name, and the third and subsequent fields host name
     aliases. PC software emulates this mechanism. It's possible
     to use TCP/IP on NetWare, DOS, OS/2, Windows NT and
     MS-Windows. The location of the "hosts" file depends on the
     emulation package and platform. The current server (where
     the "hosts" file is located) should also be mentioned in
     this file.

     The /etc/hosts file is mandatory for both SQL*Net V1 and
     SQL*Net V2, although an IP address can be used instead of a
     host name.

   - Identification of Databases. /etc/oratab file.

     All databases that run on a host and are ACCESSIBLE to the
     network MUST be listed in a file named /etc/oratab.

     The Oracle startup and shutdown scripts dbstart and dbshut
     read /etc/oratab to decide which instances to start or stop.

     Each /etc/oratab entry MUST contain 3 components, delimited
     by colons:

          ORACLE_SID  - Instance name (System ID)

          ORACLE_HOME - full path name of the Oracle software
                        used by database.

          Startup_Flag - flag to indicate whether the instance
                        should be started when the host is
                        started. (Y or N)

     Example:
                LOC:/orasw/v7016:Y
                CC1:/orasw/v713:N
                OLD:/orasw/v637:Y

     Once an instance is listed in /etc/oratab on an identified
     (via the /etc/hosts file) host on the network, it can be
     accessed via a SQL*Net connect string (if SQL*Net is
     enabled, of course).

   - Identification of services for SQL*Net versions 1 and 2.

     - SQL*Net V1.

       The service for SQL*Net V1  must be listed in
       /etc/services file, named "orasrv" and should be assigned
       to port 1525:

                    . . .
                    orasrv 1525/tcp
                    . . .

     - SQL*Net V2.

       TNSNAMES.ORA file on each host will include listings of
       service names with their associated connect descriptors:

       hq=(description =
                (address=
                   (protocol=tcp)   /* Service's protocol */
                   (port=1521)      /* Service's port */
                   (host=loc)
                   . . .

       Note that TNSNAMES.ORA is needed for remote servers only,
       not for the local host.

   - Starting and controlling listener (called server in SQL*Net
     V1) processes.

     - SQL*Net V1.

       - Starting the server process manually. Type "orasrv" at
         the Unix prompt.  Format:

         orasrv file_1 flag_1 ... flag_n file_2

         Parameters' description:

           Mapfile- filename (without keyword) used to map
                    instance names (ORACLE_SID) to their Oracle
                    software root directories (ORACLE_HOME). If
                    not specified, defaults to /etc/oratab (see
                    above)

           I      - a flag to indicate that in-band breaks are
                    used (default). <CONTROL+C> is sent in the
                    ordinary sequence.

           O      - out-of-band breaks. (I and O are mutually
                    exclusive). <CONTROL+C> is sent by
                    separate request (out of sequence)

           logon  - SQL*Net activities should be written to
                    Logfile

           logoff - a flag used to turn logging OFF. logon and
                    logoff are mutually exclusive

           debugon- a flag that indicates debugging activities
                    should be started. This reports a greater
                    level of detail on connection attempts and
                    writes to the logfile

           debugoff- a flag used to turn debugging off

           dbaon  - a flag to indicate that remote users will be
                     able to CONNECT INTERNAL into local
                     database.

           dbaoff - the reverse of dbaon

           opson  - remote users will be able to autologin (OPS$
                    account)

           opsoff - disables autologin

           -O<logfile_name> - specifies the log file. Its default
                    value is $ORACLE_HOME/tcp/log/orasrv.log. If
                    that file already exists, new log entries
                    will be APPENDED to it.

          Example:
          $ orasrv /etc/myoratab dbaoff opsoff

          The orasrv process should be owned by root, with the
          setuid root bit set ("chown root orasrv", "chmod 4555
          orasrv").

       - Starting SQL*Net with tcpctl utility (see below):

            $ tcpctl start

       - Starting SQL*Net automatically on system startup.

         The following commands should be added in /etc/rc file:

             ORACLE_HOME=/orasw/v7015
             su - oracle -c "$ORACLE_HOME/bin/orasrv"

       - Controlling the server (listener) process.

         The tcpctl utility monitors the orasrv process and can
         start, stop and modify it.

         Parameters' description (start, stop, stat and version
         are mutually exclusive; "log" and "debug" are used only
         in conjunction with "start"):

           start - starts the orasrv process

           stop  - stops the orasrv process

           stat  - displays the status of orasrv

           version - displays the version of orasrv. This can
                   also be used to query the status

           log   - a flag to indicate whether logging mode should
                   be enabled.

           debug - a flag to indicate whether debugging mode
                   should be enabled (it's equivalent to "orasrv
                   debugon")

               ****** undocumented parameters ********

           debug=<level> - in conjunction with debug parameter
                   this determines the level of trace info
                   written (for Oracle support)

           opsrooton - flag to indicate that the database should
                   allow remote access to a local account named
                   OPS$ROOT

           opsrootoff - no remote access to OPS$ROOT. This
                   parameter may be used in conjunction with the
                   "opson" parameter to allow only non-root
                   remote autologins

           port=<port> - specifies the port to be used if other
                   than the default

           listen=<queue size> - specifies the length of the
                   listen queue.

           timeout=<seconds> - specifies how long a handshake with
                   orasrv should be attempted before the attempt
                   times out

           forkon, forkoff, detachon, detachoff - tell SQL*Net
                   how to run the orasrv process. The default is
                   "detachon", which runs orasrv as a detached
                   process. "detachoff" requires a separate
                   terminal.

         Examples:

                 tcpctl stop - stops the orasrv process

                 tcpctl stat - list orasrv status info

                 tcpctl stat @host_name - orasrv status for a
                                        REMOTE host

                 tcpctl version @host_name

                 tcpctl start

                 tcpctl start log debug

     - SQL*Net V2.

       - Starting listener. Listener Control Utility named
         LSNRCTL.

            LSNRCTL START  - Starts the listener named LISTENER

            LSNRCTL START MY_LSNR - Starts the listener called
                           MY_LSNR.

            LSNRCTL STATUS - gets listener status

         This utility runs the $ORACLE_HOME/bin/tnslsnr
         executable. The LISTENER.ORA file should be located by
         default in $ORACLE_HOME/network/admin or in the
         directory pointed to by the TNS_ADMIN environment
         variable.

         To check whether the listener is active you can issue:

             > ps -ef | grep tnslsnr
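One caveat with the check above: a plain "grep tnslsnr" also matches the grep command itself in the ps listing. A common workaround is to bracket the first letter of the pattern. A sketch with made-up ps output (the listings below are stand-ins, not real ps output):

```shell
#!/bin/sh
# With the plain pattern, ps shows "grep tnslsnr", which matches itself.
with_plain='oracle  214   1  0 Sep01 ?     0:03 /orasw/v7015/bin/tnslsnr LISTENER
oracle  999 777  0 10:15 pts/1 0:00 grep tnslsnr'

# With the bracketed pattern, ps shows "grep [t]nslsnr", which the
# pattern [t]nslsnr (matching the literal text "tnslsnr") cannot match.
with_trick='oracle  214   1  0 Sep01 ?     0:03 /orasw/v7015/bin/tnslsnr LISTENER
oracle  999 777  0 10:15 pts/1 0:00 grep [t]nslsnr'

plain_hits=$(printf '%s\n' "$with_plain" | grep -c 'tnslsnr')
trick_hits=$(printf '%s\n' "$with_trick" | grep -c '[t]nslsnr')
echo "$plain_hits $trick_hits"
```

So `ps -ef | grep '[t]nslsnr'` reports only the listener process.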

       - Controlling the listener process.

         The LSNRCTL utility. Parameters' description:

           SET PASSWORD <password> - allows the user access to
                 administrative options within LSNRCTL. The
                 listener password is set via
                 PASSWORD_listener_name parameter in the
                 LISTENER.ORA.

           START - starts the listener

           STOP  - stops the listener.

           STATUS - provides status info for listener. Can be
                 used to query the status of a listener on a
                 remote server.

           VERSION - displays version info for the Listener, TNS,
                 and the protocol adapter

           SERVICES - displays the services available, along with
                 their connection history. It also lists whether
                 each service is enabled for remote DBA or
                 autologin processing

           RELOAD - allows the listener services to be modified
                 after the listener has been started. It forces
                 SQL*Net to read and use the most current
                 LISTENER.ORA file

           TRACE - sets the trace level of the listener to one of
                  three levels: OFF, USER (limited tracing), and
                  ADMIN (high level of tracing)

         Examples:

                 > lsnrctl
                 LSNRCTL> set password <lsnr_password>
                 LSNRCTL> stop

                 > lsnrctl  status

                 > lsnrctl  status hq - status on another
                                   host (HQ is a service name)

                 > lsnrctl version

                 > lsnrctl
                 LSNRCTL> set password <lsnr_password>
                 LSNRCTL> services

                 > lsnrctl
                 LSNRCTL> set password <lsnr_password>
                 LSNRCTL> reload

                 > lsnrctl
                 LSNRCTL> set password <lsnr_password>
                 LSNRCTL> trace user

                 > lsnrctl
                 LSNRCTL> set password <lsnr_password>
                 LSNRCTL> start

           Most of these commands require a password.

- Identifying a networked user and getting the O/S server process
  ID in single-threaded mode (for SQL*Net V1 and V2).

 Q. Is there a way to identify a client connection if all
    clients are connecting with the same Oracle username?  What's
    the answer for a client that is a unix client on the same
    machine as the db, or for a unix client that is a remote
    client, or for a desktop client?  A customer would like to be
    able to identify a client connection by the TCP/IP address
    that it is coming from.

 A. Here is a mail to helpnet from Victor Grigorieff that can
    answer all of the above questions.  This is for identifying
    shadows from PC to UNIX.

    Here is how I figure these things out, using a Data Browser
    Query like this:

    select ALL SYS.V_$session.osuser, SYS.V_$session.username,
              SYS.V_$session.terminal, SYS.V_$session.program,
              SYS.V_$process.program
          FROM  SYS.V_$PROCESS, SYS.V_$SESSION
          WHERE SYS.V_$PROCESS.Addr=SYS.V_$SESSION.Paddr;

    The output looks like this:
                                   oracle@squid (PMON)
                                   oracle@squid (DBWR)
                                   oracle@squid (LGWR)
                                   oracle@squid (SMON)
                                   oracle@squid (RECO)
    Victor Gri  SYSTEM  OraUser    Oracle Data Browser
                                   (TCP Two-Task) orapid:6957
    vgrigori    SCOTT   Windows PC C:\ORAWIN\BIN\PLUS31.EXE
                                   (TCP Two-Task) orapid:6962
    usupport    SCOTT   pts/2      sqldba@tcaix
                                   (TCP Two-Task) orapid:6968
    oracle      SCOTT   ttyp1      sqldba@fleetwoo
                                (Appletalk Two-Task) orapid:6998

   Notice that for the Mac Client:

        Victor Gri  SYSTEM  OraUser    Oracle Data Browser

        The "Victor Gri" is the first 10 letters of the name I
        entered in the File Sharing Setup control panel on my
        Mac.

   Notice that for the Windows Client:

        vgrigori    SCOTT   Windows PC C:\ORAWIN\BIN\PLUS31.EXE

        The "vgrigori" is the USERNAME= parameter from ORACLE.INI
        (MS-Windows) or from CONFIG.ORA for DOS or OS/2

   For the UNIX client:

        usupport    SCOTT   pts/2      sqldba@tcaix

        The "pts/2" is the first 5 characters of the tty, which is
        not too helpful, as the real tty in this case was pts/26.

   In any case, this should give you enough information to tell
   which shadow process relates to which Windows-client.  You
   just need to set the USERNAME parameter properly on each of
   your Windows clients.
   This should work with SQL*Net v1 and V2, but I have not tried
   v2.

   To get a server (user) process operating system ID you should
   join the V$SESSION view with the V$PROCESS view on the PADDR
   field (V$PROCESS.ADDR = V$SESSION.PADDR).
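The join described above can be sketched as follows. This is a sketch only; the SPID column of V$PROCESS holds the O/S process ID of the shadow process in Oracle7, but column availability may vary by release:

```sql
SELECT s.Username, s.OSUser, p.SPID
  FROM SYS.V_$SESSION s, SYS.V_$PROCESS p
 WHERE s.Paddr = p.Addr;
```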

                3.12 National Language Support
                     -------------------------

- How to set NLS support on client or server sides

  - Available character sets:

    CL8ISO8859P5     ISO for minis (HP, DEC, SUN ...)

    CL8MSWINDOW31    Microsoft Windows 3.1 8-bit Latin/Cyrillic

    CL8MSWIN1251     Microsoft Windows Code Page 1251 8-bit (New)

    RU8PC866         IBM-PC Code Page 866 8-bit (Alternate)

    CL8MACCYRILLIC   Mac 8-bit Latin/Cyrillic

    CL8MACCYRILLICS  Mac Server 8-bit Latin/Cyrillic

    CL8EBCDIC1025    EBCDIC IBM Code Page 1025 8-bit Cyrillic

    CL8BS2000        Siemens EBCDIC.EHC.LC 8-bit Cyrillic

    RU8PC855         IBM-PC Code Page 855 8-bit

    RU8BESTA         BESTA 8-bit

  - Server side

     To ensure automatic charset translation, a Russian charset
     should be installed on the server side by either:

     1. CREATE DATABASE  .... CHARACTER SET <charset>

                          or

     2. Check table SYS.PROPS$ and update it if necessary.

        update SYS.PROPS$ set VALUE$ = '<charset>'
        where Name = 'NLS_CHARACTERSET';
        COMMIT;

        BE CAREFUL!!! If the wrong <charset> is specified you
        will NOT BE ABLE TO START UP the database!!!

        You can find the correct charset names with:

             select Value from SYS.V_$NLS_VALID_VALUES
             where Parameter = 'CHARACTERSET';

        Note: On UNIX platforms the charset name RU8PC866 is
              available only beginning with Oracle RDBMS
              7.1.3!!!

  - Client side

        LANGUAGE   - v6 tools (SQL*FORMS,... SQL*PLUS 3.0)

        NLS_LANG   - v7 tools (ORACLE*FORMS,... SQL*PLUS 3.1)

    - UNIX & VMS

        NLS_LANG env. variable (logical).

        NLS_LANG=AMERICAN_AMERICA.RU8PC855; export NLS_LANG

        NLS_LANG=RUSSIAN_CIS.RU8PC855;...

    - DOS.
                CONFIG.ORA

      V6 tools (SQL*FORMS, SQL*RW...):

          LANGUAGE=AMERICAN_AMERICA.RU8PC866

      SQL*PLUS 3.1,EXP/IMP & Loader 7.0.15 for DOS:

          NLS_LANG=RUSSIAN_CIS.RU8PC866

    - MS-WINDOWS

       C:\WINDOWS\ORACLE.INI

            NLS_LANG=RUSSIAN_CIS.CL8MSWIN1251

    - NetWare

             SYS:SYSTEM\CONFIG.ORA

        NLMs:  EXP/IMP, SQL*LOADER, SQL*DBA

         NLS_LANG=RUSSIAN_CIS.RU8PC866

    -  UNIX

        NLS_LANG=RUSSIAN_CIS.CL8ISO8859P5; export NLS_LANG

        for V6 tools (FORMS,MENU,RW)
        LANGUAGE=RUSSIAN_USSR.CL8ISO8859P5; export LANGUAGE
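The ";export" fragments above use the Bourne-shell assign-then-export idiom; without the export, the setting never reaches the Oracle tools started from that shell. A minimal sketch (the charset value is just one of those listed earlier):

```shell
#!/bin/sh
# Assign, then export, so the setting is visible to child processes
# (the Oracle tools started from this shell).
NLS_LANG=RUSSIAN_CIS.CL8ISO8859P5; export NLS_LANG

# A child shell sees the exported value:
sh -c 'echo $NLS_LANG'
```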

- Character set conversion

    Character conversion can be tested using the SQL DUMP and CHR
    functions by specifying the numeric (binary) value of a
    character in a given encoding scheme. This method provides an
    important testing technique, since a terminal using a
    designated character encoding scheme is not necessary.

      SELECT DUMP (CHR(x),'<target_char_set>','<source_char_set>')
      FROM DUAL;

      SELECT DUMP (CHR(246),'WE8ISO8859P1','WE8PC850') FROM DUAL;

    This technique can also be used to test case conversion

      SELECT DUMP(UPPER(CHR(231))) FROM DUAL;

    and to directly insert and select character data independently
    of character set conversion.

      INSERT INTO <table> (<column>) VALUES (CHR(231)||CHR(232)||CHR(233));
      SELECT DUMP (<column>) FROM <table>;

    Can character data be converted explicitly?

        Yes. Applications can use the CONVERT function to
        explicitly convert data from one encoding scheme to
        another.

       CONVERT (<column>,<dest_char_set>,<src_char_set>)

    where <dest_char_set> is the encoding scheme to convert data
    to and <src_char_set> is the encoding scheme of the data to
    be converted. For example,

       SELECT CONVERT (ENAME,'WE8HPHP','WE8DEC') FROM EMP;

    generates a report from data entered in the DEC 8-bit
    character set for output to a LaserJet printer using the HP
    Western European 8-bit character set.

                3.13 Overview of additional database topics
                     --------------------------------------

- Multi-threaded server configuration. (See Appendix A)

- Auditing.

- Database locking.
  Viewing and monitoring locks: CATBLOCK.SQL + UTLLOCKT.SQL

- Database tuning.

- Data loading.

- Distributed capabilities.

- Parallel query option.

- Parallel server capabilities.

                3.14 Managing distributed databases
                     ------------------------------

- Overview. Dynamic data replication.

- Remote queries. Database links.

- Remote Data Manipulation: two-phase commit

- Managing distributed data.
  Enforcing location transparency. DB links versus  Oracle*Names.
  Managing database triggers. Managing snapshots.
  Remote procedure calls: synchronous and asynchronous.

- Replication facilities.

- Managing distributed transactions.

- Database domains and communities

- Monitoring and tuning distributed databases



                4. Oracle application tools concepts
                   ---------------------------------

 - FORMS 4GL. SQL*Forms/Menu, Oracle*Forms. Default behavior.
   Declarative programming. Blocks. Fields and items. Triggers.

 - SQL*ReportWriter and Oracle*Reports.

 - Oracle*Graphics



                     A P P E N D I X E S
                     - - - - - - - - - -

             Appendix A. Networked Oracle
                         ----------------

;                ORACLE.INI file for MS-WINDOWS
;
local=t:e35m:test
ORACLE_HOME=M:\ORAWIN
NLS_LANG=RUSSIAN_RUSSIA.CL8MSWINDOW31
.......
USERNAME=DBEZRUKO
TCP_VENDOR=NOVLWP        ; Novell Lan Workplace
TCP_HOSTS_FILE=M:\NET\TCP\HOSTS  ; Hosts file
NETWORK21=M:\ORAWIN\network
TCP_PORT=1525  ;  port for SQL*NET v1
. . . .

------------------------------------------------
rem             CONFIG.ORA for MS-DOS
ORACLE_HOME=C:\ORACLE6
MACHINE_TYPE=J
LOCAL=t:e35m:test        ; for fast switching
rem LOCAL = x:ora7       ; between servers (by default)
remote=x:ora7
LANGUAGE=Russian_USSR.RU8PC866    ;v6 tools
NLS_LANG=RUSSIAN_CIS.RU8PC866     ;v7 tools
USERNAME=BEZRUKOV
TCP_HOSTS_FILE=C:\NET\TCP\HOSTS
TCP_SERVICES_FILE=C:\NET\TCP\SERVICES
utility_MSG=C:\ORACLE6\DBS
dynamic_memory=8000
ORAKITPATH=C:\ORACLE6\DBS
MULTIVERSION_ORACLE_HOME=TRUE
DOSMEM_SEARCH=PROT,REAL
RDBMS70=C:\ORACLE6\RDBMS70
PRO15=C:\ORACLE6\PRO15
RDBMS_DFLT=RDBMS70
. . .
------------------------------------------------
#  Oracle Corporation 1993
#  Sample SQLNET.ORA file.
#  This is a sample file for the client side.
#  This file should be changed appropriately
#  for any given network/machine.
#  5/15/93 - Created Mason Ng
#
# set level of "verbosity" of tracing
#

automatic_ipc = OFF
trace_unique_client = ON
trace_level_client  = OFF
trace_directory_client = C:\tmp

netman.trace_level=off
netman.trace_file=netman
netman.trace_directory=C:\ORAWIN\network\trace

-----------------------------------------------
#  Oracle Corporation 1993
#  Sample tnsnames.ora file.
#  This is a sample file for the client side.
#  This file should be changed appropriately for
#  any given network/machine.

test=(DESCRIPTION=
                 (ADDRESS=
                  (PROTOCOL=tcp)(PORT=1521)(HOST=e35m))
                  (CONNECT_DATA=(SID=test)
                 ))
vbnk=(DESCRIPTION=
                 (ADDRESS=
                  (PROTOCOL=tcp)(PORT=1521)(HOST=e35m))
                  (CONNECT_DATA=(SID=vbnk)
                 ))

sys4=(DESCRIPTION=
                 (ADDRESS=
                  (PROTOCOL=tcp)(PORT=1521)(HOST=e35m))
                  (CONNECT_DATA=(SID=sys4)
                 ))

sco=(DESCRIPTION =
                 (ADDRESS=
                  (PROTOCOL = tcp)(PORT = 1521 )(HOST = vabank ))
                  (CONNECT_DATA=(SID = bank)
                 ))
#
#      Dedicated server
#
#  Important Note #1: if (SERVER=DEDICATED) is not present then
#                     (SERVER=MULTI-THREADED) is assumed.
#
#  Important Note #2: if MTS is not installed on the server side
#                     then a dedicated server is ALWAYS used, even
#                     if (SERVER=MULTI-THREADED) is specified.
#
#   If you want to force a client to use the MTS, include:
#   USE_DEDICATED_SERVER=OFF
#   in the client-side SQLNET.ORA file.
#
dnw=(DESCRIPTION=
                (ADDRESS=(PROTOCOL=SPX)
                 (SERVICE=NET2)
                )
                 (CONNECT_DATA=(SID=ORA7)
                 (SERVER=DEDICATED)
                )
          )
#
#      Multithread server
#
mnw=(DESCRIPTION=
                (ADDRESS=(PROTOCOL=SPX)
                 (SERVICE=NET2)
                )
                 (CONNECT_DATA=(SID=ORA7)
                 (SERVER=MULTI-THREADED)
                )
          )
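A frequent source of broken TNSNAMES.ORA entries is unbalanced parentheses in a descriptor. A quick sanity check can be scripted in shell; this is a hypothetical helper, not an Oracle utility:

```shell
#!/bin/sh
# Sketch (hypothetical helper): count '(' versus ')' in a TNS
# descriptor as a quick check for unbalanced parentheses.
check_parens() {
    open=$(( $(printf '%s' "$1" | tr -cd '(' | wc -c) ))
    close=$(( $(printf '%s' "$1" | tr -cd ')' | wc -c) ))
    if [ "$open" -eq "$close" ]; then
        echo balanced
    else
        echo "unbalanced ($open vs $close)"
    fi
}

# A one-line version of the "test" entry above:
desc='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=e35m))(CONNECT_DATA=(SID=test)))'
check_parens "$desc"
```

This only catches mismatched counts, not misplaced parentheses, but it finds the most common editing mistake.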

-----------------------------------------------
#
# LISTENER.ORA file (from NetWare)
#
LISTENER = (ADDRESS_LIST=
             (ADDRESS=(PROTOCOL=SPX)(SERVICE=NET2))
           )
CONNECT_TIMEOUT_LISTENER=10
STOP_LISTENER=yes
TRACE_LEVEL_LISTENER = ON
SID_LIST_LISTENER=(SID_DESC=
                   (SID_NAME=ORA7)
                   (ORACLE_HOME=SYS:ORANW))

-----------------------------------------------
#
# INIT.ORA parameters for multithread server
#
# ORACLE_SID on UNIX platforms:
MTS_SERVICE=MTSSRV
MTS_DISPATCHERS="SPX,2"
MTS_LISTENER_ADDRESS="(ADDRESS=(PROTOCOL=SPX)(SERVICE=NET2))"
MTS_SERVERS=3
MTS_MAX_SERVERS=5
MTS_MAX_DISPATCHERS=4
#
#  Process names are d000... (dispatchers) and s000... (servers)
-----------------------------------------------
;
;     c:\windows\ORACLE.INI configuration file
;
[Oracle]
local=test
;local=t:e35m:test
ORACLE_HOME=Q:\ORAWIN
NLS_LANG=American_America.CL8MSWINDOW31
;NLS_LANG=Russian_CIS.CL8MSWINDOW31
;NLS_LANG=American_America.WE8ISO8859P1
ORACLE_TEMP=C:\TMP
ORACLE_LOCAL=C:\TMP
CA_UPREFS=C:\WINDOWS
CA_GPREFS=Q:\ORAWIN
ORAINST=Q:\ORAWIN\dbs
OB20=Q:\ORAWIN\BOOK20
OB21=Q:\ORAWIN\BOOK21
TCP_VENDOR=NOVLWP
TCP_HOSTS_FILE=C:\NET\TCP\HOSTS
;TCP_SERVICES_FILE=Q:\ORAWIN\TCP\SERVICES.HLP
TCP_PORT=1525
HAMMER=Q:\ORAWIN\BOOK21\HAMMER
VGS21=Q:\ORAWIN\CDE2
DE15=Q:\ORAWIN\CDE2
OCL25=Q:\ORAWIN\CDE2
TK21=Q:\ORAWIN\CDE2

TK21_ICON=Q:\ORAWIN\CDE2\ICONS;Q:\ORAWIN\FORMS45\DEMOS\ICONS;Q:\ORAWIN\GRAPH25\DEMOS\FORMS
MM20=Q:\ORAWIN\CDE2
RDBMS70=Q:\ORAWIN\RDBMS70
RDBMS70_CONTROL=Q:\ORAWIN\DBS
RDBMS70_ARCHIVE=Q:\ORAWIN\RDBMS70\ARCHIVE
RDBMS71=Q:\ORAWIN\rdbms71
PRO14=Q:\ORAWIN\PRO14
PRO15=Q:\ORAWIN\PRO15
PRO16=Q:\ORAWIN\pro16
SQLPATH=Q:\ORAWIN\DBS
PLUS31=Q:\ORAWIN\PLUS31
EXECUTE_SQL=PLUS31
VGS20=Q:\ORAWIN\dbs
DE1016=Q:\ORAWIN\DBS
OCL20=Q:\ORAWIN\dbs
TK20=Q:\ORAWIN\DBS
OH1010=Q:\ORAWIN\DBS
MM107=Q:\ORAWIN\DBS
SUPNOTES=Q:\ORAWIN\supnotes
CDHOME=S:
ORACLE_VOLUME=S:\
office20=Q:\ORAWIN\OFFICE20
username=BEZRUKOV
coauthor=Q:\ORAWIN\coauthor
cohome=Q:\ORAWIN\COAUTHOR
context=Q:\ORAWIN\context
coauthor20=Q:\ORAWIN\coauth20
oacore10=Q:\ORAWIN\oacore10
oahome=Q:\ORAWIN
NETWORK21=C:\ORAWIN\network
TNS_ADMIN=C:\ORAWIN\network\admin
SQLTR=Q:\ORAWIN\SQLTR
MSHELP=Q:\ORAWIN\MSHELP
TOOLS_GROUP=Personal Oracle7
VS10=Q:\ORAWIN\VS10
HELP_GROUP=Personal Oracle7
GLUEOFF=Q:\ORAWIN\GLUEOFF\MESG
EGLUE11=Q:\ORAWIN
OOMGRF20=Q:\ORAWIN\OOMGRF20
ORAKITPATH=Q:\ORAWIN\dbs
UTILITY_MSG=Q:\ORAWIN\dbs
ORACLE_TIO=VIDEO
PLSQL20=Q:\ORAWIN\PLSQL20
BROWSER10=Q:\ORAWIN\BROWSE10
SQLNET DBNAME oracle7=2:
FORMS45_DEFAULTFONT="Tms Rmn.10"
;FORMS45_DEFAULTFONT="Helv.10"

[CDE]
FORMS45=Q:\ORAWIN\FORMS45
GRAPHICS25=Q:\ORAWIN\GRAPH25
ORAPLSQLLOADPATH=Q:\ORAWIN\FORMS45\DEMOS\GRAPHICS
GRAPHICS_PATH=Q:\ORAWIN\FORMS45\DEMOS\GRAPHICS
RW25=Q:\ORAWIN\REPORT25
ODBC10=Q:\ORAWIN\OCA\ODBC10
UB=Q:\ORAWIN\DBS
PB15=Q:\ORAWIN\PB15

GRAPHICS25_PATH=Q:\ORAWIN\FORMS45\DEMOS\GRAPHICS;Q:\ORAWIN\GRAPH25\DEMOS\FORMS;Q:\ORAWIN\FORMS45\DEMOS\DYNCHART

FORMS45_PATH=Q:\ORAWIN\FORMS45\PLSQLLIB;Q:\BLACK.RCK\KUMARAN;C:\KUMARAN;Q:\ORAWIN\GRAPH25\DEMOS\FORMS

REPORTS25_PATH=Q:\ORAWIN\REPORT25\DEMO;Q:\ORAWIN\REPORT25\DEMO\REQFILES;Q:\ORAWIN\REPORT25\DEMO\BITMAP
BROWSER20=Q:\ORAWIN\BROWSE20
ODQ40=Q:\ORAWIN\ODQ40
;
RTWKB=Q:\ORAWIN\RTWKB

                  Appendix Y.  Abbreviations and terms.
                               ----------------------

ORDBMS      Object-Relational DBMS. Oracle's approach is the
            unification of object and relational principles.

DBA         Data Base Administrator

SNMP        Simple Network Management Protocol (a high-level
            management protocol). Oracle support introduced in
            Oracle 7.2; a replication MIB is provided for
            monitoring symmetric replication

SNAP-IN  console  ????  (7.3)

BOB         Big Oracle Blocks (on 64-bit Digital computers)

MIB         Management Information Base

IETF        Internet Engineering Task Force

EIS         Executive Information System

DSS         Decision Support System

OLAP        OnLine Analytical Processing (in Oracle 7.3)
            Front-end tools from Information Resources Inc.
            (IRI), newly acquired by Oracle. Product: Oracle
            Express. Informix acquired Stanford Technology Group
            (Kirill Shenkman) for $17M. Product: MetaCube
OLTP        OnLine Transactions Processing

BLOB        Binary Large Object

GIS         Geo-Information System

FAQ         Frequently Asked Questions

SMP        Symmetrical Multiprocessor Architecture (shared
           memory, tightly coupled systems)

AMP        Asymmetrical Multiprocessor Architecture (clusters,
           loosely coupled systems). Used in the Parallel Server
           Option.

MP         Massively Parallel Architecture (Transputers). Used in
           Parallel Server Option.

Loosely coupled processing - AMP

MPP        Massively Parallel Processing

Shared-everything architecture - SMP

Shared-nothing architecture - MPP and AMP

Shared-disk architecture - Oracle PQO (in contrast to
            shared-nothing) (PH95p#160v2)

PCM-Locks  - Parallel Cache Management locks (false ping
             prevention in Oracle 7.3)

DFS enqueue locks and DFS locks

DTP        Distributed Transaction Processing

TPM        Transaction Processing Monitor (Tuxedo from Novell,
           CICS/AIX from IBM, TopEnd from AT&T). Oracle uses
           Tuxedo for mission-critical applications
Three-tier Architectures - using TP monitors

Multi-threaded application support for TP monitors in Oracle 7.3

PSO        Parallel Server Option

PQO        Parallel Query Option

OCI        Oracle Call Interface

ETT        Extraction, Transformation, and Transportation -
           components of Oracle Warehouse

WTI        Warehouse Technology Initiative from Oracle

MTTR       Mean Time To Recover

MVP        Materialized View Principle (updatable join views)

OPP        Operation Preservation Principle (updatable join views)

HHCODE     new data type in Oracle 7.3 to store spatial
           N-dimensional data (latitude, longitude ...)

Oracle Video Server Option - in 7.3.2

Oracle Spatial Data Option - in 7.3.2

Oracle Text Option  - in 7.3.3

Oracle7 WebServer Option  - in 7.3.2

OMX        Oracle media exchange

OCX        OLE custom extension

HTML      HyperText Markup Language; the format of documents on
          the WWW.

MPEG decoders          ????

MPEG accelerator cards ????

                  Appendix Z.  References.
                               -----------

      Z.1   Oracle Servers
            --------------

  Oracle Server Story by Ken Jacobs (Part 1)   OM 1-2.1995 p.65,
                                    (Part 2)   OM 5-6.1995 p.57

           RDBMS, Development, SQL, PL/SQL...
           ----------------------------------

  Dynamic SQL Comes to PL/SQL                  OM 5-6.1995 p.85
  By Steve Bobrowski, Oracle
   (Using the DBMS_SQL package.
   Good paper but the scripts need
   to be slightly corrected)

   Hints & Techniques: Exploiting PL/SQL         PH95p#82,v1
    Features/Enhancements (2.1/2.2) Which an
    Application Developer Must Know.-By
    Prabhaker Gonglor & Rakesh Birla, Oracle
    Useful. Covers PL/SQL 2.1 (7.1.x) and 2.2
    (7.2.x), dynamic SQL advice, debugging with
    Procedure Builder, the PL/SQL Wrapper Utility,
    overloading (polymorphism), implicit datatype
    conversion, cursor variables.

  PL/SQL Packages: The Mystery and the Magic       PH95p#13,v1
   by Steven Feuerstein, SSC Inc.
   Packages' structures & shared memory
   architecture. (He also wrote [FEU95])

  Top twenty PL/SQL Programming Tips and           PH95p#37,v1
   Techniques by Steven Feuerstein, SSC Inc.
   Some useful practical advice.
   (He also wrote [FEU95])

  Avoiding Mutating Table Errors With              PH95p#33,v1
   Synthetic Constraints.-By Rory Darling,
    State of Alaska - Department of Transportation
   Very useful:
      How to develop:
    1. Primary keys that can be updated.
    2. Foreign keys with NULL delete constraints
    3. A child table which updates the parent it is
       related to by a CASCADE delete constraint

  PL/SQL Product Update.-By Peter Vasterd,         PH95p#149,v2
  Oracle.
   Very useful.
   Summary of new features in 7.2 (PL/SQL 2.2)
   and 7.3 (PL/SQL 2.3), including (2.2:) new
   debugging features, wrappers, cursor
   variables, the DBMS_JOB package, (2.3:)
   tables of records and operations on them
   (first, last, count...), PL/SQL table
   memory enhancements (B*Tree rather than
   hashing), RPC dependency model,
   server-side file I/O, new syntax for
   recompiling triggers...

  Updatable Join Views.-By Atif                    PH95p#158,v2
   Chaudhry, Oracle.
   Rules for updatable views in 7.3 are
   described.
   The first paper concerning details of
   updatable join views. The Materialized View
   Principle (MVP) and Operation Preservation
   Principle are introduced.

  Using Oracle Event Processing to Facilitate      PH95p#93,v2
   Client Calls to Server Programs.-By
   Sandra K. Thrasher, Ogden Government
   Services.
   Small two-page paper. General principles,
   Oracle DBMS_ALERT package.

                         Oracle 7.2
                         ----------

                New features in Oracle 7.2
                ---------------------------

  Oracle Introduces Oracle7 Release 7.2          OM 7-8.1995 p.15
    (Brief list of new 7.2 features)

                         Oracle 7.3
                         ----------

   See the ORA7.3 file in Russian.

   Server Technology Directions by                     PH95p#132,v2
     Jnan R.Dash, Oracle
     So-so. Small paper introducing
     Oracle policy in OLTP, data warehousing,
     system management and object technology.

   Upcoming Oracle Server Releases Oracle           PH95p#139,v2
    Server Release 7.3 by Michael Hartstain, Oracle
    He lists the new facilities.

   Standby Database Feature of ORACLE7               PH95p#136,v2
     Version 7.3.- By Lynne C. Thieme,
     William H.Bridge, Oracle.
     New standby support in Oracle 7.3:
     archived redo logs apply from primary
     system to standby. New commands:
     ALTER DATABASE CREATE STANDBY CONTROL
         FILE AS 'filename';
     ALTER SYSTEM ARCHIVE LOG CURRENT;
     ALTER DATABASE MOUNT STANDBY DATABASE
          [EXCLUSIVE|PARALLEL];
     ALTER SYSTEM ARCHIVE LOG CURRENT;
     RECOVER [FROM 'location'] STANDBY DATABASE;
         Weak sides:
      1. Structural changes (add datafile ...) MUST
          be done MANUALLY on standby system.
      2. Standby system does NOT recognize appearance
         of the new archive log file in ARCHIVE_LOG_DEST
         directory.

                             DBA
                             ---

  Techniques For Fast Database Re-Organization.   PH95p#10,v1
    By Nick Strange and Nigel Noble.
    Churchill Insurance, Bromley, Kent, England
   Excellent! Bravo! They moved 60Gb of data from
   a Pyramid Nile to a Cray Superserver 6400 and
   reduced the time from 12 days to 18 hours!
    - Import with COMMIT=NO works 50% faster
    - rollback segment space for insert calculation
    - NOT NULL is NOT the same as
       CHECK (col IS NOT NULL)!
    - how to build huge indexes
    - BLOCK CLEAN OUT feature in Oracle (the first select
      after an insert runs much longer than the second)
    - Data transfer using Named Pipes and EXP/IMP in UNIX
    - Parallel create tablespace
    - SORT_AREA_SIZE and index building
    - other very useful things...

  Oracle Statistics - The Complete Reference.     PH95p#31,v1
    By Eyal Aronoff, R*Tech Systems, Inc., Head
    of development in AdHawk Oracle Performance
    Monitor.
   Very impressive. Lots of useful detail about
   the very poor Oracle statistics.
   - 32-bit integers are used. Maximum 32-bit
     integer wraps to zero WITHOUT Warning
   - it's unclear when Oracle updates statistics
   - Table Scan (Short Tables),Table Scan (Long Tables),
     Table Scans(Cache Partitions), Table Scans(Rowid Ranges),
     Table Fetch by Rowid
   - Cluster Key Scans, Cluster Key Scan Block Gets
     Transactions on clustered tables are 6-8 TIMES SLOWER
     than the same transactions on unclustered tables.
     Transactions on clustered tables generate an ORDER OF
     MAGNITUDE more rollback and redo information than
     un-clustered tables
   - DB Block Gets, Consistent Gets, Physical Reads,
     Physical Writes
     internal statistics.
   - Sorts (memory), Sorts(Disk), Sorts(Rows)
   - Oracle7 allocates memory in the sort area in 8k
     increments. Memory can be reused after the LAST ROW
     is fetched (good algorithm)

  INITWHAT.ORA???. By Michael Abbey.              PH95p#5,v1
    Very useful discussion of INIT.ORA
    parameters. SHARED_POOL_SIZE,
    SHARED_POOL_RESERVED_SIZE,
    SHARED_POOL_RESERVED_MIN_ALLOCATION,
    SESSIONS, SORT_AREA_SIZE,
    REMOTE_LOGIN_PASSWORD_FILE,
    MAX_TRANSACTION_BRANCHES,
    INIT_SQL_FILES (obsolete). Some
    rules of thumb are presented.

  Unobtrusive Audit Trail. By                     PH95p#8v1
     Christopher Hamilton, SRA Technologies
     (How and why to audit).

  Optimizing Performance with PCTFREE and         PH95p#100v2
   PCTUSED. By Rick Brauen, ARIS Inc.
    (Nice little paper, V$PERFORMANCE)

  Troubleshooting Made Easy. By Dennis          OM 7-8.1995 p.57
   K. Murray, Oracle UK.
   (Excellent! How to tackle major
   problems, and suggestions on how to
   design procedures that anticipate
   and prevent those problems. Internal
   errors (ORA-00600) are analyzed.
   Performance problems.)

  Journey to the Center of Oracle: The             PH95p#77v1
   X$ Tables.-By Joseph C Trezzo, TUSC
   Mechanism of public V$ synonyms for
   V$ views.
   V$ views + X$ tables categories.
   SQL*DBA uses kbcus.ora files containing
   SQL scripts for monitoring. X$ tables
   can not be DESCRIBEd in SQL*Plus.

  Optimal Database File Distribution             OM 1-2.1995 p.71
   Across N Disks. By Kevin Loney
  (Several pages from his DBA Handbook)

  Object Tracking and Cross-Referencing         OM 3-4.1995 p.57
   By Scott Hollows, Oracle.
   (Good paper.
    Scripts and advice about object
    info stored in Oracle dictionary
    tables, FORMS, CASE
    applications. Scripts for IMPACT
    ANALYSIS are presented.)

  Rollback Segments in Oracle7                  OM 3-4.1995 p.104
   By Jay Mehta, GTE Spacenet

  Avoiding A Database Reorganization            OM Fall 1994 p.63
   By Craig Shallahammer, Oracle
   (Good big paper. Fragmentation taxonomy.
    Good advice)

  Optimal Flexible Architecture (OFA).           OM 9-10.1995 p.65
   Oracle7 for Open Systems, Part 1
   By Cary V. Millsap.
   (For UNIX systems. Evident things are
    described somewhat vaguely. But the paper
    is useful anyway.)

  Using Oracle7 Procedural Option.               OM 5-6.1995 p.65
    By Mike Ault.
   (Prebuilt packages. DBMS_PIPES,
        DBMS_SHARED_POOL.
     So-so. Poor examples. But it is worth
     reading.)

  Updatable Join Views. By Atif Chaudhry,     PH95p#158v2
   Oracle.
   Definitions of updatable views 7.3
   facilities

         Backup/Recovery, Standby Support, Mirroring . . .
         -------------------------------------------------

   Standby Database Feature of ORACLE7               PH95p#136,v2
     Version 7.3. By Lynne C. Thieme,
     William H. Bridge, Oracle.
     New standby support in Oracle 7.3:
     archived redo logs from the primary
     system are applied to the standby.
     New commands:
     ALTER DATABASE CREATE STANDBY CONTROLFILE
         AS 'filename';
     ALTER SYSTEM ARCHIVE LOG CURRENT;
     ALTER DATABASE MOUNT STANDBY DATABASE
          [EXCLUSIVE|PARALLEL];
     RECOVER [FROM 'location'] STANDBY DATABASE;
         Weaknesses:
      1. Structural changes (add datafile ...) MUST
          be done MANUALLY on the standby system.
      2. The standby system does NOT recognize the
         appearance of a new archived log file in the
         LOG_ARCHIVE_DEST directory.
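     A sketch of the manual refresh cycle this implies
     (the copy step is an assumption drawn from weakness 2,
     not a command from the paper):

      -- On the primary:
      ALTER SYSTEM ARCHIVE LOG CURRENT;
      -- Copy the new archived log into the standby's
      -- archive destination by hand, then on the standby:
      RECOVER STANDBY DATABASE;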

              Parallel Processing
              -------------------

  Oracle7 Parallel Server Tuning. By          PH95p#159v2
   Anjo Kolk, Oracle
   Extremely useful! Locks, enqueues and
   latches are described. Pings and false
   pings are explained, parallel server INIT.ORA GC_*
   parameters are analyzed. Fixed views and parameters
   for monitoring and tuning.

  Parallel Query Architecture and Directions  PH95p#160v2
  By Andrew Mendelson, Oracle
  Good. PQO and its evolution are described.

  Oracle7 Release 7.1 Parallel Everything      OM 1-2.1995 p.38
   Architecture. By Kelly Wiseth.
   (Introduction, Tools of the Trade,
    Warehouses)

  Parallel Query Option Performance            OM 9-10.1995 p.111
    by Barry Perkins, Oracle
     (IBM MVS oriented)

               Distributed Processing, Replication
               -----------------------------------

  Database Mirroring Under Software Control    OM 1-2.1995 p.91
   Using Oracle7. By Hugh Ujhazy.
   (Methodology to replicate the database
   without modifying application code.
   Useful paper. Some examples of PL/SQL code)

  Strategies and Techniques for Using         OM 9-10.1995 p.97
   Oracle7 Replication: Part 1
   by Dominic J. Delmolino, Oracle
   Consulting

               Text Server
               -----------

  Oracle Text Server3 with ConText              OM 7-8.1995 p.20
   (1 page features' description)

               Media server
               ------------

            Z.2  Technical Support
           ----------------------

  Extending the Reach of Oracle7.              OM 9-10.1995 p.133
   By Jay Mehta, Mobil Oil Corporation
   (Excellent. Communication with an application
    server via the DBMS_PIPE package. Good examples,
    clear language)
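   A minimal sketch of the DBMS_PIPE pattern the paper uses;
   the pipe name and message are assumptions, not from the paper:

     -- Session A: send a request
     declare
       status integer;
     begin
       dbms_pipe.pack_message('HELLO');
       status := dbms_pipe.send_message('demo_pipe');
     end;

     -- Session B (the application server): receive it
     declare
       status integer;
       msg    varchar2(100);
     begin
       status := dbms_pipe.receive_message('demo_pipe');
       dbms_pipe.unpack_message(msg);
     end;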

  Function Point Analysis (FPA) with            OM 1-2.1995 p.115
  Oracle CASE Dictionary
     (New approach to software sizing &
      estimating)

  Supportlink: Online Information Service       OM 5-6.1995 p.120
   by John Costiner

  The Physical Design of Oracle Databases           PH95p#56v1
   for Storage Arrays. By Bruce T. Driver,
   Fusion Systems Group.
   1. Physical and logical constructions of the
      SPARCstorage Array Model 101 and SPARCstorage
      Volume Manager 1.3 (RAID)
   2. Recommendations on the physical layout of an
      Oracle database on a storage array.

      Z.4   Oracle and Third Party Products
            -------------------------------

                 Tools: Comparative Analysis
                 ---------------------------

  Join the Club and Check Out These Tools  DPD 11.1995 pp.57-59
  New tools ease the heavy lifting of SQL
  joins. By Doug Thomson.
   Comparative analysis of several aspects
   of joins in:
   EDA/SQL from Information Builder,
   Developer/2000 from Oracle,
   Visual Works from ParcPlace Systems (SmallTalk-based),
   Uniface v6 from Compuware,
   SQLWindows 5.0 from Gupta,
   PowerBuilder from PowerSoft.
    Designer/2000 & PowerBuilder - hard-wired to the DBMS.
    PowerBuilder also models natural joins on the client.
    EDA/SQL helps to bring together data from systems of
      various ages and architectures.

  Evaluating Windows-based Tools for Oracle7.   PH95p#2v1
   By Earl D. Shaffer, Barron Collier
   Partnerships.
   Very nice paper. How to evaluate a GUI tool
   for Oracle7 from different points of view.

  Designing Client/Server Applications - A      PH95p#83v2
   Comparison of PowerBuilder and CDE2.
   By Yori Lavi & Galia Kaufman, Levi Tech. Ltd
   Useful and nice paper. The right positions
   were chosen for comparison!

                Product Overviews and News
                --------------------------

  Discoverer/2000, OLAP (Online Analytical     OM 9-10.1995 p.22
    Processing) tools, MultiDimension,
    Oracle7 TextServer, Designer/2000,
    Developer/2000

  WEB-Based support, Oracle/Rdb, Oracle        OM 9-10.1995 p.30
   Magazine Interactive                        OM 9-10.1995 p.92

  Oracle Introduces Oracle7 Release 7.2         OM 7-8.1995 p.15
    (Brief list of new 7.2 features)

  Developer/2000 & Designer/2000                OM 7-8.1995 p.17
   Software Suites

  Oracle Power Objects                          OM 7-8.1995 p.18

  Oracle Media Objects                          OM 7-8.1995 p.18

  Oracle Text Server3 with ConText              OM 7-8.1995 p.20
   (1 page features' description)

  Oracle Gateways Enable Applications           OM 7-8.1995 p.22
  to Access All Data
  (EDA/SQL, DB2/400)

  RDB and related Business                      OM 7-8.1995 p.22

  Oracle Announces Availability of              OM 7-8.1995 p.24
  Oracle7 Workgroup Server for UNIX

  Oracle Workgroup/2000                         OM 3-4.1995 p.13
   (OO for OLE, Personal Oracle,
      SmartStart ...)

  Oracle Book 2.2, SDK                          OM 5-6.1995 p.16

  New Transparent Gateways for Sybase,          OM 5-6.1995 p.16
  Informix and Ingres

  SNMP - Simple Network Management Protocol     OM 5-6.1995 p.16

  Oracle Workgroup Server for Solaris X86       OM 5-6.1995 p.18

  Expanding Internet Distribution Service       OM 5-6.1995 p.18
    for WG/2000 Products

  Oracle MultiDimension Extension to Oracle7    OM 5-6.1995 p.20

  Oracle Office 2.1                             OM 5-6.1995 p.24

  Server Manager replaces SQL*DBA               OM 5-6.1995 p.24

  Oracle Data Query 4.0                         OM 5-6.1995 p.26

               Networking and client/server
               ----------------------------

  Oracle Client/Server: Configuring            OM 5-6.1995 p.87
    Your System, Part 1. By Steve Bobrowski,
    president of Animated Learning
     (Very good paper. Network Manager is
      described. Disabling automatic interprocess
      communication is mentioned.)

  Oracle Client/Server: Configuring            OM 7-8.1995 p.81
    Your System, Part 2. By Steve Bobrowski,
     (Very good paper. Multi-threaded server.
     Troubleshooting tips. The procedure to
     monitor the configuration of Oracle Server
     MTS is given)

               Developer's Materials
               ---------------------

  Oracle Developer from PINNACLE               OM 9-10.1995 p.73
  Newsletters on Developer/2000 and
   Designer/2000, PL/SQL ....
   Media - diskettes, $149 (+$50 later on)

                Developer/2000
                --------------

  Calling Stored Procedures from Oracle        OM Fall 1994 p.125
  Forms. By Cris Lunt, Oracle
   (Very useful small paper. Covers
    forms 3 & 4)

  FAQ on FORMS 4.0.13 by Ben Wright            OM 1-2.1995 p.113

  FAQ on Oracle Graphics 2.5 by                OM 3-4.1995 p.83
   Ivan Chong, Oracle
   (Answers to 2 questions)

  FAQ on Oracle Reports.                       OM 5-6.1995 p.119
   (conversion utilities from
    SQL*ReportWriter to Oracle Reports)

  FAQ on Oracle's Procedure Builder            OM 7-8.1995 p.96
    and Developer/2000 (4 answers)

  FAQ on Developer/2000 (Libraries)           OM 9-10.1995 p.141
   (1 question)

  Oracle Forms 4.0: Minimizing                 OM 3-4.1995 p.65
  Client/Server Network Traffic

  Creating Flex Fields in Oracle Forms.        OM Fall 1994 p.119
  By Chuck Weiss, Oracle

  Object-Oriented Programming in                OM 5-6.1995 p.75
   Oracle Forms 4.5. By Ben Wright, Oracle
   (It should be called "Pseudo Object
    Oriented". How to emulate all the
    object-oriented properties (inheritance,
    object reuse, encapsulation, modularity
    and polymorphism) in Oracle Forms 4.5)

                Procedure Builder
                -----------------

  FAQ on Oracle Procedure Builder              OM 3-4.1995 p.89
   by Ivan Chong, Oracle
    (3 answers)

  FAQ on Oracle's Procedure Builder            OM 7-8.1995 p.96
    and Developer/2000 (4 answers)

  Oracle Server Manager by Steve Bobrowski,     OM 1-2.1995 p.97
        Oracle (The overview)

                Oracle Network Manager
                ---------------------

  Oracle Client/Server: Configuring            OM 5-6.1995 p.87
    Your System. By Steve Bobrowski,
    president of Animated Learning
     (Very good paper. Network Manager is
      described. Disabling automatic interprocess
      communication is mentioned.)

                Designer/2000
                -------------

  A Simple Case for Designer/2000              OM 9-10.1995 p.105
   by Steve Bobrowski, Oracle
   (Good introduction to CASE)

  Bridging the Gap. By Mary Ann Conroy          OM 7-8.1995 p.65
   (How to manage and improve, using Oracle
    CASE, existing applications developed
    without it. Techniques for re-engineering
    existing packaged applications into the
    CASE repository. Good paper.)

  Function Point Analysis (FPA) with            OM 1-2.1995 p.115
  Oracle CASE Dictionary
     (New approach to software sizing &
      estimating)

              Oracle Power Objects
              --------------------

  Developing Applications Using Oracle         OM 9-10.1995 p.81
    Power Objects by Matthew Bennett
    (Good beginner's guide)

                Microsoft Access
                ----------------

  How Sweet It Is - Microsoft Access and          PH95p#6v1
   Oracle. By Bradley D. Brown, The Ultimate
   Software Consultants
   Very useful practical guide on connecting
   Access and Oracle via ODBC

              PowerBuilder (Powersoft)
              ------------------------

  PowerBuilder and Oracle - The Inside Story     PH95p#52v1
   by Bill Rabkin & Scott Marlow, Powersoft
   Corporation.
   Very interesting for Powersoft developers

              Oracle World Wide Web
              ---------------------

  Oracle WWW Interface kit. Tools for          OM 5-6.1995 p.15
   Harnessing the Internet.

  WWW Address: http://www.oracle.com/          OM 5-6.1995 p.54

              Oracle Media Objects
              --------------------

  Oracle Media Objects Available over the      OM 9-10.1995 p.28
     Internet

              Oracle Objects for OLE
              ----------------------

                   ODBC
                   ----

  Understanding ODBC Beyond What the Acronym     PH95p#4,v1
  Means. By John Molinaro, Sargent & Lundy L.L.C
   ODBC features, differences between ODBC
   drivers.

  ODBC ABC's - Connecting Windows to Oracle     PH95p#124,v2
   By Thomas Harleman, Paradigm Consulting, Inc.
    Nice and clear paper.

                Oracle in Motion
                -----------------

                Oracle Media Objects
                --------------------

                    SIDONA
                    ------

                    Gateways
                    --------

  Oracle Transparent Gateway for EDA/SQL        OM 5-6.1995 p.116
    by Sherie Bernandez, Oracle
    (The gateway provides read access to
     data residing in 15 mainframe
     datastores including ADABAS, CA-DATACOM,
     CA-IDMS, FOCUS, IMS, Infoman, ISAM,
     Model 204, QSAM, SAP, SUPRA, SYSTEM 2000,
     Teradata, TOTAL, and VSAM)

                   Oracle RDB
                   ----------

  Oracle RDB: Round-the-clock Reliability. By   OM 5-6.1995 p.109
   Ian Smith

         Third Party (Value Added) Tools
         --------------------------------

  Oracle Value Added Initiatives ----------------------------

  Oracle Warehouse Technology Initiative       OM 9-10.1995 p.150
   (WTI)

CASE & Data modelling------------------------------------------

  ERwin from Logic Works                        OM 3-4.1995 p.109
   (ERwin for Oracle CASE and                   OM 9-10.1995 p.95
   ERwin/ERX 2.0)                               OM 5-6.1995 p.21
                                                OM 7-8.1995 p.23

  Silverrun from Computer Systems Advisers     ARGUSSOFT
   Topcase. Special Support for Oracle

  S-Designor from SDP Technologies, Inc        OM 9-10.1995 p.34
                                               OM 5-6.1995 p.42
                    One page advertising       OM 7-8.1995 p.29

  INCOME from PROMATIS Informatik              OM Fall 1994 p.137
    Tools for modelling, simulating and
    developing business processes

  DBStar Migration Architect from DBStar, Inc. OM Fall 1994 p.137
   Transforms any legacy database model to a
   mathematically correct relational model

  Database Illustrator from Ray Ontko & Co.    OM 5-6.1995 p.126

DBA----------------------------------------------

  Platinum DBA Tools from Platinum             OM 1-2.1995 p.122
   Technology
    - EasyDba UNIX, VMS, OS/2                  OM 9-10.1995 p.25
    - Desktop DBA (cross-server
       administration)                         OM 9-10.1995 p.37
    - DBVision (Monitoring & performance       OM 9-10.1995 p.153
       tuning, problem prevention,
       automated storage management,
       cross-platform support, graphical
       interface)
    - DB-Alert (alarm notification)            OM 9-10.1995 p.153
    - SQL Spy (Monitoring)                     OM 9-10.1995 p.31
    - SQL Coder (development environment)      OM 9-10.1995 p.103
                                               OM 7-8.1995 p.105
    - UniVision 2.1, AutoSys, AutoSys/Xpert    OM 9-10.1995 p.148
        UNIX Environment performance &
        monitoring tools

  MetaSUITE from BMC Software                  OM 9-10.1995 p.21
  MetaDesk & MetaManage                         OM 7-8.1995 p.100

  SQL*Trax from TurnStone Software,            OM 9-10.1995 p.85
   The Database Utility Company.
   (Various statistics, Debug PL/SQL and       OM 7-8.1995 p.55
    triggers, UNDELETE ROWS,...)
    Log Analyzer for Oracle7 (SQL*Trax)         OM 5-6.1995 p.27

  DFRAG from ARIS Corporation                   OM Fall 1994 p.143
   Defragmentation and space management
   utility for Oracle7 including index         OM 9-10.1995 p.63
   defragmentation

  MetaCube Family of Products                  OM 5-6.1995 p.125
   from Stanford Technology Group.
   OLAP, tracking queries and so on
   (President - Kirill Shenkman)
   SQL*Studio V2.0                             OM 7-8.1995 p.10

  Job Control System (JCS) from Redwood        OM Fall 1994 p.135
  Business Group.
   Can be integrated in Oracle Forms
   applications

  LDM/XP - Migration and Management            OM Fall 1994 p.135
    tools from Legent
  ACE*Insight (Tuning)        FORMS 4 Based    OM 9-10.1995 p.154
  ACE*Tester                  FORMS 4 Based    OM 9-10.1995 p.154
  ACE*Timer (Response time) - FORMS 4 Based    OM 9-10.1995 p.154

  WatchWorks from MITI                         OM Fall 1994 p.139
   Monitoring and alerting tools and
    resource accounting

  HighView from Highland Technologies          OM Fall 1994 p.135
   Imaging and workflow appl. system           OM 7-8.1995 p.103
   for Win

  SQL/ADMIN. Three utilities for DBA            OM 1-2.1995 p.127
  from Mike Parker & Associates

  OpenV*SecureMax from OpenVision              OM Fall 1994 p.133
   (Security management tool)

  Password Management Software                  OM Fall 1994 p.133
  from BrainTree
  SQL*Secure by BrainTree                       OM 3-4.1995 p.95

  DISKEEPER from Exclusive Software             OM 3-4.1995 p.109
    Online disk defragmenter for
    OpenVMS and Windows NT on
    DEC's VAX and Alpha

  Alexus Vehicle Maintenance System             OM 3-4.1995 p.109
  from Support Technologies
    (Win based network cost tracking          Fall 1994 p.133
     system).

  Safe C Runtime Analyzer and QA C              OM 3-4.1995 p.110
    from ASTA
    (Software analysis and testing system)

  COPERNICUS. Data-format translation            OM 1-2.1995 p.124
    program from New Paradigm Software

  AdHawk from EVENTS software                  OM 9-10.1995 p.27
   (Database monitoring and problem             OM 7-8.1995 p.43
    identification)

  DBTUNE (Automatic tuning tool) and            OM 9-10.1995 p.29
  Control Center                                OM 5-6.1995 p.123
  from The Database Solutions Company           OM 7-8.1995 p.61

  EcoTools from Compuware                      OM 9-10.1995 p.33
    Tools for database, network and
    OS balancing in client/server env.
  DBA-XPERT from Compuware                     OM 9-10.1995 p.145
    (control of development, maintenance
     and security)

  Inspect/SQL, Analyze/SQL from Precise        OM 9-10.1995 p.83
    Software Solutions, Inc.
     (Database performance analysis and
     tuning)

  WatchWorks from Eventus Software              OM 5-6.1995 p.125
    (Event monitoring and expert-advice
     software)
  AdHawk Spacer (Tablespace reorganization)     OM 7-8.1995 p.103

  Network General with Expert Sniffer          OM 9-10.1995 p.69
    from Network General
    Can find out network problems
    including SQL*Net

  SQL-BackTrack from DataTools                 OM 9-10.1995 p.148
   Backup & Recovery tool

  DBGENERAL's Tablespace Manager for            OM 5-6.1995 p.126
  Oracle from Bradmark Computer Systems

  CheckMate from Bit by Bit                    OM 5-6.1995 p.21
   (archiving and more)

Development Tools-----------------------------------------

  KUMARAN Systems, Inc.
    GUI*Converter from Forms 2/3                OM 9-10.1995 p.107
      to Forms 4.5

  SQR3 Workbench from MITI                      OM 3-4.1995 p.110
    (Report writer)                             OM 9-10.1995 p.64

  Impromptu from Cognos                        OM 9-10.1995 p.75
    (Report writer)

  Forest & Trees from Trinzic Corporation      OM 9-10.1995 p.89
    Simple ad hoc query & reporting
  InfoPump (replication Software)               OM 7-8.1995 p.47

  OSF/Motif based products from                OM Fall 1994 p.137
  Bluestone Consulting
   (Development environment, Cross
    platform Toolset)
  dbViewer - a browsing and editing tool        OM 5-6.1995 p.126

  Pro/Helpdesk from Prolin                      OM 3-4.1995 p.111
   (FORMS-based problem-solving and
   management-improvement application)

  R&R report writer from Concentric
   Data Systems

  Parasoft's BenchWorks                         OM 1-2.1995 p.121

  Acu4GL from Acucobol

  IntelligenceWare products                     OM 1-2.1995 p.127

  NeXT's Enterprise Objects Framework           OM 3-4.1995 p.108
   (object-building software)                   OM 7-8.1995 p.103

  SQL Power Plan Analyzer from                 OM 9-10.1995 p.135
    SQL Power Tools
   (Explain Plan graphic analyzer
    Statistics analyzer
    Index Analyzer
    SQL Impact Analysis Browser,
    Documenter ... )

  SQL Power Tools from the Development          OM 5-6.1995 p.105
   Group for Advanced Technology, Inc.
   (Documenter, Impact Analysis, Migrator,
    Deadlock Predictor)

  SQL-BackTrack from Admiral Software          OM 5-6.1995 p.112
  Limited.
    (Backup/recovery tools)

  GEO/SQL from Geo/SQL corporation             OM 5-6.1995 p.125
   (GIS, database management and
    mapping software)

  RunHost from Useful Technology              OM 5-6.1995 p.126
   (Permits Forms 4.x-based applications to
    initiate UNIX processes performed on the
    server and get their return values)

  WorkTech Reporter from Work                 OM 5-6.1995 p.126
    Technology
    (internal-report-writing-tool)

  Application Manager from MORE Systems       OM 7-8.1995 p.77
   (managing database and Application
   Security)

  DemoSHIELD from Stirling Technologies         OM 5-6.1995 p.126
   (Visual development tool)

  7000 Series Network Probe from
  Network Communications
   (A point-to-point protocol option)

  NetMagic from NetMagic Systems                OM 7-8.1995 p.101

  SequeLink from TechGnosis                      OM 7-8.1995 p.49
   (high performance connection to
   many databases, middleware)

  Oracle Mobile Agent & Radio Agent             OM 7-8.1995 p.101
   from RAM Mobile Data

  Action Request (AR) from Remedy                OM 7-8.1995 p.102
   Corporation

 DSS tools-----------------------------------------------------

  Data Query & Data Pivot from                 OM Fall 1994 p.139
    Brio Technology

  BrioQuery from Brio Technology                OM 3-4.1995 p.108
     (Win and MAC based full-featured query,    OM 9-10.1995 p.23
     analysis and reporting tools.
     ODBC, SQL*Net)

  Intelligent Query (IQ Software)                OM 1-2.1995 p.121
   IQ Access. Permits user management.           OM 9-10.1995 p.6
                                                OM 5-6.1995 p.48

  SQLASSIST from Software Interfaces, Inc.     OM Fall 1994 p.141
      X/Motif Based
  ProReport & ProServer from Platinum          OM 9-10.1995 p.148
    Software Interfaces, Inc.

  FindOut! from Open Data Corporation           OM Fall 1994 p.137
    DSS (Decision Support) products

  ImageNet from Technology, Management, &       OM Fall 1994 p.139
  Analysis.
   Document Management products

  PLEXUS XDP. Imaging software from             OM 1-2.1995 p.124
     RECOGNITION INTERNATIONAL.

  SQL*VOX. Voice processing system from         OM 1-2.1995 p.124
    Minglewood Data Systems

  EMPOWER/CS - New client/server load           OM 1-2.1995 p.125
    testing tool from Performix

 ZDC from Zitel                                 OM 5-6.1995 p.126
  (an intelligent data-management subsystem)

  Packaged Applications-------------------------------

  Siebel Sales Information System               OM 1-2.1995 p.122
                                                OM 9-10.1995 p.10

  MCS-II from HSBRT                            OM Fall 1994 p.131
    (Maintenance Management system
    for large industrial plants)

  Nordic Charts from Nordic Systems              OM 3-4.1995 p.110
    (Oracle-based accounting and
    manufacturing system. Runs on
    more than 100 platforms)

  SQL*TIME. Financial application from          OM 1-2.1995 p.124
  Minglewood Data Systems
  Release for Oracle Forms 4.5                  OM 5-6.1995 p.125

  CONTROL from KCI Computing                    OM 3-4.1995 p.110
   (Tool for Oracle Financials)

  Voyant from Brossco                          OM Fall 1994 p.131
     DSS system

  SupportTrak from Aurum Software               OM Fall 1994 p.137
    Customer service and quality management     OM 7-8.1995 p.103
    TeleTrak, QualityTrak
    (customer support, quality tracking)

  EMIS - Enterprise Management Information      OM 1-2.1995 p.125
  system from ESI/Technologies

  Vantive System 3 from Vantive                OM Fall 1994 p.141
    Customer services software

  SQL*Operator from ISA Corporation            OM 9-10.1995 p.116
   (Output management, batch scheduling,
    background processing)

  ALCIE from ORANGE Systems                    OM 9-10.1995 p.119
   (CASE based Financial Software)

  Enterprise MPAC from TSW International       OM 5-6.1995 p.102

  Chalkboard from Breckenridge Software        OM 7-8.1995 p.100
   (Brokering service)

  SPS/Oracle7 from Saratoga Systems           OM 7-8.1995 p.102
   (account-management and sales
    information software)

  Vantive Enterprise from Vantive              OM 7-8.1995 p.102
   Corporation (Sales)

  CustomerFirst from Repository               OM 7-8.1995 p.102
  Technologies
   (Inventory-management)

  Executive Information System from           OM 7-8.1995 p.103
   Staffware

  Visual Information Manager for              OM 7-8.1995 p.103
  Macintosh from Aperture Technologies

  ClearExpress WebSupport from Clarify        OM 7-8.1995 p.103

  VentoMap and VentoSales from                OM 7-8.1995 p.103
   Vento Software

  GENESYS Enterprise Series from              OM 7-8.1995 p.103
   GENESYS Software Systems
   (Human Resource management)

Multi-vendor DBMS Integration----------------------------------

  Replic-Action from Casahl Technology         OM 9-10.1995 p.145
    Composer + server. Bidirectional
    synchronization on a scheduled basis

  Prism Warehouse Manager and Directory        OM 9-10.1995 p.146
   Manager from Prism Solutions
   Generates code to extract data from
   source databases, integrates data
   from various sources, and then transforms
   and loads it into the target database

  ReTarGet from Rankin Companies               OM 9-10.1995 p.146
    Data-migration from legacy.

  macControl from Breitschwerdt                 OM 5-6.1995 p.123
   Information Systems
   EIS/DSS on Apple Macintosh

  TOP OF MIND from The Molloy Group            OM 5-6.1995 p.126
    ( A customer-service application)

  Catalyst Warehouse Management System         OM 5-6.1995 p.126
  (WMS) from Catalyst

  Notrix Composer from Percussion              OM 7-8.1995 p.101
    Software

      Z.3   Platforms
            ---------

                   DESKTOP
                   -------

  Data Warehousing for Workgroups. By           OM 1-2.1995 p.109
  Marilyn Schneider, Oracle
   (How to use snapshots and links in workgroup
    servers. Object Manager. Useful paper)

  The Enterprise on Your Desktop (Part 1)      OM 9-10.1995 p.127
   By Mani Subramanyam, Oracle Desktop
   (Interesting. Using teleprocessing
    monitors (TP Monitors))

  Integrating the Desktop Into the               OM 5-6.1995 p.9
    Enterprise. By Timothy O'Brien
    General directions

   From the Desktop Working in Concert           OM 5-6.1995 p.30
    to the Data Center. By Kelli Wiseth
    (Future of enterprise organizational
     structure)

                   UNIX
                   ----

  UNIX Clustering Technology. By Hrishi         OM 1-2.1995 p.103
  Kamat, Oracle.
    (Using OPO. Sequent, Pyramid, IBM, AT&T,
     DG, HP, SUN, DEC support clusters on
     UNIX. Used for fault tolerance)

                 Personal Oracle
                 ---------------

  Personal Oracle7 for Windows                  OM 3-4.1995 p.100
   By Liff Stengaard Thomas, Oracle
    (Overview)

  Personal Oracle Enterprise Edition            OM 9-10.1995 p.32
   7.1.4. Includes distributed option,
   requires 12M RAM, Extension of
   PO Standard Edition. Contains
   OO for OLE, ODBC, SQL*Plus,
   SQL*Net, GUI Administration Tools
   (DB Manager, User Manager, Object
    Manager, Session Manager, DB
    Expander, Backup & Recovery Manager,
    SQL*Loader, EXP/IMP)

                   NetWare
                   -------

  Configuring and Tuning Oracle7                 OM 5-6.1995 p.95
   for NetWare. By Bryon Georgson, Compaq
   (Excellent! Plenty of useful advice!)

  Managing Oracle7 in a NetWare Environment      OM 7-8.1995 p.92
   By Earl Shaffer, DBA at Barron Collier
   Partnerships.
     (Good complement to previous paper,
     backup/recovery, mirroring...)

  Oracle Server for NetWare - the Next          OM 5-6.1995 p.103
   Generation. By Larry Robinson, Oracle
   (SMP, AMP for NetWare, Good paper,
   Very Interesting).

                   OS/2
                   ----

  IBM's OS/2 WARP Is Truly Client/Server        OM 7-8.1995 p.95
  By Jagdish Pammani, Oracle.
    (WARP architecture and future are
     described. Interesting.)

                   UnixWare
                   --------

                     SCO
                     ---

  Configuring and Tuning Oracle7 for           OM 9-10.1995 p.117
  SCO UNIX on Compaq Servers.

            Digital Platforms
            -----------------

  Digital's AdvantageCluster Architecture       OM 3-4.1995 p.98

  Oracle Takes Advantage of 64-bit              OM 5-6.1995 p.111
   Technology. By John Nordlinger, Oracle
   (Small but very informative
    1 page paper)

  Oracle7 64-Bit VLM Capability Makes           OM 7-8.1995 p.89
   Digital UNIX Transactions Blazingly
   Fast. By John Nordlinger, Oracle
      (Large system global area and big
       Oracle blocks (BOB) on 14Gb Alpha)

                   OpenVMS
                   -------

               Digital UNIX (OSF/1)
               --------------------

  DEC OSF/1 Cluster Configurations.            OM Fall 1994 p.103
   By Bonnie Crater, Oracle

                     IBM S/390, MVS Mainframes
                     -------------------------

  The Mainframe As Data Warehouse                OM 1-2.1995 p.100
      The third paper in the S/390 series

  Oracle7 and OLTP on the S/390
  Parallel Mainframe                            OM 3-4.1995 p.90

  Parallel Systems Management                   OM Fall 1994 p.97
   By Bruce Olsen, Oracle
      (Oracle7 on S/390)

  Oracle for MVS Client Solution               OM 9-10.1995 p.32

  Parallel Query Option Performance            OM 9-10.1995 p.111
    by Barry Perkins, Oracle
     (IBM MVS oriented).

  Opening Up the Mainframe by Bruce           OM 7-8.1995 p.86
   Olsen, Oracle.
   (MLS in Arabic, I18N + L10N)

      Z.5  Scientific and General Issues, Methodology
           ------------------------------------------

                Development
                -----------

  Rapid Application Development. By Dennis      OM 1-2.1995 p.79
     Moore.
     (Concepts, advice...)

  Extending the Reach of Oracle7.              OM 9-10.1995 p.133
   By Jay Mehta, Mobil Oil Corporation
   (Excellent. Communication with an application
    server via the DBMS_PIPE package. Good examples,
    clear language)

                Methodology
                ------------

  Total Performance Management (Part 1)        OM 5-6.1995 p.83
  by Craig Shallahamer.
    (Holistic approach to performance
    management. Too much filler.)

  Total Performance Management (Part 2)        OM 7-8.1995 p.75
  by Craig Shallahamer.
   (Several tools that help the UNIX
    performance specialist are mentioned)

                Applications
                ------------

  Data Conversion from Legacy Systems to        OM 1-2.1995 p.85
  Oracle Applications. By Anne Blaylock &
  Walt Zerkow, Oracle
   (Big technical paper, some good advice)

  The Logical Shell Games                       OM 3-4.1995 p.75
   By Michael Sanders, Oracle
    (A new world of IS management and control)

   Using Global Registers to Integrate          OM 9-10.1995 p.87
   Your Enterprise Information System
   By Cedric Dettmar, Oracle.
    (Some useful recommendations on how
     to develop an enterprise's shared master
     list of reference information, helping
     to integrate legacy applications into
     Oracle and intelligently manage the
     enterprise)

  Strategies and Techniques for Using         OM 9-10.1995 p.97
   Oracle7 Replication: Part 1
   by Dominic J. Delmolino, Oracle
   Consulting

    SMP, Clusters, Massively Parallel Architectures
    ----------------------------------------------

  SMP or Massively Parallel Processors         OM 3-4.1995 p.94
  By Henry Robinson, Pyramid Technology

                 Object Orientation
                 ------------------

  Relational Object Bases.                      OM 1-2.1995 p.81
   By Hugh Ujhazy, Malaysia

                Operating Environment
                --------------------

  How Big Is Your Database?                    OM 5-6.1995 p.113
   (Surveys of Oracle-based production
   system sizes, numbers of users ...)

                 Miscellaneous Issues
                 ---------------------

  A Strategic Approach to Mission-Critical     OM 9-10.1995 p.37
   Applications by Philip J. Gill, San Diego
   freelance writer.
   (A big clever paper)

  Distributed-Enterprise Challenges             OM 7-8.1995 p.11
   (Scalability, Productivity,...)

  Distributed Enterprise, Thorny Problems.     OM 7-8.1995 p.36
   Intelligent Solutions. By Paula
   Borsook.
   (Network topologies, tactics for managing
   replicated data, mobile-line security
   problems. There are many interesting sidebars
   inside the paper)

  Data Warehousing: Rehashing Decision Support. OM 1-2.1995 p.9
       By Kevin Strange, Gartner Group.

  The True Nature of a Data Warehouse. By       OM 1-2.1995 p.128
     Wayne Eckerson, Patricia Seybold Group.
        (Small and clear paper).

  Oracle7 Release 7.1 Parallel Everything       OM 1-2.1995 p.38
  Architecture. By Kelly Wiseth.
  (Introduction, Tools of the Trade,
   Warehouses)

  Beyond the Data Warehouse. By Robert          OM 1-2.1995 p.40
    Tuttle, AT&T.
    (Concepts and future, small paper)

  Seven Steps to Building a Data Warehouse      OM 1-2.1995 p.43
   (Guide to planning and building
    the data warehouse)

      Z.7 Oracle Applications
          -------------------

Oracle Applications Release 10SC                OM 9-10.1995 p.19

      Z.8 Banking and Other Success Stories
          ---------------------------------

  Banking on Quality                            OM 7-8.1995 p.52
  (Belgian Kreditbank chose Oracle
  for customer support)

  Money in the Bank                             OM 1-2.1995 p.52
   (CDE in the First National Bank of Chicago
    (FNBC))

  The French Collection                        OM 9-10.1995 p.58
     (Oracle in French tax services)

  Tracking Crime on the Desktop                OM 5-6.1995 p.40
    (An Oracle-based police system)

  The House that Mosprivat Built               OM 5-6.1995 p.51
    (Oracle in Mosprivatizaciya, Moscow)

  Equipping Contractors in the                 OM 5-6.1995 p.38
    Heartland. By Joel N. Sburkin
    An Oracle-based desktop system matches
    buyers and sellers of heavy equipment
    at Heartland Communications.

  The Rebuilding Mission: Ahlsell AB          OM 9-10.1995 p.46
    Oracle in Sweden's largest building-supply
    wholesaler.

 Driving the Open Road.                        OM 9-10.1995 p.52
  Ford Motor Company uses Oracle7 based
  software to build a technology infrastructure.

 World Watch. By Michael Miley.                  OM 7-8.1995 p.44
  (Oracle-based system at INTELSAT, the largest
  commercial satellite service provider)

                Z.9 User Groups
                    -----------

   Integration Oracle User Groups Add Service  OM Fall 1994 p.129
   and Support

   News from the Americas, Asia/Pacific, and     OM 3-4.1995 p.107
   Europe

   News from Groups Around the Globe            OM 5-6.1995 p.128

   Users Gather from Around the World           OM 7-8.1995 p.99
     (Calendar of Events)

   Coming Up: International Oracle User Week   OM 9-10.1995 p.143

    Z.10  Hardware
          --------

                Miscellaneous
                -------------

  Tadpole Notebook Computer                     OM 3-4.1995 p.111

  Storage Co-Processor from ZITEL              OM 9-10.1995 p.146
                                                OM 7-8.1995 p.94

                    Bibliography
                    ------------

OM            Oracle Magazine

PH95          Proceedings of IOUW'95, "Building Independence With
              Oracle". Pennsylvania Convention Center,
              Philadelphia, Pennsylvania, USA, September 17-22,
              1995. Two volumes.

DPD           Database Programming & Design

  Oracle Press ----------------------------

[COR95]       Tuning Oracle. By Michael Corey, Michael Abbey,
              Daniel J. Dechichio, Jr. McGraw-Hill, Inc., 1995,
              544 pages.

[ABB95]       Oracle: A Beginner's Guide. 1995,
              650 pages.

[LON94]       DBA Handbook. By Kevin Loney. McGraw-Hill, Inc.,
              1994, 608 pages.

[FEU95]       Oracle PL/SQL. By Steven Feuerstein. O'Reilly &
              Associates, 1995, 800 pages.

[VEL95]       Oracle Backup and Recovery Handbook. By Rama
              Velpuri, 1995, 452 pages.
              Referenced: OM 9-10.1995 p.141

[MCL95]       Oracle Developer's Guide. By David McClanahan,
              1995. 608 pages.

[KOC95]       Oracle: The Complete Reference, 3rd edition. By
              George Koch and Kevin Loney, 1995, 1104 pages.

[COX95]       Oracle Workgroup Server Handbook. By Thomas B. Cox,
              1995, 320 pages. Covers Personal Oracle7 for
              Windows.

[KOL95]       Oracle Power Objects Handbook. By Bruce Kolste and
              David Petersen, 1995, 512 pages.

[FIN95]       Oracle Power Objects Developer's Guide. By Richard
              Finkelstein and Rick Greenwald, 1995.

+ Be well,right now. ------------------------------------------------+
OpenVMS System Manager,Delta Telecom JSC       Cel:  7+ (901) 971-3222
191119,Russia, St.Petersburg,                  Fax:  7+ (812) 115-1035
Transportny per. 3                   http://WWW.Levitte.ORG/~RLaishev/
+ MailTo:Laishev@SMTP.DeltaTel.RU ----------- SysMan rides HailStorm +
