Monday, June 21, 2010

Data Mart

Data Mart:
A data mart is a simple form of a data warehouse that is focused on a single subject (or functional area), such as Sales, Finance, or Marketing. Data marts are often built and controlled by a single department within an organization. Given their single-subject focus, data marts usually draw data from only a few sources. The sources could be internal operational systems, a central data warehouse, or external data.

Another definition of Data Mart:
A data mart is a subject-oriented block of data in the data warehouse, aligned with a business line such as production, sales, or marketing. There are two kinds of data marts: independent and dependent.
What is Metadata?
Metadata is information about the data. For a data mart, metadata includes:

01. A description of the data in business terms
02. Format and definition of the data in system terms
03. Data sources and frequency of refreshing data

How Is It Different from a Data Warehouse?

A data warehouse, unlike a data mart, deals with multiple subject areas and is typically implemented and controlled by a central organizational unit such as the corporate Information Technology (IT) group. Often, it is called a central or enterprise data warehouse. Typically, a data warehouse assembles data from multiple source systems.

Dependent and Independent Data Marts:

There are two basic types of data marts: dependent and independent. The categorization is based primarily on the data source that feeds the data mart. Dependent data marts draw data from a central data warehouse that has already been created. Independent data marts, in contrast, are standalone systems built by drawing data directly from operational or external sources of data, or both.

The main difference between independent and dependent data marts is how you populate the data mart; that is, how you get data out of the sources and into the data mart. This step, called the Extraction, Transformation, and Loading (ETL) process, involves moving data from operational systems, filtering it, and loading it into the data mart.

With dependent data marts, this process is somewhat simplified because formatted and summarized (clean) data has already been loaded into the central data warehouse. The ETL process for dependent data marts is mostly a process of identifying the right subset of data relevant to the chosen data mart subject and moving a copy of it, perhaps in a summarized form.

With independent data marts, however, you must deal with all aspects of the ETL process, much as you do with a central data warehouse. The number of sources is likely to be smaller, and the amount of data less than in a central warehouse, given the focus on a single subject.
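
For illustration, here is a minimal sketch of a dependent data mart load; the warehouse tables (wh_sales, wh_products) and the Sales-mart column names are assumptions, not part of the original text:

-- Dependent data mart: copy a summarized subset of already-cleansed warehouse data.
-- All table and column names below are illustrative.
CREATE TABLE sales_mart AS
SELECT s.sale_date,
       p.product_category,
       SUM(s.amount) AS total_amount,
       COUNT(*)      AS sale_count
FROM   wh_sales    s
JOIN   wh_products p ON p.product_id = s.product_id
GROUP  BY s.sale_date, p.product_category;

An independent data mart would run essentially the same kind of load, but directly against the operational or external sources, with the cleansing and transformation done as part of the load.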


Steps for Implementing a Data Mart:
The steps are:

01. Designing
02. Constructing
03. Populating
04. Accessing
05. Managing


Designing:
The design step is first in the data mart process. This step covers all of the tasks from initiating the request for a data mart through gathering information about the requirements, and developing the logical and physical design of the data mart.
The design step involves the following tasks:

01. Gathering the business and technical requirements.
02. Identifying data sources.
03. Selecting the appropriate subset of data.
04. Designing the logical and physical structure of the data mart.


Constructing:
This step includes creating the physical database and the logical structures associated with the data mart to provide fast and efficient access to the data.
This step involves the following tasks:
01. Creating the physical database and storage structures, such as tablespaces, associated with the data mart.
02. Creating the schema objects, such as tables and indexes defined in the design step.
03. Determining how best to set up the tables and the access structures.
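
As a rough illustration of these constructing tasks, a minimal sketch with assumed names and sizes (tablespace, fact table, and index for a hypothetical Sales data mart):

-- Names, file path, and sizes are illustrative assumptions.
CREATE TABLESPACE sales_mart_data
  DATAFILE '/u01/oradata/salesmart01.dbf' SIZE 500M;

CREATE TABLE sales_fact (
  sale_id    NUMBER(10)    PRIMARY KEY,
  sale_date  DATE          NOT NULL,
  product_id NUMBER(10),
  amount     NUMBER(12,2)
) TABLESPACE sales_mart_data;

-- Access structure to support the expected date-range queries.
CREATE INDEX sales_fact_date_idx ON sales_fact (sale_date);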

Populating:
The populating step covers all of the tasks related to getting the data from the source, cleaning it up, modifying it to the right format and level of detail, and moving it into the data mart.

The populating step involves the following tasks:

01. Mapping data sources to target data structures.
02. Extracting data.
03. Cleansing and transforming the data.
04. Loading data into the data mart.
05. Creating and storing metadata.
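
A minimal sketch of the populating tasks, assuming a hypothetical operational source table ops_orders feeding the sales_fact table sketched earlier (the cleansing and transformation rules shown are illustrative):

-- Extract from the source, cleanse/transform, and load into the data mart.
INSERT INTO sales_fact (sale_id, sale_date, product_id, amount)
SELECT o.order_id,
       TRUNC(o.order_date),   -- transform: normalize to day-level detail
       o.product_id,
       NVL(o.amount, 0)       -- cleanse: replace missing amounts with zero
FROM   ops_orders o
WHERE  o.order_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -1);  -- previous month onward
COMMIT;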

Accessing:
The accessing step involves putting the data to use: querying the data, analyzing it, creating reports, charts, and graphs, and publishing these. Typically, the end user uses a graphical front-end tool to submit queries to the database and display the results of the queries.

The accessing step requires that you perform the following tasks:

01. Set up an intermediate layer for the front-end tool to use. This layer, the metalayer, translates database structures and object names into business terms, so that the end user can interact with the data mart using terms that relate to the business function.
02. Maintain and manage these business interfaces.
03. Set up and manage database structures, like summarized tables that help queries submitted through the front-end tool execute quickly and efficiently.
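
For task 03, a common structure is a summary table or materialized view; a minimal sketch using the assumed sales_fact table from the earlier steps:

-- Pre-summarized data so that front-end queries run quickly and efficiently.
CREATE MATERIALIZED VIEW sales_by_month_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT TRUNC(sale_date, 'MM') AS sale_month,
       product_id,
       SUM(amount)            AS total_amount
FROM   sales_fact
GROUP  BY TRUNC(sale_date, 'MM'), product_id;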

Managing:
This step involves managing the data mart over its lifetime. In this step, you perform management tasks such as the following:

01. Providing secure access to the data.
02. Managing the growth of the data.
03. Optimizing the system for better performance.
04. Ensuring the availability of data even with system failures.

Tuesday, April 20, 2010

INTERVAL Problem with DBMS_JOB Package

The interval expression of a job is evaluated AFTER the job executes. If you want the job to run at the top of the hour and you specify an interval of 'sysdate+1/24', recognize that SYSDATE is evaluated after the job completes; so if the job takes 5 minutes to run, it will execute at times like 1:00, 2:05, 3:10, 4:15, 5:20, and so on. If you want the job to run on the hour, every hour, the interval must account for any delay in the start of the job as well as its elapsed run time. Just make sure the expression used for the interval evaluates to the value you want; for example, to schedule a job to run at the top of each hour, specify the interval as 'trunc(sysdate,''HH24'')+1/24'.

Examples:

Run every day at 8:00 in the morning:

dbms_job.submit(:jno, 'xyz;', trunc(sysdate)+1/3, 'trunc(sysdate+1)+1/3');
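
For the top-of-the-hour case discussed above, a sketch in the same style (xyz is just a placeholder procedure, as in the previous example):

dbms_job.submit(:jno, 'xyz;', trunc(sysdate,'HH24')+1/24, 'trunc(sysdate,''HH24'')+1/24');
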
Example For Choosing date & time:

---------------------------------

select trunc(sysdate+1)+1/24*8 "08:00 AM",
       trunc(sysdate+1)+1/24*9 "09:00 AM",
       trunc(sysdate+1)+1/24*10 "10:00 AM"
from dual;





Sample Output:
==========

08:00 AM                09:00 AM                10:00 AM
---------------------   ---------------------   ---------------------
4/20/2010 8:00:00 AM    4/20/2010 9:00:00 AM    4/20/2010 10:00:00 AM

For more, see the following link:
http://www.pafumi.net/dbms_job.htm

Wednesday, March 24, 2010

What is SQL?

What is SQL?

SQL is pronounced “Ess-Queue-Ell” or “Sequel,” depending on the source and location of your training. The meaning of the acronym SQL is Structured Query Language. SQL is the language used to access data from a relational database, such as Oracle Database. SQL was originally devised as a computer language with the development of the IBM relational database called System R. The idea of SQL originates from the relational data model. SQL is used to access subsets of data, in the form of rows or tuples from a larger data set. A data set is represented by a related group of entities, in a data model of a set of tables, in an Oracle Database schema.


What is DUAL Table?

Every DML statement creates an implicit cursor. A cursor is an area in memory allocated for the results of an SQL statement. SELECT statements require a source table for the implicit cursor to operate on. The DUAL table is required as a dummy table, to allow the SELECT statement to retrieve data not stored in a table, view, or other database object. The DUAL table is a repository for an expression result applied to a single value. The DUAL table acts as a temporary repository for the result of an expression. The result of the expression is selected from the DUAL table, as in the following examples:

SELECT * FROM DUAL;

SELECT SYSDATE FROM DUAL;

SELECT USER FROM DUAL;

SELECT 'This is a string' FROM DUAL;


What is NULL value?

There are several things to remember about NULL:

  • NULL represents nothing.
  • NULL is not even a space character.
  • NULL is not the same as zero.
  • NULL values are not always created in indexes.
  • Most functions return NULL when passed a NULL value.
  • Test for NULL using IS [NOT] NULL.
  • An expression containing a NULL returns NULL.
  • Use the NVL(value, replace) function to account for NULL values in expressions.
  • NULL values sort as the highest value.
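
A short illustration of testing for NULL and using NVL, against the standard emp table (with its nullable comm column):

-- Rows where comm has no value; "comm = NULL" would never match.
SELECT ename FROM emp WHERE comm IS NULL;

-- NVL substitutes 0 for NULL so the whole expression does not become NULL.
SELECT ename, sal + NVL(comm, 0) AS total_pay FROM emp;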


What Is the Parser?

The meaning of the word parsing is syntactic, and perhaps grammatical or semantic, analysis of a sentence. In programming terms, parsing is a syntax check on a line of program code and between related lines. SQL statements are lines of program code, so the parser is effectively a compiler of SQL statements, much as a C compiler is a syntax analyzer of C programming language commands.

Parses can be split into two categories:

Hard parse: No match is found in the shared pool for a SQL statement never before submitted to the database engine.

Soft parse: A match is found for a SQL statement, perhaps submitted to the database engine by a different session.

Note: Hard parses are more expensive in system resources than soft parses, but reparsing is best avoided altogether if possible.
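
One standard way to benefit from soft parsing (not part of the quoted text, but common practice) is to use bind variables so the SQL text stays identical and can be matched in the shared pool:

-- Literal values make each statement textually different (more hard parses):
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- A bind variable keeps the text identical, so repeated executions can soft parse:
SELECT ename FROM emp WHERE empno = :empno;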


Why is a COMMIT statement faster than a ROLLBACK statement?
What occurs in the database during the processing of these statements?

  • Before COMMIT or ROLLBACK:
      - Redo logs are written.
      - The database is changed.
      - Rollback is written.
  • On COMMIT:
      - Rollback is deleted.
  • On ROLLBACK:
      - Rollback is recorded in the redo logs.
      - Rollback is applied to the database.
      - Rollback is deleted.


How to tune a JOIN?

There are several factors to consider:

  • Use equality first.
  • Use range operators where equality does not apply.
  • Avoid use of negatives in the form of != or NOT.
  • Avoid LIKE pattern matching.
  • Try to retrieve specific rows, and in small numbers.
  • Filter from large tables first to reduce the number of rows joined. Retrieve tables in order from the most highly filtered table downward, preferably the largest table, which has the most filtering applied.
  • Use indexes wherever possible, except for very small tables.
  • Let the optimizer do its job.
  • Consider materialized views and query rewrite. Materialized views are somewhat out of scope for OLTP databases, but perhaps not so in the future.
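
A small sketch of the first few points, using hypothetical orders and customers tables:

-- Equality join and an equality filter on the large table; range only where needed.
SELECT o.order_id, c.cust_name
FROM   orders o
JOIN   customers c ON c.cust_id = o.cust_id
WHERE  o.status = 'OPEN'
AND    o.order_date >= DATE '2010-01-01';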


Ref: Oracle® Performance Tuning for 10gR2 (Second Edition), by Gavin Powell

Thursday, March 11, 2010

To Get User Environment Value From ORACLE Database Using USERENV Parameter.

Select SYS_CONTEXT('USERENV','TERMINAL') terminal,
SYS_CONTEXT('USERENV','LANGUAGE') language,
SYS_CONTEXT('USERENV','SESSIONID') sessionid,
SYS_CONTEXT('USERENV','INSTANCE') instance,
SYS_CONTEXT('USERENV','ENTRYID') entryid,
SYS_CONTEXT('USERENV','ISDBA') isdba,
SYS_CONTEXT('USERENV','NLS_TERRITORY') nls_territory,
SYS_CONTEXT('USERENV','NLS_CURRENCY') nls_currency,
SYS_CONTEXT('USERENV','NLS_CALENDAR') nls_calendar,
SYS_CONTEXT('USERENV','NLS_DATE_FORMAT') nls_date_format,
SYS_CONTEXT('USERENV','NLS_DATE_LANGUAGE') nls_date_language,
SYS_CONTEXT('USERENV','NLS_SORT') nls_sort,
SYS_CONTEXT('USERENV','CURRENT_USER') current_user,
SYS_CONTEXT('USERENV','CURRENT_USERID') current_userid,
SYS_CONTEXT('USERENV','SESSION_USER') session_user,
SYS_CONTEXT('USERENV','SESSION_USERID') session_userid,
SYS_CONTEXT('USERENV','PROXY_USER') proxy_user,
SYS_CONTEXT('USERENV','PROXY_USERID') proxy_userid,
SYS_CONTEXT('USERENV','DB_DOMAIN') db_domain,
SYS_CONTEXT('USERENV','DB_NAME') db_name,
SYS_CONTEXT('USERENV','HOST') host,
SYS_CONTEXT('USERENV','OS_USER') os_user,
SYS_CONTEXT('USERENV','EXTERNAL_NAME') external_name,
SYS_CONTEXT('USERENV','IP_ADDRESS') ip_address,
SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') network_protocol,
SYS_CONTEXT('USERENV','BG_JOB_ID') bg_job_id,
SYS_CONTEXT('USERENV','FG_JOB_ID') fg_job_id,
SYS_CONTEXT('USERENV','AUTHENTICATION_TYPE') authentication_type,
SYS_CONTEXT('USERENV','AUTHENTICATION_DATA') authentication_data,
SYS_CONTEXT('USERENV','CURRENT_SQL') current_sql,
SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER') client_identifier,
SYS_CONTEXT('USERENV','GLOBAL_CONTEXT_MEMORY') global_context_memory
From Dual;


Thursday, March 4, 2010

Sequence, Nextval & Currval

Sequence:

A sequence is a database object that generates unique numbers, mostly used for primary key values. One can select the NEXTVAL and CURRVAL from a sequence. Selecting the NEXTVAL will automatically increment the sequence.

Another Definition,
A sequence is a highly scalable, non-blocking generator of unique numbers.

Nextval:
Invoking NEXTVAL on a sequence generates and returns a new unique number.

Currval:
CURRVAL can only be used after the session has already called NEXTVAL on that sequence (in a trigger or in a SQL statement); CURRVAL then returns the same number that was last generated by NEXTVAL.


Create a simple sequence:

CREATE SEQUENCE empseq;
 Sequence created.

Selecting from the sequence:
SQL> select empseq.nextval from dual;
   NEXTVAL
----------
         1

SQL> select empseq.nextval from dual;
   NEXTVAL
----------
         2

Note that nextval and currval return the same value for each row of a select:

SQL> select empseq.nextval, empseq.currval from dual;
   NEXTVAL    CURRVAL
---------- ----------
         3          3

In PL/SQL, up to Oracle 10g, nextval and currval have to be selected from dual:

declare
  n number;
begin

  select sequence_test.nextval into n from dual;

  dbms_output.put_line(n);

end;
/


In Oracle 11g, it is no longer necessary to select from dual:

declare
  n number;
begin

  n := sequence_test.nextval;

  dbms_output.put_line(n);

end;
/
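
A typical use of a sequence is supplying primary key values on insert, for example against the standard emp table:

INSERT INTO emp (empno, ename)
VALUES (empseq.NEXTVAL, 'SCOTT');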

Wednesday, March 3, 2010

Index, Bitmap Index, B-Tree Index

What is Index?

An index is a performance-tuning method of allowing faster retrieval of records. An index creates an entry for each value that appears in the indexed columns. By default, Oracle creates B-tree indexes.

Oracle's two major index types are Bitmap indexes and B-Tree indexes. B-Tree indexes are the regular type that OLTP systems make much use of, and bitmap indexes are a highly compressed index type that tends to be used primarily for data warehouses.

What is Bitmap Index?

A bitmap index is an indexing method that can provide both performance benefits and storage savings. Bitmap indexes are particularly useful for data warehousing environments because data is usually updated less frequently and ad hoc queries are more common.

Bit-mapped indexes are best for “low-cardinality” data (such as a column to indicate a person’s gender, which contains only two possible values: MALE and FEMALE).

Characteristic of Bitmap Indexes:
01.  For columns with very few unique values (low cardinality)
02.  Tables that have no or little insert/update are good candidates (static data in warehouse)
03.  Stream of bits: each bit relates to a column value in a single row of the table.

What is B-Tree Index?
B-tree indexes are usually created on columns containing the most unique values. A B-tree index is an ordered set of entries in which each entry contains a search key value and a pointer to a specific row with that value. When the server finds a search key value matching a constraint, the pointer is used to fetch the row.
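
For illustration, the two index types side by side, on hypothetical columns (the default B-tree needs no keyword, the bitmap type must be named explicitly):

-- B-tree index (the default) on a high-cardinality column:
CREATE INDEX emp_ename_idx ON emp (ename);

-- Bitmap index on a low-cardinality column, typical of a warehouse table:
CREATE BITMAP INDEX customers_gender_bix ON customers (gender);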


Difference between B-Tree & Bitmap Index

Table Function in Oracle

Table Function:

Table functions are functions that produce a collection of rows (either a nested table or a varray) that can be queried like a physical database table or assigned to a PL/SQL collection variable. You can use a table function like the name of a database table, in the FROM clause of a query, or like a column name in the SELECT list of a query.

A table function can take a collection of rows as input. An input collection parameter can be either a collection type (such as a VARRAY or a PL/SQL table) or a REF CURSOR.

What Are Table Functions?

Table functions return a collection type instance representing rows in a table. They can be queried like a table by calling the function in the FROM clause of a query, enclosed by the TABLE keyword. They can be assigned to a PL/SQL collection variable by calling the function in the SELECT list of a query.


For Example:

The collection type BookSet_t is defined as:

CREATE TYPE Book_t AS OBJECT
( name VARCHAR2(100),
  author VARCHAR2(30),
  abstract VARCHAR2(1000));
CREATE TYPE BookSet_t AS TABLE OF Book_t;

The CLOBs are stored in a table Catalogs:

CREATE TABLE Catalogs
( name VARCHAR2(30),
  cat CLOB);

Function GetBooks is defined as follows:

CREATE FUNCTION GetBooks(a CLOB) RETURN BookSet_t;

The query below returns all the catalogs and their corresponding book listings.

SELECT c.name, Book.name, Book.author, Book.abstract
  FROM Catalogs c, TABLE(GetBooks(c.cat)) Book;





Pipelined Function:
A pipelined function is one that returns a result set as a collection, but does so iteratively.

Example:

CREATE TYPE pet_t IS OBJECT (
   NAME                          VARCHAR2 (60),
   breed                         VARCHAR2 (100),
   dob                           DATE);
/

CREATE TYPE pet_nt IS TABLE OF pet_t;
/

CREATE OR REPLACE FUNCTION pet_family (dad_in IN pet_t, mom_in IN pet_t)
    RETURN pet_nt
    PIPELINED
IS
    l_count PLS_INTEGER;
    retval   pet_nt := pet_nt ();
BEGIN
    PIPE ROW (dad_in);
    PIPE ROW (mom_in);
 
    IF mom_in.breed = 'RABBIT'
    THEN
       l_count := 12;
    ELSIF mom_in.breed = 'DOG'
    THEN
       l_count := 4;
    ELSIF mom_in.breed = 'KANGAROO'
    THEN
       l_count := 1;
    END IF;
 
    FOR indx IN 1 .. l_count
    LOOP
       PIPE ROW (pet_t ('BABY' || indx, mom_in.breed, SYSDATE));
    END LOOP;
 
    RETURN;
END;
/


SELECT *
  FROM TABLE (pet_family (
                    pet_t ('Bob', 'KANGAROO', SYSDATE),
                    pet_t ('Sally', 'KANGAROO', SYSDATE)
                 )
              );



Output:

NAME     BREED       DOB
-----    --------    --------------------
Bob      KANGAROO    3/3/2010 11:24:41 AM
Sally    KANGAROO    3/3/2010 11:24:41 AM
BABY1    KANGAROO    3/3/2010 11:24:41 AM

Tuesday, February 23, 2010

Compile report rdf into rep with a batch.

REM WINDOWS COMPILE REPORT
::compile_report.bat
cls
Echo compiling Report .....
rem ---------------------------
rem begin command line argument
rem ---------------------------
for %%f IN (*.RDF) do rwconverter userid=stlbas/stlbas@stlbas_11 batch=yes source=%%f stype=rdffile DTYPE=repFILE compile_all=yes OVERWRITE=yes logfile=log.txt dest='D:\Report\REP\%%f'
rem -------------------------
rem end command line argument
rem -------------------------
ECHO FINISHED COMPILING

Monday, February 8, 2010

How to Connect with Access Database from Forms Developer?

Environment:
Windows XP
Oracle 10g XE
MS Access 2003
Forms 10g Rel-1


STEP I:
Create a simple access database. In this example, it is "C:\orafaq\Orafaq.mdb". This database contains a single table, called "emp" with the columns "ename" and "empno".

STEP II:
In the ODBC Data source administrator, you need to add a System DSN pointing to this database. So you select "MS Access Driver" and add the name "OrafAccess" in the data source name. Make sure you've added the correct .mdb file (orafaq.mdb)

STEP III:
Now we set up the HS init file. Its name follows the SID (here, initOrafAccess.ora). It contains only two parameter lines:

$ORACLE_HOME/hs/admin/initOrafAccess.ora:

#
# HS init parameters
#
HS_FDS_CONNECT_INFO = OrafAccess
HS_FDS_TRACE_LEVEL = 0



STEP IV:
Next, I changed the listener.ora of my XE database:
$ORACLE_HOME/network/admin/listener.ora:
added in SID_LIST:

    (SID_DESC =
         (SID_NAME = OrafAccess)
         (ORACLE_HOME = )
         (program = hsodbc)
        )


STEP V:
$ORACLE_HOME/network/admin/tnsnames.ora:
The setup of the HSODBC is done, and we need to make it accessible for our Oracle users. Like any other database there has to be an entry in the TNSNAMES.ORA file:

ORAFACCESS =
   (description =
    (address = (protocol=tcp)(host=localhost)(port=1521))
     (connect_data = (sid=orafaccess))
     (hs=ok)
   )


STEP VI:
Bounce the listener. You can do this before step V, but I just did it in this order.

STEP VII:
SQL*Plus:


SQL> create database link orafaccess using 'ORAFACCESS';

Database link created.

SQL> desc emp@orafaccess
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 empno                                              NUMBER(10)
 ename                                              VARCHAR2(50)

SQL> create synonym access_emp for emp@orafaccess;

Synonym created.

SQL> desc access_emp;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 empno                                              NUMBER(10)
 ename                                              VARCHAR2(50)

SQL>


STEP VIII:
In Forms we make a basic form:
- data block wizard
- table Access_emp
- key mode UPDATEABLE (rowid won't work)
- empno = primary key
- layout wizard
- tabular design

I added double quotes (") around the column names in the property palette. I also added the following triggers:
ON-COMMIT:

BEGIN
   COMMIT_FORM;
END;

ON-LOCK:

BEGIN
   NULL;
END;

ON-DELETE:

BEGIN
   DELETE_RECORD;
END;

--- Step Complete ---



Saturday, February 6, 2010

Calling .Net Web Services from Oracle PL/SQL

declare
     http_req  utl_http.req;
     http_resp utl_http.resp;
     request_env varchar2(32767);
     response_env varchar2(32767);
begin

request_env :=
'<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <LogMessage xmlns="http://tempuri.org/">
      <!-- the element names inside LogMessage are illustrative; the original envelope
           markup was lost in the post, so match them to the WSDL of the target service -->
      <message>This is my message</message>
    </LogMessage>
  </soap:Body>
</soap:Envelope>';
dbms_output.put_line('Length of Request:' || length(request_env));
dbms_output.put_line ('Request: ' || request_env);

http_req := utl_http.begin_request('http://wsXXXX/Test_WebService/Service.asmx', 'POST', utl_http.HTTP_VERSION_1_1);
utl_http.set_header(http_req, 'Content-Type', 'text/xml; charset=utf-8');
utl_http.set_header(http_req, 'Content-Length', length(request_env));
utl_http.set_header(http_req, 'SOAPAction', '"http://tempuri.org/LogMessage"');
utl_http.write_text(http_req, request_env);

dbms_output.put_line('');

http_resp := utl_http.get_response(http_req);
dbms_output.put_line('Response Received');
dbms_output.put_line('--------------------------');
dbms_output.put_line ( 'Status code: ' || http_resp.status_code );
dbms_output.put_line ( 'Reason phrase: ' || http_resp.reason_phrase );

utl_http.read_text(http_resp, response_env);
dbms_output.put_line('Response: ');
dbms_output.put_line(response_env);

utl_http.end_response(http_resp);

end;
/

Ref: http://www.lostechies.com/blogs/joshua_lockwood/archive/2007/09/14/calling-net-web-services-from-oracle.aspx 

Wednesday, January 27, 2010

Shrink a Table

CREATE OR REPLACE PROCEDURE dpr_table_shrink
IS
CURSOR c_table_name
IS
SELECT table_name
FROM user_tables
WHERE TEMPORARY = 'N' AND row_movement = 'DISABLED';

CURSOR c_table_shrink
IS
SELECT table_name
FROM user_tables
WHERE TEMPORARY = 'N' AND row_movement = 'ENABLED';

table_id INTEGER;
table_status INTEGER;
owner_name VARCHAR2 (200);
BEGIN
SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER')
INTO owner_name
FROM DUAL;

FOR i IN c_table_name LOOP
table_id := DBMS_SQL.open_cursor;
DBMS_SQL.parse (table_id,
'ALTER TABLE '
|| i.table_name
|| ' ENABLE ROW MOVEMENT',
DBMS_SQL.native
);
table_status := DBMS_SQL.EXECUTE (table_id);
DBMS_SQL.close_cursor (table_id);
END LOOP;

FOR i IN c_table_shrink LOOP
BEGIN
table_id := DBMS_SQL.open_cursor;
DBMS_SQL.parse (table_id,
'ALTER TABLE ' || i.table_name || ' SHRINK SPACE',
DBMS_SQL.native
);
table_status := DBMS_SQL.EXECUTE (table_id);
DBMS_SQL.close_cursor (table_id);
DBMS_OUTPUT.put_line ('Table shrink completed: ' || i.table_name);
EXCEPTION
WHEN OTHERS THEN
IF DBMS_SQL.is_open (table_id) THEN
DBMS_SQL.close_cursor (table_id);
END IF;

DBMS_OUTPUT.put_line ('Table can not shrink: ' || i.table_name);
END;
END LOOP;
END;
/

Wednesday, January 20, 2010

Compile an Invalid Object

CREATE OR REPLACE PROCEDURE Dpr_Compile_Obj
IS

CURSOR obj_cur IS
SELECT object_type,object_name
FROM user_objects
WHERE status = 'INVALID'
AND object_type IN ('PROCEDURE','FUNCTION','PACKAGE','PACKAGE BODY','VIEW','TRIGGER')
ORDER BY object_type,object_name;

BEGIN
FOR obj_rec IN obj_cur LOOP
DBMS_DDL.ALTER_COMPILE (obj_rec.object_type,user,obj_rec.object_name);
END LOOP;
END;
/

Generate a CSV file from ORACLE Database

CREATE OR REPLACE Procedure Get_Csv( p_query in varchar2,
p_dir in varchar2 ,
p_filename in varchar2 )
Is
l_output utl_file.file_type;
l_theCursor integer default dbms_sql.open_cursor;
l_columnValue varchar2(2000);
l_status integer;
l_colCnt number default 0;
l_separator char(1);
l_cnt number default 0;

Begin

l_output := utl_file.fopen( p_dir, p_filename||'.csv', 'w' );

dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );

For i in 1 .. 255 Loop
Begin
dbms_sql.define_column( l_theCursor, i, l_columnValue, 2000 );
l_colCnt := i;
Exception
when others then
if ( sqlcode = -1007 ) then exit;
else
raise;
end if;
End;
End Loop;

dbms_sql.define_column( l_theCursor, 1, l_columnValue, 2000 );

l_status := dbms_sql.execute(l_theCursor);

Loop
Exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );

l_separator := '';

for i in 1 .. l_colCnt loop
dbms_sql.column_value( l_theCursor, i, l_columnValue );
utl_file.put( l_output, l_separator || l_columnValue );
l_separator := ',';
end loop;

utl_file.new_line( l_output );
l_cnt := l_cnt+1;

End loop;

dbms_sql.close_cursor(l_theCursor);

utl_file.fclose( l_output );

End Get_Csv;
/
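
A hedged usage sketch: the directory name EXP_DIR and the query are illustrative, and the directory object must already exist and be writable by the database user.

-- Done once, by a privileged user; the path is an assumption:
-- CREATE DIRECTORY exp_dir AS '/tmp';
BEGIN
   Get_Csv('SELECT * FROM emp', 'EXP_DIR', 'emp_data');   -- writes emp_data.csv
END;
/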

Monday, January 18, 2010

Analyze Table & Indexes

CREATE OR REPLACE PROCEDURE ANALYZE_TABLE_INDEXES IS
CURSOR c_table_name
IS
SELECT table_name
FROM user_tables
WHERE TEMPORARY = 'N';

CURSOR c_index_name (table_name1 IN VARCHAR2)
IS
SELECT index_name
FROM user_indexes
WHERE table_name = table_name1 AND status = 'VALID';

owner_name VARCHAR2 (200);
BEGIN
SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER')
INTO owner_name
FROM DUAL;

FOR i IN c_table_name LOOP
DBMS_STATS.unlock_table_stats (ownname => owner_name,
tabname => i.table_name,
stattype => 'ALL'
);

BEGIN
DBMS_STATS.gather_table_stats
(ownname => owner_name,
tabname => i.table_name,
estimate_percent => DBMS_STATS.auto_sample_size,
CASCADE => DBMS_STATS.auto_cascade,
DEGREE => 2,
no_invalidate => DBMS_STATS.auto_invalidate,
granularity => 'AUTO',
method_opt => 'FOR ALL COLUMNS SIZE AUTO'
);

DBMS_OUTPUT.put_line ( 'Statistics gathered for table: '
|| i.table_name
);

EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.put_line
( 'Statistics can not gathered for table: '
|| i.table_name
);
END;

FOR j IN c_index_name (i.table_name) LOOP
BEGIN
DBMS_STATS.gather_index_stats
(ownname => owner_name,
indname => j.index_name,
estimate_percent => DBMS_STATS.auto_sample_size,
DEGREE => 2,
no_invalidate => DBMS_STATS.auto_invalidate,
granularity => 'AUTO'
);

DBMS_OUTPUT.put_line ( 'Statistics gathered for index: '
|| i.table_name
|| '.'
|| j.index_name
);

EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.put_line
( 'Statistics can not gathered for index: '
|| i.table_name
|| '.'
|| j.index_name
);
END;
END LOOP;

DBMS_STATS.lock_table_stats (ownname => owner_name,
tabname => i.table_name
);
END LOOP;
END;
/

Wednesday, January 13, 2010

How to create TNS_ADMIN environment variable?

01. First, create a directory named "0_library" on the C: drive.

02. Copy one of your TNSNAMES.ORA files into that directory.

03. Point the TNS_ADMIN environment variable to the "c:\0_library" directory; if that directory does not exist or is empty, the whole TNS_ADMIN setup makes no sense.

04. From then on, maintain only this TNSNAMES.ORA file; the other copies (in their \network\admin directories) should be renamed to, for example, TNSNAMES_OLD.ORA.

For further reading:
http://www.orafaq.com/forum/m/376523/72104/?srch=lengthy#msg_376523