Basic Concepts of XML

What is XML?
  • XML stands for eXtensible Markup Language. It is classified as an extensible language because it allows its users to define their own tags.
  • XML was developed to provide a universal format for describing structured documents and data.
  • There are no fixed tags in XML; any user can define his own set of tags. Although the tags look similar to HTML tags, they differ in what they represent.
  • Unlike HTML, which tags elements in Web pages for presentation by a browser (e.g. <B>Oracle</B>), XML tags elements as data (e.g. <company>Oracle</company>). In HTML, the <B> tag is a command to display the enclosed text in bold; in XML, <company> could for instance be a column name in a database, with Oracle as the column value.
Why do we use XML?
  • As XML is a W3C (World Wide Web Consortium) standard, software companies have openly accepted it and implemented it in their products.
  • It is a fee-free open standard.
  • It is platform-independent, language-independent, textual data.
  • XML can be used with existing web protocols (such as HTTP and MIME) and mechanisms (such as URLs), and it does not impose any additional requirements.
  • XML can handle any kind of information, in high volumes, especially over the internet and WWW.
  • It is Unicode-compatible, which means it can handle text in virtually any language.
  • It is used as an interface touch-point between the majority of applications. XML is replacing the age-old flat-file approach to sending and receiving data between applications.
Building blocks of XML

XML documents are made up of the following building blocks:
  • Elements
  • Attributes
  • Entities
  • PCDATA
  • CDATA
What are Elements?

Elements are the main building blocks of XML documents.

In the following example, "my_body" and "message" are XML elements. Elements can contain text, contain other elements, or be empty.
<my_body>some text</my_body>
<message>some other text</message>

What are Attributes?

Attributes provide extra information about elements.
Attributes are always placed inside the opening tag of an element, and they always come in name/value pairs. The following "images" element has additional information about a source file and its name:
Example:
<images location="computer.gif" name="some image name"/>

In the above example, images is an element, while location and name are its attributes.

What are Entities?

Some characters have a special meaning in XML; for example, the less-than sign (<) marks the start of an XML tag. To include such a character as data, write the corresponding entity, prefixed with an ampersand (&) and ended with a semicolon. The following entities are predefined in XML:
  • &lt;   (<)
  • &gt;   (>)
  • &amp;  (&)
  • &quot; (")
  • &apos; (')
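For example, to store the text a < b & c as element content, the special characters must be written as entities (the element name here is hypothetical):

```xml
<condition>a &lt; b &amp; c</condition>
```

A parser expands the entities, so an application reading this element sees the original text a < b & c.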

What is PCDATA?

PCDATA means parsed character data. Think of character data as the text found between the start tag and the end tag of an XML element.

PCDATA is text that WILL be parsed by a parser. The text will be examined by the parser for entities and markup. Tags inside the text will be treated as markup and entities will be expanded.

However, parsed character data should not contain raw &, <, or > characters; these need to be represented by the &amp;, &lt;, and &gt; entities, respectively.

What is CDATA?

CDATA means character data. CDATA is text that will NOT be parsed by a parser. Tags inside the text will NOT be treated as markup and entities will not be expanded.
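To include markup-like text without escaping every special character, it can be wrapped in a CDATA section (a minimal sketch; the element name is hypothetical):

```xml
<script><![CDATA[
  if (a < b && b != 0) { total = total + b; }
]]></script>
```

Everything between <![CDATA[ and ]]> is taken as plain character data, so the < and && need no escaping; the only sequence that cannot appear inside is ]]> itself.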

Sample XML
<?xml version="1.0"?>
<!DOCTYPE note [
<!ELEMENT note (to,from,heading,body)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT from (#PCDATA)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT body (#PCDATA)>
]>
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend</body>
</note>


In the above example XML file, the string starting from <!DOCTYPE note [ up to ]> is the DTD (Document Type Definition).


What is DTD?

DTD (Document Type Definition) is a set of rules or grammar that we define to construct our own XML rules (also called a "vocabulary"). In other words, a DTD provides the rules that define the elements and structure of our new language.

This is comparable to defining table structures in Oracle for a new system: just as we define the columns of a table, their datatypes, and whether each column allows NULL values, the DTD defines the structure of the XML document.

A DTD can be declared inline inside an XML document (as in the previous example), or as an external reference (as in the example below).

Example of external DTD:
<?xml version="1.0"?>
<!DOCTYPE note SYSTEM "note.dtd">
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Some data</body>
</note>

The contents of note.dtd file is as below:
<!ELEMENT note (to,from,heading,body)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT from (#PCDATA)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT body (#PCDATA)>

Why use a DTD?

With a DTD, each of your XML files can carry a description of its own format.

With a DTD, independent groups of people can agree to use a standard DTD for interchanging data.

Your application can use a standard DTD to verify that the data you receive from the outside world is valid. You can also use a DTD to verify your own data.

Interval datatypes in Oracle

Starting with Oracle 9i Release 1, there are datatypes available in Oracle that can store time intervals. The intervals can be specified in years, months, days, hours, minutes, and seconds. The new datatypes were introduced to comply with the SQL-99 standard. Until Oracle 9i, programmers had to convert an interval into some seconds/milliseconds representation and store it in a NUMBER column.

Why do we use the INTERVAL datatypes at all, when the DATE datatype can already store date and time information? The answer is that a DATE stores one specific point in time, whereas an INTERVAL stores a span of time.

In Oracle 9i, the time interval datatypes INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND were added along with several other datatypes to deal more precisely with points in time. The TIMESTAMP, TIMESTAMP WITH TIME ZONE, and TIMESTAMP WITH LOCAL TIME ZONE datatypes all express time to fractions of a second, and the last two account for geographical location as well.

You can use the interval datatypes in both SQL and PL/SQL. They are specified the same way:

INTERVAL YEAR [(year_precision)] TO MONTH

INTERVAL DAY [(day_precision)] TO SECOND [(fractional_seconds_precision)]

There are defaults for the precision values: two digits for the year and day, and six digits for fractions of seconds. Interval literals are expressed by the word INTERVAL, followed by an expression in single quotes, and words that interpret the expression. YEAR TO MONTH interval literals use a hyphen (-) between the year and month. DAY TO SECOND interval literals use a space between the number of days and time.

For example, this is a time interval of two years and six months:

INTERVAL '2-6' YEAR TO MONTH

This covers three days, 12 hours, 30 minutes, and 6.7 seconds:

INTERVAL '3 12:30:06.7' DAY TO SECOND (1)

Intervals can be positive or negative. They can be added to or subtracted from the various TIMESTAMP datatypes; the result is a new TIMESTAMP. They can be added or subtracted from each other as well, resulting in a new interval.
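As a quick sketch of this arithmetic (runnable from SQL*Plus; DUAL is the standard one-row dummy table), adding a DAY TO SECOND interval literal to a timestamp literal yields a new timestamp:

```sql
SELECT TIMESTAMP '2007-08-23 14:00:00'
       + INTERVAL '3 12:30:06.7' DAY TO SECOND AS end_time
FROM dual;
```

The result is 27-AUG-07 02.30.06.700000000 AM (display format depends on NLS settings): 23-AUG 2:00 PM plus three days is 26-AUG 2:00 PM, and adding 12 hours, 30 minutes, 6.7 seconds crosses midnight into 27-AUG.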

The following example shows how to create a table to record the time interval, for instance for a test.

CREATE TABLE test
(test_id NUMBER(9),
test_desc VARCHAR2(80),
test_start TIMESTAMP,
test_duration INTERVAL DAY(1) TO SECOND(4)
);

Table created.

INSERT INTO test
VALUES
(1, 'MATHEMATICS TEST',
 '23-AUG-2007 02:00:00 PM',
 INTERVAL '0 3:00:0' DAY(0) TO SECOND(0)
);

1 row created.

SELECT * FROM test;

TEST_ID  TEST_DESC         TEST_START                    TEST_DURATION
-------  ----------------  ----------------------------  -------------
      1  MATHEMATICS TEST  23-AUG-07 02.00.00.000000 PM  +0 03:00:00


Now we can compute the ending time like below:

SELECT test_id, test_start,
       test_start + test_duration AS test_end
FROM test;

TEST_ID  TEST_START                       TEST_END
-------  -------------------------------  -------------------------------
      1  23-AUG-07 02.00.00.000000000 PM  23-AUG-07 05.00.00.000000000 PM

Unfortunately, the TO_CHAR function doesn't have any format models that map to the individual pieces of the interval datatypes. Instead, you can use the EXTRACT function to pull out the pieces and combine them. The syntax is as below:

EXTRACT(timepart FROM interval_expression)
An example of using the EXTRACT function follows:

SELECT EXTRACT(DAY FROM test_duration)    || ' days, '  ||
       EXTRACT(HOUR FROM test_duration)   || ' hours, ' ||
       EXTRACT(MINUTE FROM test_duration) || ' minutes' AS duration
FROM test;

DURATION
--------------------------
0 days, 3 hours, 0 minutes

First, the number of days is extracted from the test_duration column and the literal ' days, ' is concatenated to it. The same is done for the hours and minutes portions of the test's duration.

Disabling a Command for a User

Let's say we have a situation where there are 100 tables in a schema, and the schema owner is not supposed to use DELETE on any of those tables.

Instead of getting into the mess of grants and revokes, we can use one of the facilities provided by Oracle.

There is a table called PRODUCT_PROFILE, and it can be created (if not present) by executing the PUPBLD.SQL script.

The PRODUCT_PROFILE table is owned by SYSTEM and has the following structure:

Name         Null?       Type
------------------------------------

PRODUCT      NOT NULL    VARCHAR2(30)
USERID                   VARCHAR2(30)
ATTRIBUTE                VARCHAR2(240)
SCOPE                    VARCHAR2(240)
NUMERIC_VALUE            NUMBER(15,2)
CHAR_VALUE               VARCHAR2(240)
DATE_VALUE               DATE
LONG_VALUE               LONG


To disable a command for a user, insert a row into the PRODUCT_PROFILE table. You should normally log in as SYSTEM, and your INSERT statement should look like this:

INSERT INTO product_profile (product, userid, attribute, char_value) VALUES ('SQL*Plus','USERNAME','COMMAND_NAME','DISABLED');

Example:

INSERT INTO product_profile (product, userid, attribute, char_value) VALUES('SQL*Plus','SCOTT','DELETE','DISABLED');

Now, SCOTT cannot use the DELETE command on any table from SQL*Plus (the restriction applies only to SQL*Plus, not to other tools). If you want to allow him to use DELETE again, simply log in as SYSTEM and remove the corresponding row from product_profile.

Example: DELETE FROM product_profile WHERE userid = 'SCOTT' AND attribute = 'DELETE';

 

Distinguish SYS and SYS as SYSDBA

Have you ever wanted to check, after connecting to the database as SYS, whether you are connected as the regular user SYS or as SYS AS SYSDBA? The information is available in the view V$SESSION_CONNECT_INFO. The following are the fields available in this view:

 

Name                       Type
------------------------   ------------------
SID                        NUMBER
AUTHENTICATION_TYPE        VARCHAR2(26)
OSUSER                     VARCHAR2(30)
NETWORK_SERVICE_BANNER     VARCHAR2(4000)

 

 

You can execute the following query to know the connection type:

SQL> SELECT sid, authentication_type, osuser
     FROM v$session_connect_info
     WHERE sid IN (SELECT sid FROM v$session WHERE username = USER);

SID    AUTHENTI      OSUSER
----   --------      --------------
4      OS            Administrator
5      DATABASE      Administrator


If the authentication type is DATABASE, you logged in as the regular SYS user, but if the type is OS, it means you logged in as SYS AS SYSDBA.

 

The following five operations on Oracle require the user to have SYSDBA privileges in order to perform the operation:

  • startup a database,
  • shutdown a database,
  • backup a database,
  • recover a database and
  • create a database

The V$PWFILE_USERS view lists all users who have been granted the SYSDBA or SYSOPER privilege. The SYSDBA privilege cannot be granted to PUBLIC.

 

Note that SYSDBA is not a role, it is a privilege. You'll find it in SYSTEM_PRIVILEGE_MAP, not in DBA_ROLES.

 

Any time someone connects AS SYSDBA, they are in fact connected as SYS. That is, if SYSDBA is granted to JOHN and John connects AS SYSDBA, SELECT USER FROM DUAL reveals that he is actually SYS.

 

SYS is also special in that it is not possible to create a trigger in the SYS schema. Also, a logon trigger is not executed when SYS connects to the database.

 

 

Index skip scan

The index skip scan is a new execution plan in Oracle9i whereby a query can bypass the leading column of a concatenated index and access the inner keys of a multi-column index.

For example, consider the following concatenated index:

create index sex_emp_id on emp (sex, emp_id);

Prior to Oracle9i, this index could only be used when both sex and emp_id were present in the SQL query, or when the sex column alone was specified.  The following query would not be able to use the concatenated index:

select emp_id from emp where emp_id = 123;

The Oracle9i skip scan execution plan allows the concatenated index to be used, even though sex is not specified in the SQL query.  This means there is often no need to create a second index on the emp_id column.  Oracle acknowledges that the index skip scan is not as fast as a direct index lookup, but states that the index skip scan is faster than a full-table scan (otherwise why would Oracle introduce such a feature).

What Oracle does not mention is that the cardinality of the leading column has a direct impact on the speed of the index skip scan.  In our example, the leading column, sex, has only two distinct values ('F' and 'M').

While Oracle does not publish the internals of the index skip scan, we can infer from the execution plan that Oracle is internally generating multiple queries, thereby satisfying the query with multiple sub-queries:

SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=5)

   0 SORT (AGGREGATE)

        1 INDEX (SKIP SCAN) OF 'SEX_EMP_ID' (NON-UNIQUE)

Internally, Oracle9i is probably generating two queries and joining the resulting Row ID lists:

select emp_name from emp where sex = 'F' and emp_id = 123

UNION

select emp_name from emp where sex = 'M' and emp_id = 123;

The implications of using the index skip scan are clear:

  • Oracle skip scan execution plan performance will decrease as the number of distinct values in the leading (high-order) column grows.   If the leading column were "state" with 50 distinct values, Oracle would be issuing 50 index probes to retrieve the result set.

 

By Donald K. Burleson

Article courtesy: http://www.praetoriate.com/oracle_tips_skip_scan.htm

 


 

 

Oracle Hints

/*+ ALL_ROWS */

Explicitly chooses the cost-based approach to optimize a statement block with a goal of best throughput (that is, minimum total resource consumption)

/*+ CHOOSE */

Causes the optimizer to choose between the rule-based approach and the cost-based approach for a SQL statement based on the presence of statistics for the tables accessed by the statement

/*+ FIRST_ROWS */

Explicitly chooses the cost-based approach to optimize a statement block with a goal of best response time (minimum resource usage to return the first row). It also encourages the optimizer to make use of an index, if one is available. There are other versions of the FIRST_ROWS hint. This hint is useful in an OLTP environment where the user cannot wait until the last row is fetched, for example in Java lookup screens. It should not be used in queries involving aggregate calculations, which must process all rows anyway.

/*+ RULE */

Explicitly chooses rule-based optimization for a statement block

/*+ AND_EQUAL(table index) */

Explicitly chooses an execution plan that uses an access path that merges the scans on several single-column indexes

/*+ CLUSTER(table) */

Explicitly chooses a cluster scan to access the specified table

/*+ FULL(table) */

Explicitly chooses a full table scan for the specified table

/*+ HASH(table) */

Explicitly chooses a hash scan to access the specified table

/*+ HASH_AJ(table) */

Transforms a NOT IN sub query into a hash anti join to access the specified table

/*+ HASH_SJ (table) */

Transforms a correlated EXISTS sub query into a hash semi-join to access the specified table

/*+ INDEX(table index) */

Explicitly chooses an index scan for the specified table

/*+ INDEX_ASC(table index) */

Explicitly chooses an ascending-range index scan for the specified table

/*+ INDEX_COMBINE(table index) */

If no indexes are given as arguments for the INDEX_COMBINE hint, the optimizer uses whatever Boolean combination of bitmap indexes has the best cost estimate. If particular indexes are given as arguments, the optimizer tries to use some Boolean combination of those particular bitmap indexes.

/*+ INDEX_DESC(table index) */

Explicitly chooses a descending-range index scan for the specified table

/*+ INDEX_FFS(table index) */

Causes a fast full index scan to be performed rather than a full table scan

/*+ MERGE_AJ (table) */

Transforms a NOT IN sub query into a merge anti-join to access the specified table

/*+ MERGE_SJ (table) */

Transforms a correlated EXISTS sub query into a merge semi-join to access the specified table

/*+ ROWID(table) */

Explicitly chooses a table scan by ROWID for the specified table

/*+ USE_CONCAT */

Forces combined OR conditions in the WHERE clause of a query to be transformed into a compound query using the

UNION ALL set operator

/*+ ORDERED */

Causes Oracle to join tables in the order in which they appear in the FROM clause

/*+ STAR */

Forces the largest table to be joined last, using a nested-loops join on its index

/*+ DRIVING_SITE (table) */

Forces query execution to be done at a different site from that selected by Oracle

/*+ USE_HASH (table) */

Causes Oracle to join each specified table with another row source with a hash join

/*+ USE_MERGE (table) */

Causes Oracle to join each specified table with another row source with a sort-merge join

/*+ USE_NL (table) */

Causes Oracle to join each specified table to another row source with a nested-loops join using the specified table as the inner table

/*+ APPEND */ , /*+ NOAPPEND */

Specifies that data is simply appended (or not) to a table; existing free space is not used. Use these hints only following the INSERT keyword.

/*+ NOPARALLEL(table) */

Disables parallel scanning of a table, even if the table was created with a PARALLEL clause

/*+ PARALLEL(table, instances) */

This allows you to specify the desired number of concurrent slave processes that can be used for the operation. DELETE, INSERT, and UPDATE operations are considered for parallelization only if the session is in a PARALLEL DML enabled mode. (Use ALTER SESSION PARALLEL DML to enter this mode.)

/*+ PARALLEL_INDEX */

Allows you to parallelize fast full index scan for partitioned and non-partitioned indexes that have the PARALLEL attribute

/*+ NOPARALLEL_INDEX */

Overrides a PARALLEL attribute setting on an index

/*+ CACHE */

Specifies that the blocks retrieved for the table in the hint are placed at the most recently used end of the LRU list in the buffer cache when a full table scan is performed

/*+ NOCACHE */

Specifies that the blocks retrieved for this table are placed at the least recently used end of the LRU list in the buffer cache when a full table scan is performed

/*+ MERGE (table) */

Causes Oracle to evaluate complex views or sub queries before the surrounding query

/*+ NO_MERGE (table) */

Causes Oracle not to merge mergeable views

/*+ PUSH_JOIN_PRED (table) */

Causes the optimizer to evaluate, on a cost basis, whether or not to push individual join predicates into the view

/*+ NO_PUSH_JOIN_PRED (table) */

Prevents pushing of a join predicate into the view

/*+ PUSH_SUBQ */

Causes non merged sub queries to be evaluated at the earliest possible place in the execution plan

/*+ STAR_TRANSFORMATION */

Makes the optimizer use the best plan in which the transformation has been used.

Bind variables - The key to application performance

If you've been developing applications on Oracle for a while, you've no doubt come across the concept of "bind variables". Bind variables are one of those Oracle concepts that is key to application performance.

To understand bind variables, consider an application that generates thousands of SELECT statements against a table; for example:

SELECT fname, lname, pcode FROM cust WHERE id = 674;
SELECT fname, lname, pcode FROM cust WHERE id = 234;
SELECT fname, lname, pcode FROM cust WHERE id = 332;

Each time the query is submitted, Oracle first checks the shared pool to see whether this statement has been submitted before. If it has, the execution plan that this statement previously used is retrieved, and the SQL is executed. If the statement cannot be found in the shared pool, Oracle has to go through the process of parsing the statement, working out the various execution paths and coming up with an optimal access plan before it can be executed. This process is known as a "hard parse", and for OLTP applications it can actually take longer to carry out than the DML instruction itself.

When looking for a matching statement in the shared pool, only statements that exactly match the text of a previous statement are considered; so, if every SQL statement you submit is unique (in that the predicate changes each time, from id = 674 to id = 234 and so on) then you'll never get a match, and every statement you submit will need to be hard parsed. Hard parsing is very CPU intensive, and involves obtaining latches on key shared memory areas; whilst this might not affect a single program running against a small set of data, it can bring a multi-user system to its knees if hundreds of copies of the program are trying to hard parse statements at the same time.

The extra bonus with this problem is that contention caused by hard parsing is pretty much immune to measures such as increasing available memory, numbers of processors and so on, as hard parsing statements is one thing Oracle can't do concurrently with many other operations, and it's a problem that often only comes to light when trying to scale up a development system from a single user working on subset of records to many hundreds of users working on a full data set.

The way to get Oracle to reuse the execution plans for these statements is to use bind variables. Bind variables are "substitution" variables that are used in place of literals (such as 674, 234, 332) and that have the effect of sending exactly the same SQL text to Oracle every time the query is executed. For example, in our application, we would just submit

SELECT fname, lname, pcode FROM cust WHERE id = :cust_no;

and this time we would be able to reuse the execution plan every time, reducing the latch activity in the SGA, and therefore the total CPU activity, which has the effect of allowing our application to scale up to many users on a large dataset.

Bind Variables in SQL*Plus

In SQL*Plus you can use bind variables as follows:

SQL> variable deptno number
SQL> exec :deptno := 10
SQL> select * from emp where deptno = :deptno;

What we've done to the SELECT statement is take the literal value out of it and replace it with a placeholder (our bind variable), with SQL*Plus passing the value of the bind variable to Oracle when the statement is processed. This part is fairly straightforward: you declare a bind variable in SQL*Plus, then reference it in the SELECT statement.

Bind Variables in PL/SQL

Taking PL/SQL first of all, the good news is that PL/SQL itself takes care of most of the issues to do with bind variables, to the point where most code that you write already uses bind variables without you knowing. Take, for example, the following bit of PL/SQL:

create or replace procedure dsal(p_empno in number)
as
begin
update emp
set sal=sal*2
where empno = p_empno;
commit;
end;
/

Now you might be thinking that you've got to replace the p_empno with a bind variable. However, the good news is that every reference to a PL/SQL variable is in fact a bind variable.

Dynamic SQL

In fact, the only time you need to consciously decide to use bind variables when working with PL/SQL is when using Dynamic SQL.

Dynamic SQL allows you to execute a string containing SQL using the EXECUTE IMMEDIATE command. For example, the following procedure would always require a hard parse each time it is submitted:

create or replace procedure dsal(p_empno in number)
as
begin
execute immediate
'update emp set sal = sal*2 where empno = '||p_empno;
commit;
end;
/

The way to use bind variables instead is to change the EXECUTE IMMEDIATE command as follows:

create or replace procedure dsal(p_empno in number)
as
begin
execute immediate
'update emp set
sal = sal*2 where empno = :x' using p_empno;
commit;
end;
/

And that's all there is to it. One thing to bear in mind, though, is that you can't substitute actual object names (tables, views, columns etc.) with bind variables - you can only substitute literals. If the object name is generated at runtime, you'll still need to concatenate those parts into the string, and the SQL will only match statements already in the shared pool when the same object name comes up. However, whenever you're using dynamic SQL to build up the predicate part of a statement, use bind variables instead and you'll dramatically reduce the amount of latch contention going on.
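To illustrate the distinction (a sketch; dsal_any, p_table, and p_empno are hypothetical names), a procedure that takes the table name at runtime must concatenate the name into the statement, but can still bind the predicate value:

```sql
create or replace procedure dsal_any
  (p_table in varchar2, p_empno in number)
as
begin
  execute immediate
    -- the object name must be concatenated (it cannot be a bind variable);
    -- dbms_assert.simple_sql_name guards against injection through it
    'update ' || dbms_assert.simple_sql_name(p_table) ||
    -- the literal value is replaced by the bind variable :x
    ' set sal = sal*2 where empno = :x'
    using p_empno;
  commit;
end;
/
```

This way only one statement text per table name goes into the shared pool, instead of one per employee number.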

The Performance Killer

Just to give you an idea of how huge a difference this can make performance-wise, you only need to run a very small test:

Here is the Performance Killer ....

SQL> alter system flush shared_pool;
SQL> set serveroutput on;

declare
type rc is ref cursor;
l_rc rc;
l_dummy all_objects.object_name%type;
l_start number default dbms_utility.get_time;
begin
for i in 1 .. 1000
loop
open l_rc for
'select object_name
from all_objects
where object_id = ' || i;
fetch l_rc into l_dummy;
close l_rc;
-- dbms_output.put_line(l_dummy);
end loop;
dbms_output.put_line
(round((dbms_utility.get_time-l_start)/100, 2) ||
' Seconds...' );
end;
/
101.71 Seconds...

... and here is the Performance Winner:

declare
type rc is ref cursor;
l_rc rc;
l_dummy all_objects.object_name%type;
l_start number default dbms_utility.get_time;
begin
for i in 1 .. 1000
loop
open l_rc for
'select object_name
from all_objects
where object_id = :x'
using i;
fetch l_rc into l_dummy;
close l_rc;
-- dbms_output.put_line(l_dummy);
end loop;
dbms_output.put_line
(round((dbms_utility.get_time-l_start)/100, 2) ||
' Seconds...' );
end;
/
1.9 Seconds...

That is pretty dramatic. Not only does the bind-variable version execute much faster (in the first version we spent more time PARSING our queries than actually EXECUTING them!), it will also let more users use your system simultaneously.

Bind Variables in VB, Java and other applications

The next question, though, is what about VB, Java, and other applications that fire SQL queries against an Oracle database? How do these use bind variables? Do you have to split your SQL into two statements, one to set the bind variable and one for the statement itself?

In fact, the answer to this is actually quite simple. When you put together an SQL statement using Java, or VB, or whatever, you usually use an API for accessing the database; ADO in the case of VB, JDBC in the case of Java. All of these APIs have built-in support for bind variables, and it's just a case of using this support rather than just concatenating a string yourself and submitting it to the database.

For example, Java has PreparedStatement, which allows the use of bind variables, and Statement, which uses the string concatenation approach. If you use the method that supports bind variables, the API itself passes the bind variable value to Oracle at runtime, and you just submit your SQL statement as normal. There's no need to separately pass the bind variable value to Oracle, and actually no additional work on your part. Support for bind variables isn't just limited to Oracle - it's common to other RDBMS platforms such as Microsoft SQL Server, so there's no excuse for not using them just because they might be an Oracle-only feature.

Conclusion

Lastly, it's worth bearing in mind that there are some instances where bind variables are probably not appropriate. These are usually cases where, instead of your query being executed many times a second (as with OLTP systems), your query actually takes several seconds, minutes, or hours to execute - a situation you get in decision support and data warehousing. In this instance, the time taken to hard parse the query is only a small proportion of the total execution time, and the benefit of avoiding the hard parse is probably outweighed by the information you withhold from the query optimizer: by substituting the actual predicate value with a bind variable, you remove the optimizer's ability to compare your value with the data distribution in the column, which might make it opt for a full table scan or an index when this isn't appropriate. Oracle 9i helps deal with this through a feature known as bind variable peeking, which allows Oracle to look at the value behind a bind variable when choosing the best execution plan.

Another potential drawback with bind variables and data warehousing queries is that the use of bind variables disallows the potential for star transformations, taking away this powerful option for efficiently joining fact and dimension tables in a star schema.


Information retrieved from www.akadia.com

Oracle 11g New Features

Oracle 11g PL/SQL New Features

o PL/SQL "continue" keyword - This allows a C-like (or PowerBuilder-like) continue inside a loop, skipping the rest of the current iteration and bypassing any "else" Boolean conditions. A nasty PL/SQL GOTO statement is no longer required to skip ahead within a loop.

begin
for i in 1..3 loop
dbms_output.put_line('i='||to_char(i));
if ( i = 2 ) then
continue;
end if;
dbms_output.put_line('Only if i is not equal to 2');
end loop;
end;

o Disabled state for PL/SQL - Another 11g new feature is a "disabled" state for PL/SQL (as opposed to "enabled" and "invalid" in dba_objects).

o Easy PL/SQL compiling - Native Compilation no longer requires a C compiler to compile your PL/SQL. Your code goes directly to a shared library.

o Improved PL/SQL stored procedure invalidation mechanism - A new 11g feature is fine-grained dependency tracking, reducing the number of objects that become invalid as a result of DDL. This means that when you add a column to a table, or a cursor to a package spec, you don't invalidate objects that are dependent on them - possibly the coolest feature of the release.

o Scalable PL/SQL - The next scalable execution feature is automatic creation of "native" PL/SQL (and Java code), with just one parameter for each type with an "on/off" value. This apparently provides a 100% performance boost for pure PL/SQL code, and a 10%-30% performance boost for code containing SQL.

o Enhanced PL/SQL warnings - The 11g PL/SQL compiler will issue a warning for a "when others" with no raise.

o Stored Procedure named notation - Named notation is now supported when calling a stored procedure from SQL.
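For instance (f is a hypothetical stored function with parameters p1 and p2), 11g accepts named notation inside a SQL statement:

```sql
SELECT f(p1 => 10, p2 => 20) FROM dual;
```

Prior to 11g, such a call from SQL had to use positional notation, f(10, 20).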

o New datatype: SIMPLE_INTEGER - Always NOT NULL, wraps instead of overflowing, and is faster than PLS_INTEGER.
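A small sketch of SIMPLE_INTEGER's wrap-around behaviour (11g; run with SERVEROUTPUT ON to see the result):

```sql
declare
  v simple_integer := 2147483647;  -- must be initialized; implicitly NOT NULL
begin
  v := v + 1;  -- wraps to -2147483648 instead of raising a numeric overflow
  dbms_output.put_line(v);
end;
/
```

Skipping the overflow check is part of why SIMPLE_INTEGER outperforms PLS_INTEGER, especially under native compilation.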

o Specify trigger firing order - the new FOLLOWS clause lets you declare that one trigger fires after another trigger on the same table.

Oracle 11g SQL New Features

o New "pivot" SQL clause - The new "pivot" SQL clause will allow quick rollup, similar to an MS-Excel pivot table, where you can display multiple rows on one column with SQL. MS SQL Server 2005 also introduced a pivot clause.
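A hedged sketch against the classic SCOTT.EMP demo table (sum of salaries per department, with jobs pivoted into columns):

```sql
SELECT *
FROM   (SELECT deptno, job, sal FROM emp)
PIVOT  (SUM(sal) FOR job IN ('CLERK' AS clerk, 'ANALYST' AS analyst));
```

Each value listed in the IN clause becomes a column of the result set.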

o The /*+ result_cache */ SQL hint - This requests that the final result data be cached, not just the intermediate data blocks that were accessed to compute the query results. You can cache SQL and PL/SQL results for super-fast subsequent retrieval. The "result cache" ties into the "scalable execution" concept. There are three areas of the result cache:

  • The SQL query result cache - This is an area of SGA memory for storing query results.
  • The PL/SQL function result cache - This result cache can store the results from a PL/SQL function call.
  • The OCI client result cache - This cache retains results from OCI calls, both for SQL queries or PL/SQL functions.
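For example (assuming a demo table emp on an 11g instance with the result cache enabled), the hint is placed like any other:

```sql
SELECT /*+ result_cache */ deptno, AVG(sal)
FROM   emp
GROUP  BY deptno;
```

The first execution computes and caches the result set; subsequent identical queries are served from the result cache until DML on emp invalidates it.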

o Scalable Execution - This 11g feature consists of a number of features, the first of which is query results caching; this feature automatically caches the results of an SQL query as opposed to the data blocks normally cached by the buffer cache, and works both client (OCI) and server side - this was described as "buffer cache taken to the next level". The DBA sets the size of the results cache and turns the feature on at a table level with the command "alter table DEPT cache results", the per-process cache is shared across multiple session and at the client level, is available with all 11g OCI-based clients.

o XML SQL queries - Oracle11g will support query mechanisms for XML including XQuery and SQL XML, emerging standards for querying XML data stored inside tables.

o SQL Replay - Similar to the Database Replay feature (which captures and replays the total database workload), but this captures and applies only the SQL workload.

o Improved optimizer statistics collection speed - Oracle 11g has improved the dbms_stats performance, allowing for an order of magnitude faster CBO statistics creation. Oracle 11g has also separated-out the "gather" and "publish" operations, allowing CBO statistics to be retained for later use. Also, Oracle 11g introduces multi-column statistics to give the CBO the ability to more accurately select rows when the WHERE clause contains multi-column conditions or joins.

o SQL execution Plan Management - Oracle 11g SQL will allow you to fix execution plans (explain plan) for specific statements, regardless of statistics or database version changes.

o Dynamic SQL - DBMS_SQL is here to stay. It's faster and is being enhanced. DBMS_SQL and NDS can now accept CLOBs (no more 32K limit on NDS). A ref cursor can become a DBMS_SQL cursor and vice versa. DBMS_SQL now supports user-defined types and bulk operations.

o Fully Automatic SQL Tuning - The 10g automatic tuning advisor makes tuning suggestions in the form of SQL profiles that will improve performance. You can tell 11g to automatically apply SQL profiles for statements where the suggested profile gives 3-times better performance than the existing statement. The performance comparisons are done by a new administrative task during a user-specified maintenance window.

o Improved SQL Access Advisor - The 11g SQL Access Advisor gives partitioning advice, including advice on the new interval partitioning. Interval partitioning is an automated version of range partitioning, where new equally-sized partitions are automatically created when needed. Both range and interval partitions can exist for a single table, and range partitioned tables can be converted to interval partitioned tables.