In my previous post I showed and measured the performance cost of passing parameters to and from procedure and function calls inside PL/SQL code.

In this article I’m about to reveal another bottleneck that is often forgotten and not so easy to overcome.

Suppose you need some value calculated. The formula is a straightforward mathematical calculation, but it will be used in multiple SQL statements across the system and is therefore a perfect candidate for a PL/SQL function.

I will use this sample formula:

For every odd value return 6, for every even value return 3.

```sql
CASE MOD( x, 2 ) WHEN 1 THEN 6 ELSE 3 END
```

So the PL/SQL implementing the formula is:

```sql
CREATE OR REPLACE FUNCTION get_val( val IN INTEGER )
RETURN INTEGER
IS
BEGIN
  RETURN CASE MOD( val, 2 ) WHEN 1 THEN 6 ELSE 3 END;
END;
/
```

One may either write the formula directly in each SQL statement that needs it, or call the function to keep the code DRY.

Let us compare the performance of the inline formula in a SQL statement versus the function call.

A simple script will do the trick.

```sql
set timing on

PROMPT Calculate using function call from SQL
SELECT sum( get_val( ROWNUM ) ) val
FROM   dual
CONNECT BY LEVEL < 1200000;

PROMPT Calculate using inline calculation in SQL
SELECT sum( CASE mod( ROWNUM, 2 ) WHEN 1 THEN 6 ELSE 3 END )
FROM   dual
CONNECT BY LEVEL < 1200000;
```

And the results are:

```
Calculate using function call from SQL
5399997
Elapsed: 00:00:15.562

Calculate using inline calculation in SQL
5399997
Elapsed: 00:00:01.212
```

The SELECT statement with the function call was **over 10 times slower** than the inline calculation (when running on 1.2 million rows).

As Tom Kyte explained in his answer, SQL and PL/SQL are two separate languages with two separate runtime engines, and each function call from SQL causes a context switch between them.
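One well-known way to reduce the number of context switches without giving up the function is scalar subquery caching: wrapping the call in a scalar subquery lets Oracle cache the result per distinct input value. Below is a minimal sketch applied to our `get_val` example; note that the benefit depends entirely on how often input values repeat, which is why the input is reduced to `mod(ROWNUM, 2)` here.

```sql
-- Only two distinct inputs (0 and 1) reach get_val, so the scalar
-- subquery cache lets Oracle call the function roughly twice
-- instead of ~1.2 million times.
SELECT sum( (SELECT get_val( x ) FROM dual) ) val
FROM   ( SELECT mod( ROWNUM, 2 ) x
         FROM   dual
         CONNECT BY LEVEL < 1200000 );
```

With the original query, where every ROWNUM is a distinct input, the cache would give little or no benefit.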

How would the query perform if it contained two function calls?

```sql
set timing on

PROMPT Calculate using function call from SQL
SELECT sum( get_val( ROWNUM ) )  val,
       sum( get_val( -ROWNUM ) ) val_1
FROM   dual
CONNECT BY LEVEL < 1200000;

PROMPT Calculate using inline calculation in SQL
SELECT sum( CASE mod( ROWNUM, 2 )  WHEN 1 THEN 6 ELSE 3 END ) val,
       sum( CASE mod( -ROWNUM, 2 ) WHEN 1 THEN 6 ELSE 3 END ) val_1
FROM   dual
CONNECT BY LEVEL < 1200000;
```

See the results:

```
Calculate using function call from SQL
5399997 3599997
Elapsed: 00:00:29.260

Calculate using inline calculation in SQL
5399997 5399997
Elapsed: 00:00:01.210
```

As expected, with two function calls in the SQL statement the overhead doubled: the version with function calls now runs roughly 24 times slower than the inline version (29.26 vs. 1.21 seconds).

Now that the issue has been clearly demonstrated, some questions should be answered:

– Is it always an issue?

– What can be done to maintain the function encapsulation (have the code DRY) and keep high performance?

I will give some answers to that in my next post.

I think every Oracle developer needs to remember Tom Kyte’s “slow-by-slow” mantra: http://tkyte.blogspot.ru/2006/10/slow-by-slow.html

In my humble opinion, the main problem with using PL/SQL functions in SQL is not performance (though it will of course be worse in most cases) but inconsistency: we have many different methods to improve performance, such as native compilation, the caching mechanism of deterministic functions, RESULT_CACHE, scalar subquery caching, the new PRAGMA UDF, or inline “WITH” functions… But there is only one method to make function results consistent: to use operators (and it is not always reliable).
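To make two of the 12c options listed above concrete, here is a sketch of PRAGMA UDF and an inline WITH-clause function, applied to the `get_val` example from the post. This is illustrative only; actual speedups vary by version and workload.

```sql
-- Option 1: PRAGMA UDF (12c) tells the compiler the function will be
-- called mainly from SQL, lowering the per-call context-switch cost.
CREATE OR REPLACE FUNCTION get_val( val IN INTEGER ) RETURN INTEGER IS
  PRAGMA UDF;
BEGIN
  RETURN CASE MOD( val, 2 ) WHEN 1 THEN 6 ELSE 3 END;
END;
/

-- Option 2: an inline WITH function (12c) is declared inside the
-- statement itself, avoiding the switch to a stored function.
WITH
  FUNCTION get_val( val INTEGER ) RETURN INTEGER IS
  BEGIN
    RETURN CASE MOD( val, 2 ) WHEN 1 THEN 6 ELSE 3 END;
  END;
SELECT sum( get_val( ROWNUM ) ) val
FROM   dual
CONNECT BY LEVEL < 1200000;
/
```

Both keep the code DRY-ish while trimming (but not eliminating) the context-switch overhead measured above.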

So in most cases we have to agree with Tom Kyte’s mantra and avoid using our own PL/SQL functions in queries.

Btw, a couple of months ago I wrote a post about the interplay of deterministic functions, RESULT_CACHE and operators: http://orasql.org/2014/03/31/deterministic-functions-result_cache-and-operators/
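For readers who want to experiment with the caching options mentioned in this thread, a minimal sketch of declaring the sample function as DETERMINISTIC and RESULT_CACHE (11g+). Whether either cache actually helps depends on the call pattern, and, as the linked post discusses, the two mechanisms interact in non-obvious ways.

```sql
-- DETERMINISTIC lets Oracle reuse results within a fetch;
-- RESULT_CACHE stores results in the shared server result cache
-- so they survive across statements and sessions.
CREATE OR REPLACE FUNCTION get_val( val IN INTEGER )
RETURN INTEGER
DETERMINISTIC
RESULT_CACHE
IS
BEGIN
  RETURN CASE MOD( val, 2 ) WHEN 1 THEN 6 ELSE 3 END;
END;
/
```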

I will be glad to read it.

Thanks a lot for sharing.

Hi there, interesting question you took for a blog article.

I suggest you start using a more scientific method of measuring and comparing performance, such as autotrace. Elapsed time really is NOT a good indicator, as it may not be reproducible over multiple runs and across different systems. Resource-consumption tracing such as autotrace provides more insight into what is actually being done. http://www.oracle.com/technetwork/issue-archive/2008/08-jan/o18asktom-090158.html
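For reference, autotrace can be enabled in SQL*Plus as sketched below; the exact statistics displayed depend on your version and privileges (the PLUSTRACE role, or SELECT on the underlying V$ views, is required).

```sql
-- Show execution statistics (consistent gets, sorts, etc.)
-- without printing the 1.2M-row result set.
SET AUTOTRACE TRACEONLY STATISTICS

SELECT sum( get_val( ROWNUM ) ) val
FROM   dual
CONNECT BY LEVEL < 1200000;

SET AUTOTRACE OFF
```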

Thanks for your comment.

When it comes to detailed analysis of PL/SQL code, I usually use DBMS_PROFILER to trace the PL/SQL performance and EXPLAIN PLAN for the SQL.

The test cases I have provided, however, are simplified to the extreme, to put the focus on the one thing I want the reader to notice.

That is one of the reasons why I use simple timing to prove my case.

Another reason is that in real life we actually care about response and processing time when we talk about performance; tools like EXPLAIN PLAN and DBMS_PROFILER just help us understand what is going on in the system.