Helped an old client out with some SQL tuning issues today. They've moved some of their E-Business mod_plsql customizations over to APEX in preparation for an upgrade to R12.1.3, but they ran into some performance issues. It turned out to be a very basic problem but, with so many customers in the midst of undertaking this kind of migration, I thought it might be helpful to have a basic review of SQL processing. After all, close to 70 percent of the tuning issues I encounter with Oracle databases are caused (at least in part) by issues with SQL statements.
So, without getting too deep into the weeds here, let's look at what happens when a SQL statement is processed for the first time: hard parsing and execution.
Parsing generally consists of the following steps:
1. Check the incoming SQL statement for syntax errors
2. Validate referenced objects for existence and accessibility
3. Generate basic parse tree structure
4. Determine the best execution plan for the statement using the Cost Based Optimizer (the CBO itself is way beyond the scope of this discussion).
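If you're curious about the plan the optimizer settles on in step 4, you can ask Oracle to show you. A minimal sketch, assuming a Supplier table like the one in the examples later in this post:

```sql
-- Ask the optimizer to generate (but not run) a plan for the statement,
-- then display it with the DBMS_XPLAN package.
EXPLAIN PLAN FOR
  SELECT * FROM Supplier WHERE Supplier_ID = 654;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The output shows the access path the CBO chose (index lookup, full table scan, and so on), which is often the first thing worth checking when a statement misbehaves.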
After parsing, the statement is executed according to the plan. While that sounds pretty mundane, there is an important process from a performance perspective that takes place during execution: a cursor (a pointer to the parsed plan that can be shared and executed by multiple other SQL statements, even in multiple sessions) is created and stored in the library cache.
Now why is a cursor so important to the performance of SQL statements? Because it smooths the way for subsequent SQL statements that are identical in nature. The parsing of a statement is where all the heavy lifting is done - it's a CPU-intensive operation and a drain on database server resources. But when the database server is able to match a submitted SQL statement to a cursor in the library cache, most of the heavy lifting of the hard parse is eliminated…reusing an already-parsed plan for a subsequent execution of the same statement is called a soft parse. It's a lot like using an existing wheel design rather than reinventing the wheel.
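You can actually watch this sharing (or the lack of it) by peeking at the library cache through the v$sql view. A rough sketch - the LIKE filter is just an assumption to narrow the output to our example statement:

```sql
-- Each row in v$sql is a shared cursor in the library cache.
-- One cursor with a high EXECUTIONS count means the parsed plan is being
-- reused; hundreds of near-identical rows differing only in a literal
-- value is the signature of repeated hard parsing.
SELECT sql_text, parse_calls, executions
  FROM v$sql
 WHERE sql_text LIKE '%Supplier%';
```
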
So, while all this technical overview stuff is well and good, how can you leverage it? Well, one idea that immediately comes to mind (and one of my favorite checkpoints when tuning SQL statements) is using bind variables. A bind variable is a placeholder in a query. An example:
Coding in a hard value yields a select statement like:
Select * from Supplier where Supplier_ID = 654;
If I worked a LOOP into my query to cover a range of hard-coded values, my SQL statement would go through a hard parse each time it ran through a cycle of the loop: painfully slow, and potentially reducing my system to the speed of molasses…running uphill…in the winter.
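To make that concrete, here's a hedged PL/SQL sketch of the hard-coded approach (the Supplier_Name column and the ID range are assumptions for illustration):

```sql
-- Concatenating the literal into the statement text produces a brand-new
-- SQL string on every iteration, so each cycle forces a hard parse.
DECLARE
  l_name  Supplier.Supplier_Name%TYPE;
BEGIN
  FOR i IN 600 .. 700 LOOP
    EXECUTE IMMEDIATE
      'SELECT Supplier_Name FROM Supplier WHERE Supplier_ID = ' || TO_CHAR(i)
      INTO l_name;
  END LOOP;
END;
/
```

Every pass through the loop also litters the library cache with another one-off cursor, which hurts more than just this one session.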
If using a bind variable, I would write the Select statement as follows:
Select * from Supplier where Supplier_ID = :suppid
I'm using ":suppid" as my bind variable. Now when I use LOOP logic, I can generate a range of values for my query results. Each cycle of the loop will now leverage the cursor created during the hard parse of the first cycle of the query. Significant performance gains over the hard-coded statement.
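The same loop, rewritten as an illustrative sketch with the same assumed names, but using a bind variable:

```sql
-- The statement text never changes, so it hard parses once on the first
-- iteration; every later cycle soft parses and reuses the shared cursor.
DECLARE
  l_name  Supplier.Supplier_Name%TYPE;
BEGIN
  FOR i IN 600 .. 700 LOOP
    EXECUTE IMMEDIATE
      'SELECT Supplier_Name FROM Supplier WHERE Supplier_ID = :suppid'
      INTO l_name
      USING i;
  END LOOP;
END;
/
```

Worth noting: static SQL embedded in PL/SQL binds its PL/SQL variables automatically, so you only need the EXECUTE IMMEDIATE…USING form when the statement genuinely has to be dynamic.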
How much improvement? Your mileage will vary according to your hardware and software environment. But just for the sake of discussion… My Oracle database test bed is on a CentOS Linux VM, running on a dual-core iMac with 4 GB of RAM. If I compare the performance results of two queries similar to the examples shown above, the bind-variable version runs about 10 times faster than the hard-coded example. What that should tell you is that the statement with the hard-coded value spends much, much, much more time parsing than actually executing the query. Again, your mileage could vary significantly.
So, where is this story going? When my old client (and friend) called for a little quick help today with some performance tuning, it didn't take long for me to start checking his SQL. Want to guess the first issue I found? Yup, statements with hard-coded values. After tweaking the code to use bind variables, we picked up performance by orders of magnitude. Easy win, low-hanging fruit, just by applying a fundamental concept.
And we won't even talk today about the scalability gains picked up by reducing library cache latch contention through avoiding hard parses.
So, for all you E-Business customers with mod_plsql and other SQL-based customizations out there (and I KNOW that there are bunches of you out there…we've all done it to some extent), check your code as one of your upgrade preparation steps. Hard-coded values, full table scans…you might be amazed at the low-hanging fruit you can pick.
Got some good tuning stories? Share in the comments.