Portions from Oracle Best Practices (BP) - Summarized
§ No amount of 'great' coding can alleviate the adverse effects of poor design on performance. One may gain perhaps 10% or 20% through various coding strategies, but that is about all. To really make a dramatic difference in performance, the poor design itself ultimately has to be rectified. Hence the importance of good design.
§ Of course, given an already good design, performance can be still improved through various strategies.
§ A design may be good from the angle of implementing business logic, and there may be several such alternative designs; however, they may not be good enough from a performance angle, given the data volumes or the number of simultaneous users.
§ In other words, an otherwise good design may not be good enough from a performance angle.
§ Hence the importance of considering, at design time, not merely the business specs and business logic, but also the business volumes and peak requirements.
§ A design that does not deliver the required performance, even if good otherwise, may have to be changed to meet the practical usage requirements.
§ Always normalize your tables - right up to the 5th normal form.
§ Denormalize only data-warehousing type, read-only tables, comprising aggregates.
§ Do not create non-normalized structure under the impression that it would result in better performance.
§ No non-archival or non-report or non-aggregate table should be denormalized.
§ Do not store more than one value in a column, whether as comma-separated values or as a combination code.
§ For example, do not store values in a column like say 'BFMRY' where each letter (B, F, M, R, Y) stands for a separate attribute; and it is stored as one word to make do with one column rather than five, and to save space!! Separate attributes should have separate columns.
§ Do not store, say, in the 'QUALIFICATION' column comma-separated values like 'B.Tech.(IT), M.B.A.(Fin), C.A.', if the business requirement includes running queries such as 'How many are MBAs?', 'Who all are CAs?', and so on.
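A sketch of the normalized alternative for the qualifications example above (the table and column names are hypothetical):

```sql
-- One row per qualification, instead of a comma-separated
-- QUALIFICATION column on a hypothetical EMPLOYEE table.
CREATE TABLE employee_qualification (
  emp_id        NUMBER(10)   NOT NULL REFERENCES employee (emp_id),
  qualification VARCHAR2(30) NOT NULL,
  CONSTRAINT pk_emp_qual PRIMARY KEY (emp_id, qualification)
);

-- "How many are MBAs?" now becomes a simple, indexable query:
SELECT COUNT(*)
  FROM employee_qualification
 WHERE qualification = 'M.B.A.(Fin)';
```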
§ Do not finalize the database design and table structure until all the relevant business specifications are clear.
§ There is a temptation to retain columns not needed or introduce a few dummy columns, lest they be needed later.
§ Even if needed later, they may have to be renamed or their datatypes may be found to be unsuitable.
§ Hence, create/introduce a column only when needed; do not keep a reservoir of them.
§ Exercise due diligence in assigning the correct datatypes and length to the columns. As part of the business specifications, seek clarifications on the domain of each column, and also document the same.
§ One must know the range of values a numeric column would hold, the possible dates that a date column should accommodate, whether dates should be stored after excluding/truncating the time component, the maximum length required for VARCHAR2 columns as per the domain requirements, and so on.
§ Define appropriate datatypes for the columns, including their length. Where feasible, use integers for keys, rather than varchar2, for better performance.
§ Do not casually assign length to columns.
- For example, defining a numeric column as just NUMBER, without specifying precision and scale - not because one has determined that this is how it should be, but because it is convenient: it captures whatever one enters, and there is no need to trouble oneself with the details.
- Do not specify, say, VARCHAR2(500) where VARCHAR2(30) is sufficient; or assign, say, VARCHAR2(3000) to a column not because that is the determined maximum requirement for the data, but because the length has not been specified and one would rather not question it - just be generous and provide a large enough size.
- Using NUMBER or NUMBER(20) where NUMBER(1) would do.
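A sketch of deliberate datatype and length choices, as opposed to generic NUMBER and oversized VARCHAR2 columns (the table and column names are hypothetical):

```sql
CREATE TABLE product (
  product_id   NUMBER(10)   NOT NULL,  -- integer surrogate key, not VARCHAR2
  status_code  NUMBER(1)    NOT NULL,  -- domain is 0-9, so NUMBER(1) suffices
  name         VARCHAR2(30) NOT NULL,  -- 30 determined from the domain, not a guess
  unit_price   NUMBER(10,2) NOT NULL,  -- precision and scale stated explicitly
  created_dt   DATE         NOT NULL,  -- time component truncated on insert
  CONSTRAINT pk_product PRIMARY KEY (product_id)
);
```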
§ Create indexes ONLY if warranted. Indexes degrade insert and update performance, and use up storage.
§ Indexes enhance query performance (and updates with WHERE clauses, to select the specified rows) when properly used.
§ However, if the number of rows in a table is not significant, indexing its columns is an unnecessary overhead, because Oracle can scan the whole table much faster.
§ Sometimes indexes can actually slow down queries: using indexes to join tables or to filter data may prove counterproductive where a full table scan is faster.
§ When creating indexes, also specify "COMPUTE STATISTICS" for more efficient index look-up, and "PARALLEL" for faster index creation.
§ For better management and for better performance, create all indexes in a separate tablespace.
§ For better performance, create indexes with the NOLOGGING option, where there is otherwise no issue.
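A sketch combining the options above (the index, table, column, and tablespace names are hypothetical; note that from Oracle 10g onward statistics are gathered automatically during index creation, making COMPUTE STATISTICS redundant):

```sql
CREATE INDEX idx_orders_cust_id
    ON orders (customer_id)
    TABLESPACE indx_ts        -- separate tablespace for indexes
    NOLOGGING                 -- skip redo generation during creation
    PARALLEL 4                -- build the index with 4 parallel servers
    COMPUTE STATISTICS;       -- gather statistics during creation (pre-10g)
```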
§ Always, without fail, index a foreign key column.
§ Index a column when it is frequently part of the WHERE clause, that is, when it is frequently queried on.
§ However, the rule of thumb is that one should create/use indexes only if less than about 6% of the rows in the table would be retrieved via the index. For smaller tables (say around 1000 rows) this percentage may be even lower, perhaps 1%. If many more rows than this are retrieved, a full scan may be more efficient.
§ Also index columns used for JOINs.
§ In general, if a table is mostly read-only, and new rows are only rarely added or existing values updated, then indexes that help speed up queries do not have the adverse side-effect of slowing down inserts/updates.
§ On the other hand, if the table is frequently updated, then one must think twice before creating indexes, and must confirm that, overall, the indexes would be helpful.
§ Note: For PRIMARY KEY and for UNIQUE KEY constraints, indexes are automatically created.
§ Do not index when the table is small, i.e. does not have many rows. For such tables, a full table scan is far more efficient.
§ Do not index when the thumb rule mentioned above is not met: if substantially more of the rows (say 15% or more) would be retrieved via the index, a full scan is likely more efficient.
§ Do not index when there are too many NULLs in the column and searches on the non-null values are infrequent.
§ Extra space is consumed.
§ Additional resources consumed.
§ Insert/update operations are affected - they become relatively slower, as the index has to be maintained/refreshed.
§ Note: Create indexes only if they really help speed things up, or if the column is a foreign key.
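A minimal sketch of the foreign-key rule above (the table, column, and index names are hypothetical):

```sql
-- Index the foreign key to avoid full scans of the child table
-- during joins and parent deletes, and to reduce locking issues.
CREATE TABLE order_item (
  item_id   NUMBER(10) NOT NULL PRIMARY KEY,  -- PK index created automatically
  order_id  NUMBER(10) NOT NULL REFERENCES orders (order_id)
);

CREATE INDEX idx_order_item_order_id ON order_item (order_id);
```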
§ Use EXPLAIN PLAN on the queries involving the column.
§ Examine V$SQL_PLAN, the dynamic performance view for the actual execution plan used for a given query statement.
§ What if it is found (through EXPLAIN PLAN / V$SQL_PLAN) that the desired index is not used, and it is felt that using the index would make the query faster?
§ Use a HINT naming the index, and check the performance.
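A sketch of this workflow (the table, column, and index names are hypothetical):

```sql
-- 1. Generate and display the execution plan for the query.
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = :cust;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- 2. If the desired index is not used, force it with an index hint
--    and compare the performance.
SELECT /*+ INDEX(o idx_orders_cust_id) */ *
  FROM orders o
 WHERE o.customer_id = :cust;
```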
§ First create indexes on table, and then insert data; or first insert data, then create indexes?
§ It is always faster to first insert data, and then create indexes.
§ In an existing table with both data and indexes, when inserting additional rows it may be faster to first drop the indexes, then insert the data, and then re-create the indexes. But this depends upon factors such as how many rows are already in the table and how many are to be inserted. If relatively few rows are to be inserted, there is no point in first dropping the indexes.
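A sketch of the drop-load-rebuild approach for a large batch insert (the table, staging table, and index names are hypothetical):

```sql
-- Drop the index before the bulk load...
DROP INDEX idx_orders_cust_id;

-- ...load the data (APPEND hint for a direct-path insert)...
INSERT /*+ APPEND */ INTO orders
  SELECT * FROM orders_staging;
COMMIT;

-- ...then rebuild the index afterwards.
CREATE INDEX idx_orders_cust_id ON orders (customer_id)
  NOLOGGING PARALLEL 4;
```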
§ SORT_AREA_SIZE : memory space reserved for sorting.
§ TEMPORARY TABLESPACE: whether it has sufficient empty space for sorting. Check both the specific TEMPORARY TABLESPACE assigned to the schema and the default one for the database.
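Both settings can be inspected or adjusted, for example (a sketch; the username is hypothetical, DBA_TEMP_FREE_SPACE requires 11g or later, and from 9i onward automatic PGA management via PGA_AGGREGATE_TARGET generally supersedes manual SORT_AREA_SIZE settings):

```sql
-- Enlarge the per-session sort area for a heavy sort (legacy manual setting).
ALTER SESSION SET SORT_AREA_SIZE = 10485760;  -- 10 MB

-- Check the temporary tablespace assigned to a user.
SELECT username, temporary_tablespace
  FROM dba_users
 WHERE username = 'APP_USER';

-- Check free space in temporary tablespaces (11g+).
SELECT tablespace_name, free_space
  FROM dba_temp_free_space;
```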
§ Errors are likely for a variety of reasons :
- Illegal Data :
- Wrong value of input parameters.
- Null values.
- Incorrect logic leading to illegal resultant value.
- Zero denominator : divide by zero error.
- Erroneous string manipulation logic.
- Problems in database operations : inability to connect, loss of connection midway through the procedure execution, ….
- Hardware failure : Disk or Memory Error causing a problem that has nothing to do with the code. But, the code still needs to take corrective action.
§ Expect that at some time, your code will be passed incorrect or null parameters, that your queries will return no rows or more rows than you expect.
§ Do NOT assume optimistically that “at least in this part of the code, there is no question of error occurring!”
§ Build a standard error-logging mechanism. Create a separate error-logging package, error-logging tables and error-codes, and document error-handling, to serve as a ready-reference guide for all designers/ coders/ reviewers.
§ Ease of debugging should be built into the code. It should not be necessary to locate the error by going through the time-consuming process of stepping through the code in debug-mode.
§ To readily locate the code-piece/SQL causing the error, precede each code-piece/SQL with an assignment statement, assigning a unique string or integer to a 'statement_reference' variable and recording it in the error-log, so that the error would also indicate exactly which statement caused the error.
§ Do the above, before commencing coding for business modules.
§ All procedures and functions should necessarily have an EXCEPTION block at the end, and also 'WHEN OTHERS THEN' clause in the EXCEPTION block. The block should handle logging of errors in a standard way.
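A minimal sketch of the statement-reference and standard EXCEPTION-block pattern described above (the procedure, table, and column names are hypothetical, and log_error stands in for the site's standard error-logging package):

```sql
CREATE OR REPLACE PROCEDURE process_order (p_order_id IN NUMBER) IS
  v_stmt_ref VARCHAR2(30);   -- identifies the statement being executed
  v_total    NUMBER(12,2);
BEGIN
  v_stmt_ref := 'PO-010: fetch total';
  SELECT order_total INTO v_total
    FROM orders
   WHERE order_id = p_order_id;

  v_stmt_ref := 'PO-020: update status';
  UPDATE orders SET status = 'PROCESSED'
   WHERE order_id = p_order_id;
EXCEPTION
  WHEN OTHERS THEN
    -- log_error is a hypothetical standard error-logging procedure;
    -- v_stmt_ref pinpoints exactly which statement failed.
    log_error(p_module   => 'PROCESS_ORDER',
              p_stmt_ref => v_stmt_ref,
              p_err_code => SQLCODE,
              p_err_msg  => SQLERRM);
    RAISE;   -- re-raise so callers still see the failure
END process_order;
/
```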
§ Wherever warranted :
- Check for bad data: null or illegal values in input parameters, retrieved data, or calculated data.
- Add error-checking code.
- Add exception handlers.
§ Code programs robust enough to work even if the database is not in the state you expect.
§ Destructive Testing : Test your code with different combinations of bad data to see what potential errors arise.
§ Please see (a)"Customized Timer & Error Handler" under "Sample Code & Utilities" below, and the chapter (b)Error Handling under PL/SQL.
§ Do not make exceptions on EXCEPTION handling.
§ Statements like "SELECT c1 INTO v_c1 FROM t1 WHERE c2 = v_c2" whose success is dependent upon the nature of data, may cause an exception like "NO_DATA_FOUND" when value of v_c2 does not exist in t1.
§ Such statements, where there are possibilities of EXCEPTIONs occurring, should be enclosed in their own BLOCK, with an EXCEPTION-clause to handle such exceptions.
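A sketch of such an enclosing block, using the statement from the example above (variable and table names as given there; the handler actions are illustrative):

```sql
-- Enclose the data-dependent SELECT ... INTO in its own block,
-- so NO_DATA_FOUND / TOO_MANY_ROWS are handled locally.
BEGIN
  SELECT c1 INTO v_c1 FROM t1 WHERE c2 = v_c2;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    v_c1 := NULL;          -- or an appropriate default, or log the error
  WHEN TOO_MANY_ROWS THEN
    RAISE;                 -- duplicate data: surface the problem
END;
```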
§ Please see the chapter on "PL/SQL Errors & Error Handling".