How To
Feb 26

Here’s a fun bit of Oracle database nuance that you may not see unless you are doing a lot of work with LOBs (Large Object datatypes: BLOB, CLOB, NCLOB, and BFILE). Indexes on LOB columns show up in the DBA_INDEXES and USER_INDEXES views; however, if you check the ALL_INDEXES view, they are simply not listed. This may become an issue if you create all of your objects under one user that is then locked, while another user is granted privileges to select from those objects (a good, secure design practice that prevents tables, views, and other objects from being modified).
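You can see the discrepancy for yourself by comparing the dictionary views (a sketch; the table, column, and owner names here are hypothetical):

```sql
-- Creating a table with a CLOB column implicitly creates a LOB index (SYS_IL...)
CREATE TABLE doc_store (id NUMBER, body CLOB);

-- As the owning user, the LOB index appears here (INDEX_TYPE = 'LOB'):
SELECT index_name, index_type
FROM   USER_INDEXES
WHERE  table_name = 'DOC_STORE';

-- But the same query against ALL_INDEXES omits the LOB index entirely:
SELECT index_name, index_type
FROM   ALL_INDEXES
WHERE  table_name = 'DOC_STORE';
```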

Oracle does this because LOB indexes cannot be modified, so there is really little need to address them at all. If you try to update the LOB index, you’ll run into an ORA-14326; if you try to alter or drop the index, expect an ORA-22864.

Also, LOBs as implemented in Oracle 10gR2 and prior are now referred to as BasicFiles. This is because in 11g, Oracle has made some stout improvements to LOB handling via the new SecureFiles. SecureFiles offer some incredible benefits over the old LOBs in terms of security and efficiency. The only thing I can’t figure out is why Oracle doesn’t seem to list this as a huge reason to upgrade!
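In 11g, opting into the new storage is a one-clause change at table creation. A sketch (the table and column names are hypothetical, and the SecureFile compression/deduplication options shown require the appropriate licensing):

```sql
-- BasicFile: the pre-11g LOB storage, still the default in early 11g
CREATE TABLE docs_basic (id NUMBER, body CLOB)
  LOB (body) STORE AS BASICFILE;

-- SecureFile: the 11g storage, here with optional space-saving features
CREATE TABLE docs_secure (id NUMBER, body CLOB)
  LOB (body) STORE AS SECUREFILE (COMPRESS DEDUPLICATE);
```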

Oct 14

Safe and Secure

The Oracle October Critical Patch Update (CPU) was released yesterday. It includes 15 security fixes for the core RDBMS, including a fix for a vulnerability allowing database access without authentication.

Despite the high impact, that particular vulnerability only scored a 4.0 in the Common Vulnerability Scoring System v2.0 (CVSS v2). The vulnerability allows a successful buffer overflow in the Apache Connector component (mod_weblogic) of a WebLogic Server to be exploited to run external code. It affects a broad spectrum of WebLogic Server versions (6.1-10.0MP1); however, Oracle had addressed this, along with providing guidance for a workaround, back in July with CVE-2008-3257.

Another point of interest: a new post-installation script, catbundle.sql, is available with Critical Patch Updates for Oracle Database 11.1.0.6 and 10.2.0.4 on Microsoft Windows. This script replaces catcpu.sql and catcpu_rollback.sql. For more information, see OracleMetaLink Note 605795.1, Introduction to catbundle.sql. For UNIX/Linux Critical Patch Updates, catbundle.sql was released with CPUJUL2008.

Remember, Oracle CPUs are cumulative, so even if you have never applied one to your system, you can catch up on all the bug and security fixes by applying the latest CPU!

The next scheduled CPU will be released on 13 January 2009.

Jun 08

Various organizations provide security guidelines to aid us in hardening our databases. They are an EXCELLENT tool to this end, and I cannot recommend enough reading and research in this regard. However, blindly implementing the guidelines is not a security panacea! It takes a knowledgeable DBA teaming with insightful IA personnel to determine whether the guidelines make sense in your situation. I’ll illustrate this with an example:

How to Follow DoD/DISA Database Security Guidelines to Make Your Oracle Database Vulnerable to a Denial of Service (DoS) Attack

Necessary items:

Step 1. Apply the latest STIG guidance to your database, especially item DG0073 in Section 3.3.10: “The DBA will configure the DBMS to lock database accounts after 3 consecutive unsuccessful connection attempts within a specified period of time.”

Step 2. Mine Pete Finnigan’s list of common and default Oracle userids, and put them in a text file. Feel free to add any common database connection userids for popular applications.

Step 3. Use a command to iteratively feed the user IDs from your file to sqlplus with a bogus password (on MS Windows):

C:\>for /f "tokens=*" %I in (test.txt) do @sqlplus -L %I/NotThePassword@SID

Step 4. Repeat. After the 3rd incorrect password, the database account will be locked, and the application cannot connect until the account is unlocked by a privileged user.
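Once the loop has run, the damage is easy to confirm from a privileged session (a sketch; requires access to DBA_USERS, and “appuser” below is a hypothetical account name):

```sql
-- Show the accounts locked out by the repeated failed logins
SELECT username, account_status, lock_date
FROM   DBA_USERS
WHERE  account_status LIKE '%LOCKED%';

-- A privileged user must intervene before the application can reconnect:
-- ALTER USER appuser ACCOUNT UNLOCK;
```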

Granted, if all the other items listed in the STIG are implemented, this will be extremely difficult (if not impossible) to accomplish from the outside. But it is easily accomplished by anyone with access to the Oracle client (or JDBC, ODBC, etc.) on any of the application servers, providing opportunity to an insider who doesn’t necessarily have database access.

This isn’t a specific Oracle issue, or an OS issue; the guidance is general enough to cover any DB/OS combination. The DISA/DoD STIG isn’t solely to blame either; the same guidance appears in several other published security benchmarks.

The larger issue, effectively securing your database, requires a bit of a paradigm shift, a willingness to focus on the goal (rather than the method), and a lot of teamwork and trust between DBAs and IA professionals.

The 3rd Alternative

When creating your roles, consider the automated application accounts in your database and do not set a limit on unsuccessful login attempts for those accounts. To keep brute-force and dictionary attacks at bay, you’ll need to ensure the application’s database account passwords are long and strong. Putting your database behind a stout firewall is also key; isolating your database server from the internet altogether is really the best idea. Using the guidelines in Section 3.1.4.1 of the Database STIG that are appropriate for your environment will further harden your installation.
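One way to implement this split is with dedicated profiles (a sketch; the profile and account names are hypothetical):

```sql
-- Interactive users keep the STIG-mandated lockout...
CREATE PROFILE interactive_users LIMIT
  FAILED_LOGIN_ATTEMPTS 3;

-- ...while automated application accounts are exempted from it
CREATE PROFILE app_accounts LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED;

-- Assign the hypothetical application account to the exempt profile
ALTER USER appuser PROFILE app_accounts;
```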

After that, your best defense is malicious activity detection via auditing:

select  USERNAME, count(USERNAME)
from DBA_AUDIT_TRAIL
where RETURNCODE=1017
and TIMESTAMP > CURRENT_DATE - interval '3' day
group by USERNAME;

If you set up auditing, and use something like the SQL above to provide the raw instrumentation data for your database, you’ll be able to trend and perform velocity checks to sound the alarm when trouble may be in progress.
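To support those velocity checks, the same audit data can be bucketed by hour, along the lines of the query above (a sketch; requires SELECT on DBA_AUDIT_TRAIL):

```sql
-- Failed logins (ORA-01017, return code 1017) per user per hour, last 3 days
SELECT username,
       to_char(timestamp, 'YYYY-MM-DD HH24') AS hour_bucket,
       count(*) AS failed_logins
FROM   DBA_AUDIT_TRAIL
WHERE  returncode = 1017
AND    timestamp > CURRENT_DATE - interval '3' day
GROUP BY username, to_char(timestamp, 'YYYY-MM-DD HH24')
ORDER BY hour_bucket;
```

A sudden spike in any one hour bucket is the alarm bell you are looking for.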

And that strengthens our security posture.

Brian Fedorko

May 31

The goldfish always grows to the size of the bowl. If you’re a DBA goldfish, you’ll probably script out repetitive tasks until the bowl gets bigger. And then they feed you more databases from various business areas, and you grow some more. How is that for a strained analogy?

Any Oracle DBA has been there: after your initial herd of databases is stable, happy, and well-fed, people notice. And then you reap the true reward of good work: more work! Unfortunately, this is usually when someone fishes up a stove-piped database that has become very important internally. You know, the one put together by someone who left 2 years ago. No Critical Patch Updates, one or two control files, and the telling 5MB redo logs that switch every 10 seconds. But you gladly take it in anyway…

A bit of work, and now the database is chugging along like a champ! Tuned, optimized, mirrored, multiplexed, in ARCHIVELOG mode, and integrated into your RMAN backup scripting.

Everything seems fine, but is it?

Surely you could easily and successfully recover if you had to this very minute, right?

Maybe.

Is logging of all operations enforced on this database, or at least in the user’s tablespace? Use the following to find out:

select FORCE_LOGGING from V$DATABASE;
select TABLESPACE_NAME, FORCE_LOGGING from DBA_TABLESPACES;

If forced logging is not or cannot be applied to the database, there is a risk that NOLOGGING operations may have been performed on the database’s objects. Common operations that run under NOLOGGING are index builds, index rebuilds, direct-load inserts, direct loads with SQL*Loader, and partition manipulation. Once a NOLOGGING operation has been performed, we cannot roll forward past that change in that tablespace! If it is a tablespace containing only indexes, we’ll suffer downtime while the indexes rebuild and bring the database back to a reasonable level of performance. If the tablespace contains objects holding data, we risk losing all transactions made since the NOLOGGING operation.
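If those checks come back NO, forced logging can be switched on at either scope (requires the appropriate ALTER privileges; the tablespace name below is just an example):

```sql
-- Enforce logging for the entire database...
ALTER DATABASE FORCE LOGGING;

-- ...or for a single tablespace
ALTER TABLESPACE users FORCE LOGGING;
```

Note that forcing logging will slow down bulk operations that previously relied on NOLOGGING, so coordinate with whoever runs those loads.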

A good first line of defense is to include REPORT UNRECOVERABLE in your RMAN backup scripts and stay on top of the logs, or test for the expected return and pipe the results to your dashboard or monitoring software, such as Big Brother by Quest. This will catch all manner of problems before they become critical:

RMAN> report unrecoverable;
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
4    full or incremental     X:\ORADATA\DATA01\TESTDB\TEST01.DBF
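The “test for the expected return” idea can be sketched in a few lines of shell. Here the RMAN output is simulated with a saved report file; in practice you would capture the real output of `report unrecoverable;` instead (the log path and exact output format are assumptions):

```shell
# Simulated "report unrecoverable" output, standing in for a captured RMAN run
cat > /tmp/rman_report.txt <<'EOF'
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
4    full or incremental     X:\ORADATA\DATA01\TESTDB\TEST01.DBF
EOF

# Any datafile line below the header means a fresh backup is needed; flag it
if grep -qi '\.dbf' /tmp/rman_report.txt; then
    echo "WARNING: datafiles need backup after unrecoverable operations"
fi
```

Wire the WARNING line (or a non-zero exit code) into whatever alerting your monitoring software understands.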

Here’s a quick script I wrote to find when the last NOLOGGING operation occurred (Note: Output has been edited for page fit):

set LINESIZE 120
set PAGESIZE 40
DEFINE LINE1= 'LAST NON-LOGGED OPERATIONS'
DEFINE LINE2= 'Check the Change Numbers and times against your backups to determine'
DEFINE LINE3= 'if non-logged operations have occurred'
TTITLE Skip 3 CENTER LINE1 SKIP 2 LINE2 SKIP 1 LINE3 SKIP 2
BTITLE CENTER "BFBlog.TheDatabaseShop.com"
COLUMN DBF_NAME FORMAT A40 WORD_WRAPPED
COLUMN TS_NAME FORMAT A15 WORD_WRAPPED
select d.NAME as DBF_NAME,
       t.NAME as TS_NAME,
       d.UNRECOVERABLE_CHANGE# as NOLOG_CHNG#,
       to_char(d.UNRECOVERABLE_TIME, 'Dy DD-Mon-YYYY HH24:MI:SS') as NOLOG_TIME
from V$DATAFILE d join V$TABLESPACE t
  on d.TS# = t.TS#
order by t.NAME;

Output:

LAST NON-LOGGED OPERATIONS

Check the Change Numbers and times against your backups to determine
if non-logged operations have occurred

DBF_NAME             TS_NAME   NOLOG_CHNG# NOLOG_TIME
-------------------- --------- ----------- ------------------------
J:\...\SYSTEM01.DBF  SYSTEM    0
J:\...\UNDOTBS01.DBF UNDOTBS1  0
J:\...\SYSAUX01.DBF  SYSAUX    0
J:\...\TEST01.DBF    TEST      6271597     Tue 02-Jun-2008 18:30:46
J:\...\USERS01.DBF   USERS     0

After that, just make sure your last Level 0 backup is newer than the times listed, and be aware that point-in-time recovery cannot target the window between a NOLOGGING operation and the Level 0 backup taken after it.

Be sure to set up lines of communication and coordination in the future, so the risk of not being able to recover the entire database to the last transaction is reduced.

Brian Fedorko