Oracle
Apr 22

Once upon a year ago, when Sun Microsystems acquired MySQL, many bloggers theorized that Big Red, which had a long-running, close partnership with Sun, was pulling some strings in the deal. The people who endorsed the idea that Oracle couldn’t put the kibosh on MySQL without a PR headache (but Sun could) were dismissed as crazy conspiracy theorists. My only surprise so far is the courteous lack of ‘I told you so’ popping up in the expert blogs.

So where do we go from here? Oracle doesn’t have any experience in hardware. If they keep most of Sun’s staffing and continue to fund their innovation efforts, we may continue to see excellent products from them. But will they retain Sun’s stellar brand identity? Will they abandon the SPARC chip architecture and adopt x86? The best solution here looks more like a tightly coupled partnership than a merging of the two companies.

From Oracle’s whitepaper on the decision, MySQL’s fate seems a little less promising:

MySQL will be an addition to Oracle’s existing suite of database products, which already includes Oracle Database 11g, TimesTen, Berkeley DB open source database, and the open source transactional storage engine, InnoDB.

This doesn’t sound like Oracle is poised to grow MySQL and allow it to flourish. At this time MySQL 6.0 is in public alpha, and has added the Falcon transactional engine as an advanced alternative to InnoDB and SolidDB. Looking at the architecture, this engine brings industrial-grade caching, recovery, and long-transaction support to MySQL. Couple this with the real-deal disaster recovery 6.0 is bringing to the table, and you have a free multi-platform database that rivals everything an Oracle database can offer outside of Enterprise Edition, and soundly trounces the latest Microsoft SQL Server offering.

But will Oracle put the resources toward MySQL, to allow it to be all it can be?  Personally, I don’t see it happening, but I hope I am very, very wrong.

Sid

Apr 15

The quarterly Oracle CPU hit the streets on Tuesday, 14 April, and patches 16 vulnerabilities in the Oracle RDBMS, including a remotely exploitable vulnerability in the listener that requires no authentication. Oddly, this only scored a 5.0 on the CVSS v2.0 scale. An 8.5-scored vulnerability in Resource Manager was also patched. It has been speculated that this vulnerability could be exploited by SQL injection, but the high score seems odd. I’ll keep looking for details on this item.

Feb 10

Safe and Secure

Reading through the online blogs, I came across a discussion of whether Oracle’s Critical Product Updates are worth the ‘trouble’ of applying. Of course, I’m always very interested to see what DBAs have to say with regard to Information Assurance and database security in general - and I wasn’t disappointed. Quite a few DBAs had some great, common-sense guidelines for approaching Oracle’s Critical Product Updates: a thorough system of analyzing risk and impact, coupled with thorough testing. It may take a little discipline to keep that process going, but many of the approaches were thoughtful and solid.

Surprisingly (to me, at least), I had to take issue with Don Burleson. His advice:

“You DON’T have to apply patches, and sometimes patches can CAUSE unplanned issues.  The best practices are always to conduct a full stress in a TEST environment before applying the patches in production… I wait for major releases and re-install at-once, and I only look at patches to fix specific issues.”  - Courtesy of TechTarget

I have personally applied dozens of CPUs on (literally) hundreds of systems of all flavors. I have yet to see a problem on our production servers that was caused by a CPU. Of course, I have seen a few problems weeded out through thorough and careful testing beforehand.

The problem with simply not assessing and applying these CPUs due to FUD (Fear, Uncertainty, and Doubt) comes when things go badly, or when you need to meet SOX, HIPAA, PCI DSS, FDCC, FISMA, etc. compliance requirements.

Even on a closed system, if an insider is able to view/modify/delete sensitive information by exploiting a vulnerability fixed by a CPU, your company will be in a very unenviable legal position, as ignoring security patches is not adequately performing ‘due care’. Also, when operating under various compliance standards or on a DoD system, there is rarely an option to avoid applying a CPU, unless you can document a specific problem the application will induce and your plan to mitigate or eliminate the risk. This is where a strong, well-documented process would be an excellent solution.

If a DBA has a stringent standard of apathy towards Oracle CPUs, it may be an indicator of systemic security problems in their data stores as well, and may warrant some pointed questions:

  • Are you auditing, at the very least, privilege escalation and access to sensitive objects?
  • Are you making sure that glogin and the system executables are not being tampered with?
  • Do you have a benchmark of defined roles and privileges documented?
  • Are you logging who is connecting and accessing the data, when and from where?

If the answers to these are ‘no’, you wouldn’t be aware of a security breach even if one had happened!
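For a rough sketch of what that baseline looks like on Oracle (this assumes the AUDIT_TRAIL parameter is set to DB, and the audited table below is purely illustrative):

-- Record connections and privilege-related activity in the database audit trail
ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;   -- takes effect after a restart

AUDIT CREATE SESSION;       -- who connects, when, and from where
AUDIT ALTER USER;
AUDIT GRANT ANY PRIVILEGE;
AUDIT GRANT ANY ROLE;

-- Object auditing on a sensitive table (hr.employees is just an example)
AUDIT SELECT, UPDATE, DELETE ON hr.employees BY ACCESS;

-- Review recent connection activity
select USERNAME, USERHOST, TIMESTAMP, ACTION_NAME, RETURNCODE
from DBA_AUDIT_TRAIL
where TIMESTAMP > CURRENT_DATE - 1
order by TIMESTAMP;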

Turning the key of your database and letting it go can be a very perilous practice despite what the remote database administration service vendors may tell you.  If data is the lifeblood of your company, it really should be maintained AND protected as such.  No company wants the bad press that follows a data theft.

Brian Fedorko

Jan 17

Language

Have you ever run into this situation: You are happily scripting out or designing a new capability, performing maintenance, or providing support. Perhaps you are eating lunch, or are home in bed, soundly sleeping at 3:00 AM.

And then it happens.

Something broke somewhere, and it is database-related. No, it is not something you’ve built, maintained, or even seen - It is something from another business area, and their help is not available.

When you arrive, you are greeted by the ever-present group of concerned stake-holders, and a terminal. Will you staunch the flow of money they may be hemorrhaging? Will you bring back the data they may have lost? Will you be able to restore their system to service?

What you don’t want to do is flounder because they don’t have your favorite management software, your preferred shell, or your favorite OS.

Learn to speak the native languages!

There are 3 skill sets every good data storage professional should keep current at all times, outside of their core RDBMS interface languages:

  • Bourne shell / bash
  • vi (the Unix/Linux text editor)
  • CMD shell (Windows)

I guarantee that any Linux system you log into will have bash and vi. I personally prefer the Korn shell for navigation and the C shell for scripting - but the Bourne shell is on every system. Same with vi - except I really prefer vi to anything else.

This means no matter what Linux or Unix server you are presented with, you can become effective immediately.

I’ve included the Microsoft Windows command shell because it fits in with a parallel reason for learning the native languages - you can proactively increase survivability in your data storage and management systems by using the tools and utilities you KNOW will be available, even if libraries are unavailable and interpreters and frameworks are lost or broken.

If the operating system can boot, you can be sure the Bourne shell or CMD shell is available for use.

Knowing that, you should consider scripting the most vital system functions using the available shell, and initiating them with the operating system’s integral scheduling tool (crontab/Scheduled Tasks). This way you can ensure that if the OS is up, your vital scripts will be executed!

And who doesn’t want that?

Dec 20

Bad Things CAN Happen

I was conversing with a colleague of mine who was working with some Oracle DBAs who were deciding to abandon Oracle’s Recovery Manager and replace it with a 3rd party disk-imaging ‘backup’ solution. Not augment RMAN, but replace it entirely.

I was really surprised. Really, REALLY surprised!

After mulling over all the concerns, I put together some items you may want to consider before heading down this path:

  • Are you operating in ARCHIVELOG mode? If you are not, YOU WILL LOSE DATA.
  • If you are in ARCHIVELOG mode – What happens to the old archivelogs? Deleting the old ones before the next RMAN level zero renders the ones you have useless (except for log mining).
  • If you are in NOARCHIVELOG mode, how far back can you troubleshoot unauthorized data modification or application error? How quickly do your redo logs switch? – Multiply that by the number of groups you have, and you have your answer.
  • How do you address block corruption (logical AND physical) without RMAN? With a RMAN-based DR solution, block recovery takes ONE command. No data loss, no downtime. If you take a snapshot using 3rd party tools – Your backups now have that same block corruption. Where do you go from there?
  • If disk space is an issue, do you use the AS COMPRESSED BACKUPSET argument to reduce backup size? Do you pack the archivelogs into daily level ones? I’ve found ways to optimize our Oracle RMAN backups so we can cover 2 weeks with the same disk space that used to cover 2 days.
  • How do you monitor for block corruption? (Waiting for something to break is not valid instrumentation) I check for block corruption automatically, every day, by using RMAN and building it into my daily database backup scripts.

NOTE: Logical corruption happens. Even on a SAN, even on a VM. VMs can crash, power can be lost. I’ve experienced 2 incidents of block corruption in the recent quarter. Of course, since I built the disaster recovery system around RMAN, we caught the corruption the next day and fixed it with ZERO downtime and ZERO data loss.
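For reference, the RMAN side of that is only a few commands - something along these lines (the datafile and block numbers are illustrative):

RMAN> backup as compressed backupset incremental level 0 check logical
      database plus archivelog;

RMAN> blockrecover datafile 4 block 123;

SQL>  select * from V$DATABASE_BLOCK_CORRUPTION;

The CHECK LOGICAL pass flags corrupt blocks into V$DATABASE_BLOCK_CORRUPTION as part of the nightly backup, and BLOCKRECOVER repairs an individual block while the rest of the database stays online.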

Point-in-Time-Recovery (PITR) is enabled by RMAN - ALL disk imaging backup solutions lack this capability. If you are relying solely on a snapshot backup, you will lose all the data since the last snapshot.

Without tablespace PITR, you have to roll ALL the data in the database back. If you have multiple instances and are using a server snapshot with no RMAN, ALL the databases on that server will lose data! This is usually not acceptable.
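With RMAN, tablespace point-in-time recovery is scoped to just the affected tablespace - a sketch of what that looks like (the tablespace name, time, and auxiliary destination are made up for illustration):

RMAN> recover tablespace USERS
      until time "to_date('20-DEC-2008 09:00:00','DD-MON-YYYY HH24:MI:SS')"
      auxiliary destination 'D:\oracle\auxdest';

Only the affected tablespace is rolled back; the rest of the database keeps its committed data.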

Lastly, how much testing have you done with the snapshot solution? REAL TESTING. Have you taken a snapshot during continuous data change? We tried snap-shotting the database server using 3 different pieces of software. NONE of them consistently took a consistent, usable snapshot of the database. Sometimes one did - if we were lucky, and the DB was quiet. Is it acceptable to only sometimes get your client’s/company’s data restored?

Remember, the key is a multi-layered DR strategy (where disk imaging and snap-shotting IN CONJUNCTION with RMAN is incredibly effective!) and continuous REAL WORLD testing.

As a parting shot, in case you were wondering: the ‘DBAs’ had decided to rely solely on a disk imaging backup solution not because they felt it had more to offer, or because it was tested to be more effective, but because they felt RMAN was difficult to use…

Brian Fedorko

Nov 15

“GUIs are for look’n, the Command Line is for Doin’” – That is some of the best mentoring advice I have received or could give as a data storage professional, and it is true to this day!

GUIs (Graphical User Interfaces) have really made enterprise-class databases much more accessible, and have made viewing data and corralling vital stats wonderfully pleasant and simple. MySQL Enterprise Monitor and Oracle Enterprise Manager include some excellent, time-saving ‘advisors’ that simplify tuning tasks as well. They have come a long way, and their utility is undeniable.

But as data storage professionals, we are expected to be able to restore and return the system to operational capacity when things go badly. Usually, this is where we need the skills to ‘pop open the hood’.

Just as a good drummer behind their kit should be able to do with their feet whatever can be done with their hands, a good DBA should be able to perform any action from the GUI at the command line as well. This is critically important because:

  • The GUI contains a subset of the CLI capabilities, utilities, and tools
  • The GUI is a separate piece of software, often with additional dependencies, that can break, while leaving the database up and available.

Remember, of all the duties a DBA is asked to perform, there is one that we must do correctly and effectively EVERY time - data recovery. Data loss is absolutely unacceptable. So you must honestly ask yourself: if the database goes down, the GUI is unusable, and the data must be recovered, can I do it at the command line? If not, developing that skill set should be your immediate focus. If you cannot recover your company’s or client’s data because you couldn’t ‘point n’ click‘ your way through the process, your company can lose a fortune – and it will, most likely, cost you your job!
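As a point of reference, a bare-bones full restore and recovery with nothing but RMAN at the command line looks roughly like this (assuming usable backups and a properly configured environment):

RMAN> connect target /
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;

If you can walk through that sequence - and its datafile- and tablespace-level variants - without a GUI in sight, you are in good shape.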

Oracle Enterprise Manager is a great example. It is extremely useful, but in my experience, extremely delicate. It cannot withstand being cloned or moved to a different server, and it can break with any ungraceful handling of its repository, inside the database. Chances are, if the database is in dire straits, EM will not be there.

Will you be ready?

Brian Fedorko

Oct 22

Finally!!!  Oracle has published an Early Adopter Release of the Oracle SQL Developer Data Modeling package!

Right now it is a standalone product, but they are planning to integrate this into their excellent, platform independent, and affordable (read as: FREE!) SQL Developer tool.

I’m a big fan of SQL Developer, and it is readily adopted by clients due to price and functionality.  With no cost associated, I’ve seen anyone from developers to testers to integration groups use this tool to great effect.  But for the longest time, designers and architects were left with mostly 3rd party choices for creating data model design and structure.

I’m currently installing and testing this product, and will publish results – Good or Bad.

More to come!

Oct 14

Safe and Secure

The Oracle October Critical Product Update (CPU) was released yesterday - it includes 15 security fixes for the core RDBMS, including a fix for a vulnerability allowing DB access without authentication.

Despite the high impact, that particular vulnerability only scored a 4.0 in the Common Vulnerability Scoring System v2.0 (CVSS v2). The vulnerability allows a buffer overflow in the Apache Connector component (mod_weblogic) of a WebLogic Server to be exploited to run external code. This vulnerability affects a broad spectrum of WebLogic Server versions (6.1-10.0MP1); however, Oracle had addressed this, along with providing guidance for a workaround, back in July with CVE-2008-3257.

Another point of interest - A new post-installation script, catbundle.sql, is available with Critical Patch Updates for Oracle Database 11.1.0.6 and 10.2.0.4 on Microsoft Windows. This script replaces catcpu.sql and catcpu_rollback.sql. For more information, see OracleMetaLink Note# 605795.1, Introduction to catbundle.sql. For UNIX/LINUX Critical Patch Updates, catbundle.sql was released with CPUJUL2008.

Remember, Oracle CPUs are cumulative, so even if you have never applied one to your system, you can catch up on all the bug and security fixes entirely with the application of the latest CPU!

Next scheduled CPU will be released on 13 January 2009

Jul 29

If you haven’t explored server virtualization, there is no better time! VMWare has announced that ESXi is now free! (CHEAP!)

Q. ESXi only supports a single VM - what is the advantage of this?

A. Portability & Flexibility. Since the VM isn’t tied to the hardware, it is ultimately transportable. Have a test server and a production server? You can copy the REAL production VM to the test server. If you’re developing, you can copy the VM, archive it for Configuration Management purposes, and promote the test environment to production with little risk of surprises due to differences in configuration!

You can get more out of less hardware. For development, your test hardware can be an Oracle 11g database server running RHEL on Monday, a JBoss App Server on SUSE on Tuesday, an Oracle RAC instance on Oracle Enterprise Linux for emergency scalability on Wednesday, and an impromptu Backup Domain Controller on Windows Server 2008 on Thursday. The same server becomes the hardware you need, when you need it!

Best of all, the VMs you create on ESXi are completely compatible with any of the VM servers VMWare offers – port it right into an ESX Server BladeServer or the like when you are ready.

Q. What about Oracle Licensing on VMs?

A. Oracle does not officially support their products on any VM server except Oracle VM – their licensed version of Xen. However, I’ve been running Oracle on ESX on a wide variety of hardware implementations and have yet to experience one problem. Licensing a virtualized Oracle server can be expensive on a consolidated VM server, as you must pay for every socket whether you are using it for the Oracle server VM or not – but on an ESXi hypervisor with a single-VM setup, the cost is the same as if you put it on the physical server!

Q. What about Microsoft’s Hyper-V – That is free too!

A. Microsoft’s Hyper-V isn’t as ‘free’ or ‘Hyper’ as they would like you to believe. ESXi is free – it sits on the hardware, requiring no foundational OS. MS Hyper-V requires you to purchase and install Server 2008 to run Hyper-V ($1000-$6000 depending on the flavor). Plus, you get all of Microsoft Server 2008’s disk space, memory, and processor overhead!

Then there is the matter of Hyper-V’s supported OS list: it supports Windows, Windows, Windows, and SUSE.

Hyper-V disk space requirement: 10GB MINIMUM. ESXi: 32MB

Hyper-V max processors per host: 4. ESXi max processors per host: 8

Etc…

In short, if you haven’t tried virtualizing your servers, now is a great time (it is always a great time to save your client/company/self equipment funds!). Now you have nothing to lose!

Brian Fedorko

Jul 15

Safe and Secure

It is time once again to eliminate bugs and increase the security posture of our Oracle databases. The Advisories and Risk Matrices can be found on Oracle Technology Network. The full availability information is found at Oracle Metalink under DocID# 579278.1

Points of Interest:

  • This CPU contains 11 security fixes for the Oracle Enterprise Database Server
  • None of the security holes for the Enterprise DBMS are remotely exploitable without authentication
  • Oracle Application Express requires no security fixes (This product continues to impress me)
  • ALL Windows platforms running Oracle Enterprise DB Server v10.2.0.3 will have to wait until 22-July-2008 for their CPU
  • Support for Solaris 32-bit and Oracle Enterprise DB Server v10.2.0.2 seems to have been pulled! There’s no CPU for these, and none planned for the October 2008 Critical Product Update as per Oracle Metalink DocID# 579278.1.

Don’t forget to read the patch notes, test thoroughly, and check to make sure you’re using the proper version of OPatch!

Next CPU: 14-October-2008

Brian Fedorko

Jun 16

Paper Cash Money!

Oracle’s latest price list was published today!

Oracle Technology Global Price List

There are increases scattered throughout the various licensing options, most notably:

Oracle Enterprise Edition

  • $7500 increase in the base per-processor licensing
  • $150 increase in per-user licensing

Oracle Standard Edition

  • $2500 increase in the base per-processor licensing
  • $50 increase in per-user licensing

Oracle Standard Edition One

  • $805 increase in the base per-processor licensing
  • $41 increase in per-user licensing

RAC

  • $300 increase in the base per-processor licensing
  • $60 increase in per-user licensing

Active Data Guard

  • $800 increase in the base per-processor licensing
  • $20 increase in per-user licensing

Advanced Security, Partitioning, Advanced Compression, Real Application Testing, Label Security

  • $1500 increase in the base per-processor licensing
  • $30 increase in per-user licensing

Diagnostics Pack, Tuning Pack, Change Management Pack, Configuration Management Pack, Provisioning Pack for Database

  • $500 increase in the base per-processor licensing
  • $10 increase in per-user licensing

Internet Application Server Enterprise Edition

  • $5000 increase in the base per-processor licensing
  • $100 increase in per-user licensing

Enterprise Single Sign-On Suite

  • $10 increase in per-user licensing

This is certainly not an exhaustive list and I’m sure that there are many, many other changes. Rounding up your Enterprise’s licensing and product use information for acquisition planning purposes may be a prudent and proactive task for this month!

Brian Fedorko

Jun 15

I have always enjoyed the teaching and wisdom of Dr. Stephen Covey (especially if he does not litigate for derivative works!). He has a real knack for capturing introspective how-to lessons detailing the simplicity of living a good and productive life.

In homage to Dr. Covey’s amazing work, I’d like to narrow the scope, but offer lessons with a similar impact for database administrators – expanding on the not-so-obviously obvious to illuminate the good path to success.

Habit One - Multiplex and Mirror Everything!

Mirror, Mirror...

Multiplex and mirror all of your critical files – Is there a reason not to? Today’s SANs have gone a long way to provide for redundancy and reduce I/O contention, but they are definitely not an excuse to abandon this basic key to database survivability!

The SAN Trap: SANs are often used as a panacea for data availability. However, have you taken a close look at your SAN to determine how robust and survivable it really is?

  • How many LUNS are your files spread across?
  • What RAID level are you using and how many simultaneous disk failures will it take to make your files irretrievable? (Anything under 20% incurs quite a bit of risk).
  • Do you have redundant controllers?
  • Redundant switches?

Even the most survivable storage setup is still vulnerable to logical corruption, and the vastly more common, human error (“I just deleted all the .LOG files to save some space!”).

Conversely, for very slim installs, you may only have a single disk or LUN – while there is greatly increased risk in such a situation, reality dictates that sometimes the circumstances are unavoidable. Until you can grow your storage footprint, multiplexing and mirroring (across directories) becomes even more critical.

Mirroring and multiplexing your control files, redo logs, archived redo logs, and RMAN backups will significantly increase the likelihood of a successful recovery, should the need arise (See Habit 5 – Preparation). The procedure is extremely easy, and the files generally take up very little space, if properly scaled and tuned to your needs.

Here are some best practices for you to tailor to your needs:

  • Control Files: Multiplex two to three times and mirror over two to three disks/LUNs/directories
  • Redo Logs: Three to four members per group with two to three groups spread across disks/LUNs/directories
  • Archived Redo Logs: Mandatory mirroring between at least 2 disks/LUNs/directories
  • RMAN Backup Files: Mirror between at least two disks/LUNs/directories
  • SPFILE: Periodically create a PFILE from the SPFILE and archive it, along with your backups and control file snapshots
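To make those concrete, the SQL involved is short - a sketch with purely illustrative file names and paths:

-- Mirror the control files (copy the existing control file to the new locations while the database is down):
alter system set control_files =
  'D:\oradata\TESTDB\control01.ctl',
  'E:\oradata\TESTDB\control02.ctl' scope=spfile;

-- Add a second member to an existing redo log group:
alter database add logfile member 'E:\oradata\TESTDB\redo01b.log' to group 1;

-- Mirror the archived redo logs across two destinations:
alter system set log_archive_dest_1 = 'LOCATION=D:\oraarch\TESTDB MANDATORY';
alter system set log_archive_dest_2 = 'LOCATION=E:\oraarch\TESTDB';

-- Keep a text copy of the server parameter file with your backups:
create pfile = 'D:\orabackup\TESTDB\init_TESTDB.ora' from spfile;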

A database administrator worth their salt NEVER loses data, and the best way to maintain this is to avoid any position where data loss is likely. Mirroring and multiplexing are among our most effective tools to reduce that risk.

Brian Fedorko

Jun 08

Lock

Various organizations provide security guidelines to aid us in hardening our databases. They are an EXCELLENT tool to this end, and I cannot recommend enough reading and research in this regard. However, blindly implementing the guidelines is not a security panacea!!! It takes a knowledgeable DBA teaming with insightful IA personnel to determine if the guidelines make sense in your situation. I’ll illustrate this with an example:

How to Follow DoD/DISA Database Security Guidelines to Make Your Oracle Database Vulnerable to a Denial of Service (DoS) Attack

Necessary items:

Step 1. Apply the latest STIG guidance to your database – especially Item DG0073 in Section 3.3.10 – “The DBA will configure the DBMS to lock database accounts after 3 consecutive unsuccessful connection attempts within a specified period of time.”

Step 2. Mine Pete Finnigan’s list of common and default Oracle userids, and put them in a text file. Feel free to add any common database connection userids for popular applications.

Step 3. Use a command to iteratively feed the user ids from your file to sqlplus with a bogus password (MS Windows):

C:\>for /f "tokens=*" %I in (test.txt) do @sqlplus -L %I/NotThePassword@SID

Step 4. Repeat. After the 3rd incorrect password, the database account will be locked, and the application cannot connect until the account is unlocked by a privileged user.

Granted, if all the other items listed in the STIG are implemented, this will be extremely difficult (if not impossible) to accomplish from the outside, but it is easily accomplished by anyone who has access to the Oracle client (or JDBC, ODBC etc.) on any of the application servers – providing opportunity to an insider who doesn’t necessarily have database access.

This isn’t a specific Oracle issue, or an OS issue - the guidance is general enough to cover any DB/OS combination. The DISA/DoD STIG isn’t solely to blame either; the same guidance appears in other published hardening guides as well.

The larger issue, effectively securing your database, requires a bit of a paradigm shift, a willingness to focus on the goal (rather than the method), and a lot of teamwork and trust between DBAs and IA professionals.

The 3rd Alternative

When creating your roles, consider the automated application users in your database and do not set a limit on unsuccessful login attempts for those accounts. To keep brute force and dictionary attacks at bay, you’ll need to ensure the application’s database account passwords are long and strong. Putting your database behind a stout firewall is also key - isolating your database server from the internet altogether is really the best idea. Using the guidelines that are appropriate for your environment in 3.1.4.1 of the Database STIGs will further harden your installation.
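In Oracle, a dedicated profile for those automated accounts does the trick - a sketch (the profile and user names are hypothetical):

create profile APP_SVC_PROFILE limit
  failed_login_attempts unlimited   -- no lockout, so a flood of bad passwords cannot lock the account
  password_life_time    unlimited;  -- rotate long, strong passwords via change control instead

alter user APP_SVC_USER profile APP_SVC_PROFILE;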

After that, your best defense is malicious activity detection via auditing:

-- Count failed logins per user over the last 3 days
-- (RETURNCODE 1017 = ORA-01017: invalid username/password)
select  USERNAME, count(USERNAME)
from DBA_AUDIT_TRAIL
where RETURNCODE=1017
and TIMESTAMP > CURRENT_DATE - interval '3' day
group by USERNAME;

If you set up auditing, and use something like the SQL above to provide the raw instrumentation data for your database, you’ll be able to trend and perform velocity checks to sound the alarm when trouble may be in progress.

And that strengthens our security posture.

Brian Fedorko

May 31

Like responsibility, it grows!

The goldfish always grows to the size of the bowl. If you’re a DBA goldfish, you’ll probably script out repetitive tasks until the bowl gets bigger. And then they feed you more databases from various business areas, and you grow some more. How is that for a strained analogy?

Any Oracle DBA has been there - after your initial herd of databases is stable, happy, and well-fed, people notice. And then you reap the true reward of good work: more work! Unfortunately, this is usually when someone fishes out a stove-piped database that has become very important internally. You know, the one put together by someone who left 2 years ago. No Critical Product Updates, one or two control files, and the telling 5MB redo logs that switch every 10 seconds. But you gladly take it in anyway…

A bit of work and now the database is chugging along like a champ! Tuned, optimized, mirrored, multiplexed, in ARCHIVELOG mode, and integrated into your RMAN backup scripting.

Everything seems fine, but is it?

Surely you could easily and successfully recover if you had to this very minute, right?

Maybe.

Is logging of all operations enforced on this database, or at least in the user’s tablespace? Use the following to find out:

select FORCE_LOGGING from V$DATABASE;
select TABLESPACE_NAME, FORCE_LOGGING from DBA_TABLESPACES;

If forced logging is not or cannot be applied to the database, there is a risk that NOLOGGING operations may have been performed on the database’s objects. Common operations that are run under NOLOGGING are index builds, index rebuilds, direct load inserts, direct loads with SQL*Loader, and partition manipulation. Once a NOLOGGING operation has been performed, we cannot roll forward past that change in that tablespace! If it is a tablespace only containing indexes, we’ll suffer downtime while the indexes rebuild and bring the database back to a reasonable level of performance. If the tablespace contains objects containing data, the risk of losing the transactions since the NOLOGGING operation grows.
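Where the recovery guarantee outweighs the speed benefit of NOLOGGING operations, forced logging can simply be switched on at either scope:

-- Enforce logging for the whole database, or for an individual tablespace:
alter database force logging;
alter tablespace USERS force logging;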

A good first line of defense is to include REPORT UNRECOVERABLE in your RMAN backup scripts and stay on top of the logs - or test for the expected return and pipe the results to your dashboard or monitoring software, like Big Brother by Quest. This will catch all manner of problems before they become critical:

RMAN> report unrecoverable;
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
4    full or incremental     X:\ORADATA\DATA01\TESTDB\TEST01.DBF

Here’s a quick script I wrote to find when the last NOLOGGING operation occurred (Note: Output has been edited for page fit):

set LINESIZE 120
set PAGESIZE 40
DEFINE LINE1= 'LAST NON-LOGGED OPERATIONS'
DEFINE LINE2= 'Check the Change Numbers and times against your backups to determine'
DEFINE LINE3= 'if non-logged operations have occurred'
TTITLE Skip 3 CENTER LINE1 SKIP 2 LINE2 SKIP 1 LINE3 SKIP 2
BTITLE CENTER "BFBlog.TheDatabaseShop.com"
COLUMN DBF_NAME FORMAT A40 WORD_WRAPPED
COLUMN TS_NAME FORMAT A15 WORD_WRAPPED
select  d.NAME as DBF_NAME,
t.NAME as TS_NAME,
d.UNRECOVERABLE_CHANGE# as NOLOG_CHNG#,
to_char(d.UNRECOVERABLE_TIME, 'Dy DD-Mon-YYYY HH24:MI:SS') as NOLOG_TIME
from V$DATAFILE d join V$TABLESPACE t
on d.TS# = t.TS#
order by t.NAME;

Output:

LAST NON-LOGGED OPERATIONS

Check the Change Numbers and times against your backups to determine
if non-logged operations have occurred

DBF_NAME             TS_NAME   NOLOG_CHNG# NOLOG_TIME
-------------------- --------- ----------- ------------------------
J:\...\SYSTEM01.DBF  SYSTEM    0
J:\...\UNDOTBS01.DBF UNDOTBS1  0
J:\...\SYSAUX01.DBF  SYSAUX    0
J:\...\TEST01.DBF    TEST      6271597     Tue 02-Jun-2008 18:30:46
J:\...\USERS01.DBF   USERS     0

After that, just make sure your last Level 0 backup is newer than the times listed, and be aware that Point-in-Time Recovery will be limited to points before the NOLOGGING operations occurred until a new Level 0 backup is taken.

Be sure to set up lines of communication and coordination in the future, so the risk of not being able to recover the entire database to the last transaction is reduced.

Brian Fedorko

May 27

A planned installation always requires... Plans!

Designing the data structure

If there were a more crucial time for a Database Administrator to team with and guide the application developers, I cannot think of one. Getting this first step as correct as possible will save rework ranging from an inordinate amount of time dedicated to tuning to a total application overhaul. This translates into your company/client hemorrhaging thousands of hours and hundreds of thousands of dollars of unnecessary spending… or saving that very same amount. This is what a professional DBA brings to the table. But how do you know if you are doing it well?
You design the database for the data. It is ALWAYS about the data, and how the user interacts with the data. Requirements are a great place to start if they are well-written, but mapping out use cases with the developer and the user is simply the best way to go. By exhaustively examining all the use cases, your structure will practically write itself. A solid understanding of the use cases will tell you:

  • How transactional and dynamic your database will be
  • What data will be input, when, and how
  • Where relationships and data constraints need to be implemented
  • What data will be extracted and how it will be grouped
  • Where locking issues will manifest
  • What data may need special handling (HIPAA, SOX, DoD Sensitive, Privacy Act, etc.)

With the use cases, combined with a bit of foresight and communication, you can determine whether the data will need warehousing in the future, whether the system will require inordinate scalability, and whether alternate operational sites will be necessary. Initially designing the data system for end-game use will help you evolve the system as it is developed, rather than bolting on solutions in an ad-hoc manner as the needs become critical.

Common Pitfalls to Avoid:

Over-Normalization: There is no shame in under-normalizing your database if you have a solid reason to skip some normalization opportunities. Commonly, you can improve performance and maintainability – and if your data will eventually be warehoused, it will need to be (sometimes greatly) denormalized. Being able to efficiently convert your transactional data storage structure into a warehoused structure, optimized for data mining and reporting, truly requires a planned, engineered effort.

The Developer Mindset: An excellent developer with a focus on efficiency and optimization is careful to only create and use resources as long as is absolutely necessary. However, an excellent data structure must be extremely static. Creation and destruction of tables is not only a hallmark of suspect design, but also creates a host of security and auditing challenges.

Data Generation: Any data created for storage must be carefully and thoroughly scrutinized. Fields of created data, stored to increase application performance, can reduce the performance of the entire database. If this practice is prevalent enough, storage requirements can increase dramatically! I have seen very few instances where the data manipulation is not best handled during retrieval.

Incremental Primary Keys: Iterative ID fields (‘Auto-Number’) in transactional tables must be avoided! Not only do they compromise our goal of not creating or destroying stored data, but they wreak havoc on any sort of multi-master, bi-directional replication (ex. Oracle Streams, Advanced Replication, etc.). For example, if two sites are being used to accept transactions, the chances are excellent that the sites will receive separate transactions at the same time. If both create their Primary Key from the last record, incremented by one, they will BOTH have the same ID and a collision will occur.

Sure, you could design logic to constantly monitor for this issue, and gain additional overhead. I’ve also seen the transactions staggered by ‘odds and evens’. But what happens when you add an additional site? Your scalability is inherently limited.

There are very few instances where a natural key cannot be drawn from existing data. Usually, a timestamp combined with 1 or 2 data fields (ex. PRODUCT_ID, LOCATION, SSN - if protected, etc.) will produce an excellent, unique key. In the very RARE cases where it is impossible to generate a unique natural key, the Universal/Global Unique Identifier (UUID/GUID) is a viable alternative. All major databases support the generation of this ID, based on timestamp, MAC address, MD5 hash, SHA-1 hash, and/or random numbers depending on the version used. Given that there are 3.4 × 10^38 combinations, it is unlikely that you’ll run out. Ever. Every major DBMS has a utility to generate a UUID/GUID - SYS_GUID() in Oracle, UUID() in MySQL, and NEWID() in T-SQL. There are also implementations for creating the UUID/GUID in C, Ruby, PHP, Perl, Java, etc.
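As a small illustration of both approaches in Oracle syntax (the table and column names are made up):

-- Database-generated GUID as the surrogate key - collision-safe across replicated sites:
create table PRODUCT_EVENT (
  EVENT_ID    raw(16)      default SYS_GUID() primary key,
  PRODUCT_ID  number       not null,
  LOCATION    varchar2(30) not null,
  EVENT_TIME  timestamp    default systimestamp not null
);

-- Or the natural key drawn from existing data, enforced as a unique constraint:
alter table PRODUCT_EVENT add constraint PRODUCT_EVENT_UK
  unique (PRODUCT_ID, LOCATION, EVENT_TIME);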

This is just a light touch on creating a solid, production-grade data structure, but it is a good start. We’ll have plenty of room to explore some additional facets and expand on some of the items mentioned in further articles. Always remember, a good DBA must synergize with the development team, bringing different mindsets with distinct goals together to provide a robust, efficient solution.

Brian Fedorko