Database
Feb 26

Here’s a fun bit of Oracle database nuance that you may not see unless you are doing a lot of work with LOBs (Large Object datatypes – BLOB, CLOB, NCLOB, and BFILE). Indexes on LOB columns show up in the DBA_INDEXES and USER_INDEXES views – however, if you check the ALL_INDEXES view, they are simply not listed. This can become an issue if you create all of your objects under one user, which is then locked, while another user is granted privileges to select from those objects (a good, secure design practice that prevents tables, views, and other objects from being modified).
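
You can see the discrepancy for yourself with a quick sketch (the table and column names are hypothetical):

-- Creating a table with a CLOB column makes Oracle build a LOB index behind the scenes
CREATE TABLE doc_store (id NUMBER, body CLOB);

-- The system-generated index (INDEX_TYPE = 'LOB') shows up here...
SELECT index_name, index_type FROM user_indexes WHERE table_name = 'DOC_STORE';

-- ...but, per the behavior described above, not in ALL_INDEXES:
SELECT index_name, index_type FROM all_indexes WHERE table_name = 'DOC_STORE';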

Oracle does this because LOB indexes cannot be modified, so there is really little need to address them at all. If you try to update the LOB primary index, you’ll run into an ORA-14326; if you try to alter or drop the index, expect an ORA-22864.

Also, the LOB storage used in Oracle 10gR2 and prior is now referred to as BasicFiles. That is because in 11g, Oracle has made some stout improvements to LOB handling via the new SecureFiles. SecureFiles offer some incredible benefits over the old LOBs in terms of security and efficiency. The only thing I can’t figure out is why Oracle doesn’t seem to list this as a huge reason to upgrade!
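
For reference, opting into the new storage on 11g is just a storage clause away. A minimal sketch with hypothetical names (SecureFiles require an ASSM-managed tablespace):

-- 11g and later: store the LOB as a SecureFile instead of a BasicFile
CREATE TABLE doc_store_sf (id NUMBER, body CLOB)
  LOB (body) STORE AS SECUREFILE;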

Feb 10

Safe and Secure

Reading through the online blogs, I came across a discussion of whether Oracle’s Critical Patch Updates are worth the ‘trouble’ of applying. Of course, I’m always very interested to see what DBAs have to say about Information Assurance and database security in general - and I wasn’t disappointed. Quite a few DBAs had great, common-sense guidelines for approaching Oracle’s Critical Patch Updates: a thorough system of analyzing risk and impact, coupled with rigorous testing. It takes some discipline to keep that process going, but many of the approaches were thoughtful and solid.

Surprisingly (to me, at least), I had to take issue with Don Burleson. His advice:

“You DON’T have to apply patches, and sometimes patches can CAUSE unplanned issues. The best practices are always to conduct a full stress [test] in a TEST environment before applying the patches in production… I wait for major releases and re-install at-once, and I only look at patches to fix specific issues.” – Courtesy of TechTarget

I have personally applied dozens of CPUs on (literally) hundreds of systems of all flavors. I have yet to see a problem on our production servers caused by a CPU. I have, of course, seen a few problems weeded out beforehand through thorough and careful testing.

The problem with refusing to assess and apply these CPUs out of FUD (Fear, Uncertainty, and Doubt) comes when things go badly, or when you need to meet SOX, HIPAA, PCI DSS, FDCC, FISMA, or other compliance requirements.

Even on a closed system, if an insider is able to view, modify, or delete sensitive information by exploiting a vulnerability fixed by a CPU, your company will be in a very unenviable legal position, as ignoring security patches does not constitute adequate ‘due care’. Also, when operating under various compliance standards or on a DoD system, there is rarely an option to skip a CPU, unless you can document a specific problem the patch will induce and your plan to mitigate or eliminate the risk. This is where a strong, well-documented process would be an excellent solution.

If a DBA maintains a stringent standard of apathy towards Oracle CPUs, it may be an indicator of systemic security problems in their data stores as well, and may warrant some pointed questions:

  • Are you auditing, at the very least, privilege escalation and access to sensitive objects?
  • Are you making sure that glogin and the system executables are not being tampered with?
  • Do you have a benchmark of defined roles and privileges documented?
  • Are you logging who is connecting and accessing the data, when and from where?

If the answers to these are ‘no’, you wouldn’t be aware of a security breach even if one had happened!
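
Oracle’s standard auditing covers the connection and object-access questions above. A minimal starter sketch, assuming the AUDIT_TRAIL initialization parameter is set to DB and using a hypothetical sensitive object:

AUDIT SESSION;                  -- record who connects, when, and from where
AUDIT SELECT ON hr.salaries;    -- record access to a sensitive object (hypothetical name)

-- Review recent connection activity:
SELECT username, os_username, userhost, timestamp
FROM dba_audit_session
WHERE timestamp > CURRENT_DATE - 7;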

Turning the key on your database and walking away can be a very perilous practice, despite what the remote database administration service vendors may tell you. If data is the lifeblood of your company, it really should be maintained AND protected as such. No company wants the bad press that follows a data theft.

Brian Fedorko

Jan 17

Language

Have you ever run into this situation: You are happily scripting out or designing a new capability, performing maintenance, or providing support. Perhaps you are eating lunch, or are home in bed, soundly sleeping at 3:00 AM.

And then it happens.

Something broke somewhere, and it is database-related. No, it is not something you’ve built, maintained, or even seen - It is something from another business area, and their help is not available.

When you arrive, you are greeted by the ever-present group of concerned stakeholders, and a terminal. Will you staunch the flow of money they may be hemorrhaging? Will you bring back the data they may have lost? Will you be able to restore their system to service?

What you don’t want to do is flounder because they don’t have your favorite management software, your preferred shell, or your favorite OS.

Learn to speak the native languages!

There are three skill sets every good data storage professional should keep current at all times, outside of their core RDBMS interface languages:

  • The Bourne shell and its descendants (sh/bash)
  • vi (the ubiquitous Unix/Linux text editor)
  • The Windows CMD shell

I guarantee that any Linux system you log into will have bash and vi. I personally prefer the Korn shell for navigation and the C shell for scripting - but the Bourne shell is on every system. The same goes for vi - except that I actually prefer vi to anything else.

This means no matter what Linux or Unix server you are presented with, you can become effective immediately.

I’ve included the Microsoft Windows command shell because it fits a parallel reason for learning the native language - you can proactively increase the survivability of your data storage and management systems by using the tools and utilities you KNOW will be available - even if libraries are unavailable, even if interpreters and frameworks are lost or broken.

If the operating system can boot, you can be sure the Bourne shell or the CMD shell is available for use.

Knowing that, you should consider scripting the most vital system functions using the available shell script, and initiating them with the operating system’s integral scheduling tool (crontab/Scheduled Tasks). This way you can ensure that if the OS is up, your vital scripts will be executed!

And who doesn’t want that?

Dec 20

Bad Things CAN Happen

I was conversing with a colleague of mine who was working with some Oracle DBAs who were deciding to abandon Oracle’s Recovery Manager and replace it with a 3rd party disk-imaging ‘backup’ solution. Not augment RMAN, but replace it entirely.

I was really surprised. Really, REALLY surprised!

After mulling over all the concerns, I put together some items you may want to consider before heading down this path:

  • Are you operating in ARCHIVELOG mode? If you are not, YOU WILL LOSE DATA.
  • If you are in ARCHIVELOG mode – what happens to the old archived logs? Deleting them before the next RMAN level zero renders the backups you have useless (except for log mining).
  • If you are in NOARCHIVELOG mode, how far back can you troubleshoot unauthorized data modification or application error? How quickly do your redo logs switch? – Multiply that by the number of groups you have, and you have your answer.
  • How do you address block corruption (logical AND physical) without RMAN? With an RMAN-based DR solution, block recovery takes ONE command (see the sketch after this list). No data loss, no downtime. If you take a snapshot using third-party tools, your backups now contain that same block corruption. Where do you go from there?
  • If disk space is an issue, do you use the AS COMPRESSED BACKUPSET argument to reduce backup size? Do you pack the archived logs into daily level 1 backups? I’ve found ways to optimize our Oracle RMAN backups so we can cover two weeks with the same disk space that used to cover two days.
  • How do you monitor for block corruption? (Waiting for something to break is not valid instrumentation) I check for block corruption automatically, every day, by using RMAN and building it into my daily database backup scripts.
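
For reference, a minimal RMAN sketch of the pieces mentioned in this list, as they might appear in a daily backup script (tailor the options to your environment; 10g syntax):

# Compressed level zero of the database plus the archived logs
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;

# Daily corruption sweep - flags bad blocks in V$DATABASE_BLOCK_CORRUPTION
BACKUP VALIDATE CHECK LOGICAL DATABASE;

# One-command block repair - fixes every block flagged by the sweep above
BLOCKRECOVER CORRUPTION LIST;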

NOTE: Logical corruption happens. Even on a SAN, even on a VM. VMs can crash, power can be lost. I’ve experienced two incidents of block corruption in the past quarter alone. Since I built the disaster recovery system around RMAN, we caught the corruption the next day and fixed it with ZERO downtime and ZERO data loss.

RMAN also enables Point-in-Time Recovery (PITR) - a capability ALL disk-imaging backup solutions lack. If you are relying solely on a snapshot backup, you will lose all the data changed since the last snapshot.

Without tablespace PITR, you have to roll ALL the data in the database back. If you have multiple instances and are using a server snapshot with no RMAN, ALL the databases on that server will lose data! This is usually not acceptable.

Lastly, how much testing have you done with the snapshot solution? REAL testing. Have you taken a snapshot during continuous data change? We tried snapshotting the database server using three different pieces of software. NONE of them took a consistent, usable snapshot of the database every time. Sometimes one did - if we were lucky, and the DB was quiet. Is it acceptable to only sometimes get your client’s/company’s data restored?

Remember, the key is a multi-layered DR strategy (where disk imaging and snapshotting IN CONJUNCTION with RMAN is incredibly effective!) and continuous REAL-WORLD testing.

As a parting shot, in case you were wondering: the ‘DBAs’ had decided to rely solely on a disk-imaging backup solution not because they felt it had more to offer, or because it tested as more effective, but because they felt RMAN was difficult to use…

Brian Fedorko

Nov 15

“GUIs are for look’n, the Command Line is for Doin’” – that is some of the best mentoring advice I have ever received or could give as a data storage professional, and it holds true to this day!

GUIs (Graphical User Interfaces) have really made enterprise-class databases much more accessible, and have made viewing data and corralling vital stats wonderfully pleasant and simple. MySQL Enterprise Monitor and Oracle Enterprise Manager include some excellent, time-saving ‘advisors’ that simplify tuning tasks as well. They have come a long way, and their utility is undeniable.

But as data storage professionals, we are expected to be able to restore the system and return it to operational capacity when things go badly. Usually, this is where we need the skills to ‘pop open the hood’.

Just as a good drummer behind their kit should be able to do with their feet anything they can do with their hands, a good DBA should be able to perform any GUI action at the command line as well. This is critically important because:

  • The GUI exposes only a subset of the CLI capabilities, utilities, and tools
  • The GUI is a separate piece of software, often with additional dependencies, that can break while the database remains up and available

Remember, of all the duties a DBA is asked to perform, there is one that we must do correctly and effectively EVERY time - data recovery. Data loss is absolutely unacceptable. So you must honestly ask yourself: if the database goes down, the GUI is unusable, and the data must be recovered, can I do it at the command line? If not, developing that skill set should be your immediate focus. If you cannot recover your company’s or client’s data because you couldn’t ‘point n’ click‘ your way through the process, your company can lose a fortune – and it will, most likely, cost you your job!
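
As a gut check, the core of a command-line restore is short. A minimal sketch of a complete restore and recovery, assuming a usable RMAN backup and an intact control file:

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

If you cannot walk through those steps (and their many variations) without the GUI, that is the skill gap to close first.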

Oracle Enterprise Manager is a great example. It is extremely useful but, in my experience, extremely delicate. It cannot withstand being cloned or moved to a different server, and it can break with any ungraceful handling of its repository inside the database. Chances are, if the database is in dire straits, EM will not be there.

Will you be ready?

Brian Fedorko

Jul 15

Safe and Secure

It is time once again to eliminate bugs and increase the security posture of our Oracle databases. The Advisories and Risk Matrices can be found on the Oracle Technology Network; the full availability information is on Oracle Metalink under DocID# 579278.1.

Points of Interest:

  • This CPU contains 11 security fixes for the Oracle Enterprise Database Server
  • None of the security holes for the Enterprise DBMS are remotely exploitable without authentication
  • Oracle Application Express requires no security fixes (This product continues to impress me)
  • ALL Windows platforms running Oracle Enterprise DB Server v10.2.0.3 will have to wait until 22-July-2008 for their CPU
  • Support for Solaris 32-bit with Oracle Enterprise DB Server v10.2.0.2 seems to have been pulled! There’s no CPU for this combination, and none planned for the October 2008 Critical Patch Update, as per Oracle Metalink DocID# 579278.1.

Don’t forget to read the patch notes, test thoroughly, and check to make sure you’re using the proper version of OPatch!

Next CPU: 14-October-2008

Brian Fedorko

Jun 16

Paper Cash Money!

Oracle’s latest price list was published today!

Oracle Technology Global Price List

There are increases scattered throughout the various licensing options, most notably:

Oracle Enterprise Edition

  • $7500 increase in the base per-processor licensing
  • $150 increase in per-user licensing

Oracle Standard Edition

  • $2500 increase in the base per-processor licensing
  • $50 increase in per-user licensing

Oracle Standard Edition One

  • $805 increase in the base per-processor licensing
  • $41 increase in per-user licensing

RAC

  • $300 increase in the base per-processor licensing
  • $60 increase in per-user licensing

Active Data Guard

  • $800 increase in the base per-processor licensing
  • $20 increase in per-user licensing

Advanced Security, Partitioning, Advanced Compression, Real Application Testing, Label Security

  • $1500 increase in the base per-processor licensing
  • $30 increase in per-user licensing

Diagnostics Pack, Tuning Pack, Change Management Pack, Configuration Management Pack, Provisioning Pack for Database

  • $500 increase in the base per-processor licensing
  • $10 increase in per-user licensing

Internet Application Server Enterprise Edition

  • $5000 increase in the base per-processor licensing
  • $100 increase in per-user licensing

Enterprise Single Sign-On Suite

  • $10 increase in per-user licensing

This is certainly not an exhaustive list and I’m sure that there are many, many other changes. Rounding up your Enterprise’s licensing and product use information for acquisition planning purposes may be a prudent and proactive task for this month!

Brian Fedorko

Jun 15

I have always enjoyed the teaching and wisdom of Dr. Stephen Covey (especially if he does not litigate over derivative works!). He has a real knack for capturing introspective how-to lessons detailing the simplicity of living a good and productive life.

In homage to Dr. Covey’s amazing work, I’d like to narrow the scope and offer lessons with a similar impact for database administrators – expanding on the not-so-obviously obvious to illuminate the good path to success.

Habit One - Multiplex and Mirror Everything!

Mirror, Mirror...

Multiplex and mirror all of your critical files – Is there a reason not to? Today’s SANs have gone a long way to provide for redundancy and reduce I/O contention, but they are definitely not an excuse to abandon this basic key to database survivability!

The SAN Trap: SANs are often used as a panacea for data availability. However, have you taken a close look at your SAN to determine how robust and survivable it really is?

  • How many LUNS are your files spread across?
  • What RAID level are you using and how many simultaneous disk failures will it take to make your files irretrievable? (Anything under 20% incurs quite a bit of risk).
  • Do you have redundant controllers?
  • Redundant switches?

Even the most survivable storage setup is still vulnerable to logical corruption and to the vastly more common human error (“I just deleted all the .LOG files to save some space!”).

Conversely, on very slim installs you may have only a single disk or LUN. While such a situation carries greatly increased risk, reality dictates that sometimes the circumstances are unavoidable. Until you can grow your storage footprint, multiplexing and mirroring (across directories) become even more critical.

Mirroring and multiplexing your control files, redo logs, archived redo logs, and RMAN backups will significantly increase the likelihood of a successful recovery, should the need arise (see Habit 5 – Preparation). The procedure is extremely easy, and the files generally take up very little space if properly scaled and tuned to your needs.

Here are some best practices to tailor to your needs, with a sketch of the corresponding commands after the list:

  • Control Files: Multiplex two to three times and mirror over two to three disks/LUNs/directories
  • Redo Logs: Three to four members per group with two to three groups spread across disks/LUNs/directories
  • Archived Redo Logs: Mandatory mirroring between at least 2 disks/LUNs/directories
  • RMAN Backup Files: Mirror between at least two disks/LUNs/directories
  • SPFILE: Periodically create a PFILE from the SPFILE and archive it, along with your backups and control file snapshots
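
A minimal sketch of the corresponding commands (all paths and names are hypothetical, and an SPFILE is assumed):

-- Add a second member to an existing redo log group:
ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/orcl/redo01b.log' TO GROUP 1;

-- Mirror the archived redo logs across two mandatory destinations:
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u02/arch MANDATORY' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u03/arch MANDATORY' SCOPE=BOTH;

-- Multiplex the control file (takes effect at the next instance restart):
ALTER SYSTEM SET control_files = '/u02/oradata/orcl/control01.ctl',
                                 '/u03/oradata/orcl/control02.ctl' SCOPE=SPFILE;

-- Keep a readable copy of the SPFILE with your backups:
CREATE PFILE = '/u02/backup/init_orcl.ora' FROM SPFILE;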

A database administrator worth their salt NEVER loses data, and the best way to keep that record is to avoid any position where data loss is likely. Mirroring and multiplexing are among our most effective tools for reducing that risk.

Brian Fedorko

Jun 08

Lock

Various organizations provide security guidelines to aid us in hardening our databases. They are an EXCELLENT tool to this end, and I cannot recommend enough reading and research in this regard. However, blindly implementing the guidelines is not a security panacea! It takes a knowledgeable DBA teaming with insightful IA personnel to determine whether the guidelines make sense in your situation. I’ll illustrate with an example:

How to Follow DoD/DISA Database Security Guidelines to Make Your Oracle Database Vulnerable to a Denial of Service (DoS) Attack

Step 1. Apply the latest STIG guidance to your database – especially item DG0073 in Section 3.3.10: “The DBA will configure the DBMS to lock database accounts after 3 consecutive unsuccessful connection attempts within a specified period of time.”

Step 2. Mine Pete Finnigan’s list of common and default Oracle userids, and put them in a text file. Feel free to add any common database connection userids for popular applications.

Step 3. Use a command to iteratively feed the userids from your file to sqlplus with a bogus password (MS Windows):

C:\>for /f "tokens=*" %I in (test.txt) do @sqlplus -L %I/NotThePassword@SID

Step 4. Repeat. After the 3rd incorrect password, the database account will be locked, and the application cannot connect until the account is unlocked by a privileged user.

Granted, if all the other items listed in the STIG are implemented, this will be extremely difficult (if not impossible) to accomplish from the outside, but it is easily accomplished by anyone who has access to the Oracle client (or JDBC, ODBC, etc.) on any of the application servers – providing opportunity to an insider who doesn’t necessarily have database access.

This isn’t a specific Oracle issue, or an OS issue - the guidance is general enough to cover any DB/OS combination. Nor is the DISA/DoD STIG solely to blame; the same guidance appears in other published hardening guidelines as well.

The larger issue - effectively securing your database - requires a bit of a paradigm shift, a willingness to focus on the goal (rather than the method), and a lot of teamwork and trust between DBAs and IA professionals.

The 3rd Alternative

When creating your roles, consider the automated application users in your database and do not set a limit on unsuccessful login attempts for those accounts. To keep brute-force and dictionary attacks at bay, you’ll need to ensure the applications’ database account passwords are long and strong. Putting your database behind a stout firewall is also key - isolating your database server from the Internet altogether is really the best idea. Applying the guidelines appropriate for your environment in Section 3.1.4.1 of the Database STIG will further harden your installation.
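
A minimal sketch of that approach using a profile (the profile and account names are hypothetical):

-- No lockout for the automated application account; rely on long, strong passwords instead:
CREATE PROFILE app_svc_profile LIMIT FAILED_LOGIN_ATTEMPTS UNLIMITED;

-- Assign the profile to the application's connection account:
ALTER USER app_svc PROFILE app_svc_profile;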

After that, your best defense is malicious-activity detection via auditing. The query below counts failed logon attempts (return code 1017 corresponds to ORA-01017: invalid username/password) per user over the last three days:

select USERNAME, count(USERNAME)
from DBA_AUDIT_TRAIL
where RETURNCODE=1017
and TIMESTAMP > CURRENT_DATE - interval '3' day
group by USERNAME;

If you set up auditing and use something like the SQL above to provide raw instrumentation data for your database, you’ll be able to trend and perform velocity checks that sound the alarm when trouble may be in progress.

And that strengthens our security posture.

Brian Fedorko