The Professional DBA
Apr 15

The quarterly Oracle CPU hit the streets on Tuesday, 14 April, patching 16 vulnerabilities in the Oracle RDBMS, including a remotely exploitable vulnerability in the listener that requires no authentication. Oddly, that one scored only a 5.0 on the CVSS v2.0 scale. An 8.5 CVSS-scored vulnerability in Resource Manager was also patched. It has been speculated that this vulnerability could be exploited via SQL injection, but the high score seems odd. I’ll keep looking for details on this item.

Feb 10

Safe and Secure

Reading through the online blogs, I came across a discussion of whether Oracle’s Critical Product Updates are worth the ‘trouble’ of applying. Of course, I’m always very interested to see what DBAs have to say in regards to Information Assurance and database security in general - and I wasn’t disappointed. Quite a few DBAs had great, common-sense guidelines for approaching Oracle’s Critical Product Updates: a thorough system for analyzing risk and impact, coupled with thorough testing. It may take a little discipline to keep that process going, but many of the approaches were thoughtful and solid.

Surprisingly (to me, at least), I had to take umbrage with Don Burleson. His advice:

“You DON’T have to apply patches, and sometimes patches can CAUSE unplanned issues.  The best practices are always to conduct a full stress in a TEST environment before applying the patches in production… I wait for major releases and re-install at-once, and I only look at patches to fix specific issues.” - Courtesy of TechTarget

I have personally applied dozens of CPUs on (literally) hundreds of systems of all flavors. I have yet to see a problem on our production servers that was caused by a CPU. Of course, I have seen a few problems weeded out through thorough and careful testing beforehand.

The problem with simply not assessing and applying these CPUs due to FUD (Fear, Uncertainty, and Doubt) comes when things go badly, or when you need to meet SOX, HIPAA, PCI DSS, FDCC, FISMA, etc. compliance.

Even on a closed system, if an insider is able to view/modify/delete sensitive information by exploiting a vulnerability fixed by a CPU, your company will be in a very unenviable legal position, as ignoring security patches does not constitute adequate ‘due care’. Also, when operating under various compliance standards or on a DoD system, there is rarely an option to avoid applying a CPU, unless you can document a specific problem the application will induce and your plan to mitigate or eliminate the risk. This is where a strong, well-documented process is an excellent solution.

If a DBA maintains a stringent standard of apathy towards Oracle CPUs, it may be an indicator of systemic security problems in their data stores as well, and may warrant some pointed questions:

  • Are you auditing, at the very least, privilege escalation and access to sensitive objects?
  • Are you making sure that glogin and the system executables are not being tampered with?
  • Do you have a benchmark of defined roles and privileges documented?
  • Are you logging who is connecting and accessing the data, when and from where?

If the answers to these are ‘no’, you wouldn’t be aware of a security breach even if one had happened!
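
For a concrete starting point, here is a minimal sketch of what answering ‘yes’ to the first and last questions could look like on an Oracle database, using traditional statement auditing driven from a Bourne-shell wrapper. The table name is a hypothetical example, not a prescription:

#!/bin/sh
# Minimal sketch: turn on traditional Oracle auditing for connections,
# privilege escalation, and one (hypothetical) sensitive table.
# Assumes AUDIT_TRAIL is already set (e.g. audit_trail=DB) in the SPFILE.
sqlplus -s / as sysdba <<'EOF'
-- Who is connecting, when, and from where (visible in DBA_AUDIT_SESSION)
AUDIT SESSION;
-- Privilege escalation
AUDIT GRANT ANY PRIVILEGE;
AUDIT GRANT ANY ROLE;
AUDIT ALTER USER;
-- Access to a sensitive object (hr.employees is only an example)
AUDIT SELECT, INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;
EXIT
EOF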

Turning the key of your database and letting it go can be a very perilous practice despite what the remote database administration service vendors may tell you.  If data is the lifeblood of your company, it really should be maintained AND protected as such.  No company wants the bad press that follows a data theft.

Brian Fedorko

Jan 17

Language

Have you ever run into this situation: You are happily scripting out or designing a new capability, performing maintenance, or providing support. Perhaps you are eating lunch, or are home in bed, soundly sleeping at 3:00 AM.

And then it happens.

Something broke somewhere, and it is database-related. No, it is not something you’ve built, maintained, or even seen - It is something from another business area, and their help is not available.

When you arrive, you are greeted by the ever-present group of concerned stake-holders, and a terminal. Will you staunch the flow of money they may be hemorrhaging? Will you bring back the data they may have lost? Will you be able to restore their system to service?

What you don’t want to do is flounder because they don’t have your favorite management software, your preferred shell, or your favorite OS.

Learn to speak the native languages!

There are 3 skill sets every good data storage professional should keep current at all times, outside of their core RDBMS interface languages:

  • Bourne shell (sh) / bash
  • vi (the Unix/Linux text editor)
  • CMD Shell

I guarantee that any Linux system you log into will have bash and vi. I personally prefer the Korn shell for navigation and the C shell for scripting - but the Bourne shell is on every system. The same goes for vi - except that I actually prefer vi to anything else.

This means no matter what Linux or Unix server you are presented with, you can become effective immediately.

I’ve included the Microsoft Windows command shell because it fits in with a parallel reason for learning the native language: you can proactively increase survivability in your data storage and management systems by using the tools and utilities you KNOW will be available - even if libraries are unavailable, even if interpreters and frameworks are lost or broken.

If the operating system can boot, you can be sure the Bourne shell or CMD shell is available for use.

Knowing that, you should consider scripting the most vital system functions using the available shell script, and initiating them with the operating system’s integral scheduling tool (crontab/Scheduled Tasks). This way you can ensure that if the OS is up, your vital scripts will be executed!
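
As a minimal sketch of this idea (all paths, SIDs, and schedules here are hypothetical), even the nightly backup can live in a plain Bourne-shell script and run from cron, so it depends on nothing but the OS and the Oracle binaries:

#!/bin/sh
# /home/oracle/scripts/nightly_backup.sh (hypothetical path)
# A vital function written in plain Bourne shell: no frameworks,
# no interpreters, nothing that isn't already on the box.
ORACLE_SID=PROD;  export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1;  export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH;  export PATH
LOG=/home/oracle/logs/backup_`date +%Y%m%d`.log

rman target / log=$LOG <<'EOF'
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
EXIT
EOF

# Scheduled with the OS's integral scheduler, e.g. a crontab entry:
#   0 2 * * * /home/oracle/scripts/nightly_backup.sh
# (On Windows, the same idea applies with a .cmd script and Scheduled Tasks.)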

And who doesn’t want that?

Dec 20

Bad Things CAN Happen

I was conversing with a colleague of mine who was working with some Oracle DBAs who were deciding to abandon Oracle’s Recovery Manager and replace it with a 3rd party disk-imaging ‘backup’ solution. Not augment RMAN, but replace it entirely.

I was really surprised. Really, REALLY surprised!

After mulling over all the concerns, I put together some items you may want to consider before heading down this path:

  • Are you operating in ARCHIVELOG mode? If you are not, YOU WILL LOSE DATA.
  • If you are in ARCHIVELOG mode – What happens to the old archivelogs? Deleting the old ones before the next RMAN level 0 backup renders the ones you have useless (except for LogMiner analysis).
  • If you are in NOARCHIVELOG mode, how far back can you troubleshoot unauthorized data modification or application error? How quickly do your redo logs switch? – Multiply that by the number of groups you have, and you have your answer.
  • How do you address block corruption (logical AND physical) without RMAN? With a RMAN-based DR solution, block recovery takes ONE command. No data loss, no downtime. If you take a snapshot using 3rd party tools – Your backups now have that same block corruption. Where do you go from there?
  • If disk space is an issue, do you use the AS COMPRESSED BACKUPSET argument to reduce backup size? Do you pack the archivelogs into daily level 1 backups? I’ve found ways to optimize our Oracle RMAN backups so we can cover 2 weeks with the same disk space that used to cover 2 days.
  • How do you monitor for block corruption? (Waiting for something to break is not valid instrumentation) I check for block corruption automatically, every day, by using RMAN and building it into my daily database backup scripts.

NOTE: Logical corruption happens. Even on a SAN, even on a VM. VMs can crash, power can be lost. I’ve experienced two incidents of block corruption in the past quarter. Of course, since I built the Disaster Recovery system around RMAN, we caught the corruption the next day and fixed it with ZERO downtime and ZERO data loss.
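
For reference, here is a rough sketch (not my exact scripts, and with paths omitted) of how a daily corruption check and the one-command block repair can be wired into an RMAN-based backup from the shell:

#!/bin/sh
# Sketch: daily backup with a built-in corruption sweep, plus the
# single-command repair mentioned above (10g-era RMAN syntax).
rman target / <<'EOF'
# CHECK LOGICAL flags logical as well as physical block corruption while
# the backup runs; anything found lands in V$DATABASE_BLOCK_CORRUPTION.
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 CHECK LOGICAL
  DATABASE PLUS ARCHIVELOG;
# This one command repairs every block currently on that corruption list
# (a no-op when the list is empty), with the database open:
BLOCKRECOVER CORRUPTION LIST;
EXIT
EOF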

Point-in-Time-Recovery (PITR) is enabled by RMAN - ALL disk imaging backup solutions lack this capability. If you are relying solely on a snapshot backup, you will lose all the data since the last snapshot.

Without tablespace PITR, you have to roll ALL the data in the database back. If you have multiple instances and are using a server snapshot with no RMAN, ALL the databases on that server will lose data! This is usually not acceptable.

Lastly, how much testing have you done with the snapshot solution? REAL TESTING. Have you taken a snapshot during continuous data change? We tried snapshotting the database server using three different pieces of software. NONE of them took a consistent, usable snapshot of the database every time. Sometimes one did - if we were lucky and the DB was quiet. Is it acceptable to only sometimes get your client’s/company’s data restored?

Remember, the key is a multi-layered DR strategy (where disk imaging and snap-shotting IN CONJUNCTION with RMAN is incredibly effective!) and continuous REAL WORLD testing.

As a parting shot, in case you were wondering: the ‘DBAs’ had decided to rely solely on a disk imaging backup solution not because they felt it had more to offer, or because it was tested to be more effective, but because they felt RMAN was difficult to use…

Brian Fedorko

Nov 15

“GUIs are for look’n, the Command Line is for Doin’” – That is some of the best mentoring advice I have received or could give as a data storage professional, and it is true to this day!

GUIs (Graphical User Interfaces) have really made enterprise-class databases much more accessible, and have made viewing data and corralling vital stats wonderfully pleasant and simple. MySQL Enterprise Monitor and Oracle Enterprise Manager include some excellent, time-saving ‘advisors’ that simplify tuning tasks as well. They have come a long way, and their utility is undeniable.

But, as data storage professionals, we are expected to be able to restore the system and return it to operational capacity when things go badly. Usually, this is where we need the skills to ‘pop open the hood’.

Just as a good drummer behind the kit should be able to do with their feet whatever they can do with their hands, a good DBA should be able to perform any action available in the GUI at the command line as well. This is critically important because:

  • The GUI exposes only a subset of the capabilities, utilities, and tools available at the command line
  • The GUI is a separate piece of software, often with additional dependencies, that can break while the database itself remains up and available

Remember, of all the duties a DBA is asked to perform, there is one that we must do correctly and effectively EVERY time - data recovery. Data loss is absolutely unacceptable. So, you must honestly ask yourself: if the database goes down, the GUI is unusable, and the data must be recovered, can I do it at the command line? If not, it should be your focus to develop that skill set immediately. If you cannot recover your company’s or client’s data because you couldn’t ‘point n’ click‘ your way through the process, your company can lose a fortune – and it will most likely cost you your job!
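
To make that concrete, the pure command-line recovery path is short. A bare-bones sketch, assuming RMAN backups exist and are reachable (your situation will dictate the exact commands):

#!/bin/sh
# Bare-bones full restore and recovery with no GUI anywhere in the loop.
rman target / <<'EOF'
STARTUP FORCE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN;
EXIT
EOF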

Oracle Enterprise Manager is a great example. It is extremely useful but, in my experience, extremely delicate. It cannot withstand being cloned or moved to a different server, and it can break with any ungraceful handling of its repository inside the database. Chances are, if the database is in dire straits, EM will not be there.

Will you be ready?

Brian Fedorko

Oct 22

Finally!!!  Oracle has published an Early Adopter Release of the Oracle SQL Developer Data Modeling package!

Right now it is a standalone product, but they are planning to integrate this into their excellent, platform independent, and affordable (read as: FREE!) SQL Developer tool.

I’m a big fan of SQL Developer, and it is readily adopted by clients due to its price and functionality. With no cost associated, I’ve seen everyone from developers to testers to integration groups use this tool to great effect. But for the longest time, designers and architects were left with mostly 3rd party choices for creating data model design and structure.

I’m currently installing and testing this product, and will publish results – Good or Bad.

More to come!

Jul 15

Safe and Secure

It is time once again to eliminate bugs and increase the security posture of our Oracle databases. The Advisories and Risk Matrices can be found on the Oracle Technology Network. The full availability information is found at Oracle Metalink under DocID# 579278.1.

Points of Interest:

  • This CPU contains 11 security fixes for the Oracle Enterprise Database Server
  • None of the security holes for the Enterprise DBMS are remotely exploitable without authentication
  • Oracle Application Express requires no security fixes (This product continues to impress me)
  • ALL Windows platforms running Oracle Enterprise DB Server v10.2.0.3 will have to wait until 22-July-2008 for their CPU
  • Support for Solaris 32-bit and Oracle Enterprise DB Server v10.2.0.2 seems to have been pulled! There’s no CPU for these, and none planned for the October 2008 Critical Product Update as per Oracle Metalink DocID# 579278.1.

Don’t forget to read the patch notes, test thoroughly, and check to make sure you’re using the proper version of OPatch!
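
For example, a quick pre-flight check from the shell might look like this (the ORACLE_HOME and staging directory are hypothetical; the patch README is always the authority):

#!/bin/sh
# Pre-patch sanity checks before applying a CPU.
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1;  export ORACLE_HOME
cd /stage/cpu_jul2008            # unzipped patch directory (hypothetical)

$ORACLE_HOME/OPatch/opatch version       # meets the minimum version in the README?
$ORACLE_HOME/OPatch/opatch lsinventory   # inventory readable and current?
# Then, with the instance and listener down, apply per the README, e.g.:
#   $ORACLE_HOME/OPatch/opatch apply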

Next CPU: 14-October-2008

Brian Fedorko

Jun 27

Today we’re grieving over the loss of a dedicated professional, true friend, and incredible human being. Dave Wong passed on last night, after his courageous struggle with stomach cancer.

Dave was the kind of person you wanted to be - He was generous in both time and spirit. He was passionate about his work as a DBA, making the seemingly impossible possible for our partners and teammates. His name was golden in a place where credibility is the coin of the realm. He was greatly respected by everyone who knew him, or had the pleasure of working with him. He never, ever demanded respect; he earned it every day. He earned it through his honesty, his patient, unwavering guidance, and his desire to see the team succeed. Dave inspired greatness in us all, by his example.

Dave served as President of our Central Florida Oracle User’s Group - He brought our community an abundance of knowledge and networking. He was a dynamic speaker, quick of wit and smile. He touched the lives of so many people, always making us all a bit better. A bit stronger and more wise.

I’ve never met someone as tough as Dave, or a cooler kat. His dedication, his fight, his determination, his kindness, his camaraderie, his humor, his friendship - I’ll never forget any of this.

I miss him terribly.

We all do.

Jun 16

Paper Cash Money!

Oracle’s latest price list was published today!

Oracle Technology Global Price List

There are increases scattered throughout the various licensing options, most notably:

Oracle Enterprise Edition

  • $7500 increase in the base per-processor licensing
  • $150 increase in per-user licensing

Oracle Standard Edition

  • $2500 increase in the base per-processor licensing
  • $50 increase in per-user licensing

Oracle Standard Edition One

  • $805 increase in the base per-processor licensing
  • $41 increase in per-user licensing

RAC

  • $300 increase in the base per-processor licensing
  • $60 increase in per-user licensing

Active Data Guard

  • $800 increase in the base per-processor licensing
  • $20 increase in per-user licensing

Advanced Security, Partitioning, Advanced Compression, Real Application Testing, Label Security

  • $1500 increase in the base per-processor licensing
  • $30 increase in per-user licensing

Diagnostics Pack, Tuning Pack, Change Management Pack, Configuration Management Pack, Provisioning Pack for Database

  • $500 increase in the base per-processor licensing
  • $10 increase in per-user licensing

Internet Application Server Enterprise Edition

  • $5000 increase in the base per-processor licensing
  • $100 increase in per-user licensing

Enterprise Single Sign-On Suite

  • $10 increase in per-user licensing

This is certainly not an exhaustive list and I’m sure that there are many, many other changes. Rounding up your Enterprise’s licensing and product use information for acquisition planning purposes may be a prudent and proactive task for this month!

Brian Fedorko

Jun 15

I have always enjoyed the teaching and wisdom of Dr. Stephen Covey (especially if he does not litigate for derivative works!). He has a real knack for capturing introspective how-to lessons detailing the simplicity of living a good and productive life.

In homage to Dr. Covey’s amazing work, I’d like to narrow the scope, but offer lessons with a similar impact for database administrators – expanding on the not-so-obvious to illuminate the good path to success.

Habit One - Multiplex and Mirror Everything!

Mirror, Mirror...

Multiplex and mirror all of your critical files – Is there a reason not to? Today’s SANs have gone a long way to provide for redundancy and reduce I/O contention, but they are definitely not an excuse to abandon this basic key to database survivability!

The SAN Trap: SANs are often used as a panacea for data availability. However, have you taken a close look at your SAN to determine how robust and survivable it really is?

  • How many LUNs are your files spread across?
  • What RAID level are you using and how many simultaneous disk failures will it take to make your files irretrievable? (Anything under 20% incurs quite a bit of risk).
  • Do you have redundant controllers?
  • Redundant switches?

Even the most survivable storage setup is still vulnerable to logical corruption and to the vastly more common human error (“I just deleted all the .LOG files to save some space!”).

Conversely, for very slim installs, you may only have a single disk or LUN – While there is greatly increased risk in such a situation, reality dictates that sometimes the circumstances are unavoidable. Until you can grow your storage footprint, multiplexing and mirroring (across directories) becomes even more critical as your only protection against accidental deletion and logical corruption.

Mirroring and multiplexing your control files, redo logs, archived redo logs, and RMAN backups will significantly increase the likelihood of a successful recovery, should the need arise (See Habit 5 – Preparation). The procedure is extremely easy, and the files generally take up very little space, if properly scaled and tuned to your needs.

Here are some best practices for you to tailor to your needs:

  • Control Files: Multiplex two to three times and mirror over two to three disks/LUNs/directories
  • Redo Logs: Three to four members per group with two to three groups spread across disks/LUNs/directories
  • Archived Redo Logs: Mandatory mirroring between at least 2 disks/LUNs/directories
  • RMAN Backup Files: Mirror between at least two disks/LUNs/directories
  • SPFILE: Periodically create a PFILE from the SPFILE and archive it, along with your backups and control file snapshots
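
A minimal sketch of what those practices look like in SQL*Plus, driven from the shell (10g-era syntax; every path here is a hypothetical placeholder for your own disks/LUNs/directories):

#!/bin/sh
sqlplus -s / as sysdba <<'EOF'
-- Control files: three copies on separate disks/LUNs (takes effect after a
-- clean shutdown, copying the files to the new locations, and startup)
ALTER SYSTEM SET control_files =
  '/u01/oradata/PROD/control01.ctl',
  '/u02/oradata/PROD/control02.ctl',
  '/u03/oradata/PROD/control03.ctl' SCOPE=SPFILE;

-- Redo logs: add a second member to each group on a different disk/LUN
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/PROD/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/PROD/redo02b.log' TO GROUP 2;

-- Archived redo logs: two mandatory local destinations
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u02/arch/PROD MANDATORY' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u03/arch/PROD MANDATORY' SCOPE=BOTH;

-- SPFILE: keep a plain-text copy alongside the backups
CREATE PFILE = '/u02/backup/PROD/init_PROD.ora' FROM SPFILE;
EXIT
EOF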

A database administrator worth their salt NEVER loses data, and the best way to ensure this is to avoid any position where data loss is likely. Mirroring and multiplexing are among our most effective tools for reducing that risk.

Brian Fedorko

May 27

A planned installation always requires... plans!

Designing the data structure

If there were a more crucial time for a Database Administrator to team with and guide the application developers, I could not think of one. Getting this first step as correct as possible will save rework ranging from an inordinate amount of time dedicated to tuning to a total application overhaul. This translates into your company/client hemorrhaging thousands of hours and hundreds of thousands of dollars of unnecessary spending… or saving that very same amount. This is what a Professional DBA brings to the table. But how do you know if you are doing it well?
You design the database for the data. It is ALWAYS about the data, and how the user interacts with the data. Requirements are a great place to start if they are well-written, but mapping out use cases with the developer and the user is simply the best way to go. By exhaustively examining all the use cases, your structure will practically write itself. A solid understanding of the use cases will tell you:

  • How transactional and dynamic your database will be
  • What data will be input, when, and how
  • Where relationships and data constraints need to be implemented
  • What data will be extracted and how it will be grouped
  • Where locking issues will manifest
  • What data may need special handling (HIPAA, SOX, DoD Sensitive, Privacy Act, etc.)

With the use cases, combined with a bit of foresight and communication, you can determine whether the data will need warehousing in the future, whether the system will require inordinate scalability, and/or whether alternate operational sites will be necessary. Initially designing the data system for end-game use will help you evolve the system as it is developed, rather than bolting on solutions in an ad-hoc manner as the needs become critical.

Common Pitfalls to Avoid:

Over-Normalization: There is no shame in under-normalizing your database if you have a solid reason to skip some normalization opportunities. Commonly, you can improve performance and maintainability by doing so – and if your data will eventually be warehoused, it will need to be (sometimes greatly) denormalized. Being able to efficiently convert your transactional data storage structure into a warehoused structure optimized for data mining and reporting truly requires a planned, engineered effort.

The Developer Mindset: An excellent developer with a focus on efficiency and optimization is careful to create and use resources only as long as is absolutely necessary. However, an excellent data structure must be extremely static. Creation and destruction of tables is not only a hallmark of suspect design, but also creates a host of security and auditing challenges.

Data Generation: Any data created for storage must be carefully and thoroughly scrutinized. Fields of created data, stored to increase application performance, can reduce the performance of the entire database. If this practice is prevalent enough, storage requirements can increase dramatically! I have seen very few instances where the data manipulation is not best handled during retrieval.

Incremental Primary Keys: Iterative ID fields (‘Auto-Number’) in transactional tables must be avoided! Not only does it compromise our goal of not creating or destroying stored data, but it wreaks havoc on any sort of multi-master, bi-directional replication (ex. Oracle Streams, Advanced Replication, etc.). For example, if two sites are being used to accept transactions, the chances are excellent that the sites will receive separate transactions at the same time. If both create their Primary Key from the last record, incremented by one, they will BOTH have the same ID and a collision will occur.

Sure, you could design logic to constantly monitor for this issue, and incur additional overhead. I’ve also seen the transactions staggered by ‘odds and evens’. But what happens when you add an additional site? Your scalability is inherently limited.

There are very few instances where a natural key cannot be drawn from existing data. Usually, a timestamp combined with one or two data fields (e.g. PRODUCT_ID, LOCATION, SSN - if protected, etc.) will produce an excellent, unique key. In the very RARE cases where it is impossible to generate a unique natural key, the Universal/Global Unique Identifier (UUID/GUID) is a viable alternative. All major databases support generating this ID based on a timestamp, MAC address, MD5 hash, SHA-1 hash, and/or random numbers, depending on the version used. Given that there are 3.4 × 10^38 combinations, it is unlikely that you’ll run out. Ever. Every major DBMS has a utility to generate a UUID/GUID - SYS_GUID() in Oracle, UUID() in MySQL, and NEWID() in T-SQL. There are also implementations for creating the UUID/GUID in C, Ruby, PHP, Perl, Java, etc.
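
As a small illustration (the table and connect string are hypothetical), an Oracle table keyed on SYS_GUID() looks like this; MySQL’s UUID() and SQL Server’s NEWID() fill the same role:

#!/bin/sh
# Sketch: a GUID surrogate key that cannot collide across replicated sites.
sqlplus -s app_owner/secret@PROD <<'EOF'
CREATE TABLE order_event (
  event_id    RAW(16)      DEFAULT SYS_GUID() PRIMARY KEY,
  product_id  NUMBER       NOT NULL,
  location    VARCHAR2(30) NOT NULL,
  event_time  TIMESTAMP    DEFAULT SYSTIMESTAMP NOT NULL
);
EXIT
EOF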

This is just a light touch on creating a solid, production-grade data structure, but it is a good start. We’ll have plenty of room to explore additional facets and expand on some of the items mentioned in further articles. Always remember: a good DBA must synergize with the development team, bringing different mindsets with distinct goals together to provide a robust, efficient solution.

Brian Fedorko