Apr 22

Once upon a year ago, when Sun Microsystems acquired MySQL, many bloggers theorized that Big Red, which had a long-running, close partnership with Sun, was pulling some strings in this deal.  The people who suggested that Oracle couldn’t put the kibosh on MySQL without a PR headache (but Sun could) were dismissed as crazy conspiracy theorists.  My only surprise so far is the courteous lack of ‘I told you so’ popping up in the expert blogs.

So where do we go from here?  Oracle doesn’t have any experience in hardware.  If they keep most of Sun’s staffing, and continue to fund their innovation efforts, we may continue to see excellent products from them.  But will they retain their stellar brand identity?  Will they abandon the Sparc chip architecture and adopt x86?  It seems the best solution here looks more like a tightly coupled partnership rather than a merging of the two companies.

From Oracle’s whitepaper on the decision, MySQL’s fate seems a little less promising:

MySQL will be an addition to Oracle’s existing suite of database products, which already includes Oracle Database 11g, TimesTen, Berkeley DB open source database, and the open source transactional storage engine, InnoDB.

This doesn’t sound like Oracle is poised to grow MySQL and allow it to flourish.  At this time MySQL 6.0 is in public alpha and has added the Falcon transactional engine as an advanced alternative to InnoDB and SolidDB.  Looking at the architecture, this engine brings industrial-grade caching, recovery, and long-transaction support to MySQL.  Couple this with the real-deal disaster recovery 6.0 is bringing to the table, and you have a free, multi-platform database that rivals everything an Oracle database can offer outside of Enterprise Edition, and soundly trounces the latest Microsoft SQL Server offering.

But will Oracle put the resources toward MySQL, to allow it to be all it can be?  Personally, I don’t see it happening, but I hope I am very, very wrong.


Apr 15

The quarterly Oracle CPU hit the streets on Tuesday, 14 April, and patches 16 vulnerabilities in the Oracle RDBMS, including a remotely accessible exploit of the listener that requires no authentication.  Oddly, this only scored a 5.0 on the CVSS v2.0.  An 8.5 CVSS-scored vulnerability in Resource Manager was also patched.  It has been speculated that this vulnerability could be exploited via SQL injection, but the high score seems odd.  I’ll keep looking for details on this item.

Feb 26

Here’s a fun bit of Oracle database nuance that you may not see unless you are doing a lot of work with LOBs (Large Object datatypes – BLOB, CLOB, NCLOB, and BFILE).  Indexes of LOB datatype columns will show up in the DBA_INDEXES and USER_INDEXES views – however, if you check the ALL_INDEXES view, they are simply not listed.  This may become an issue if you create all of your objects under one user, which is then locked, while another user is granted privileges to select from those objects (a good, secure design practice to prevent tables, views, and other objects from being modified).

Oracle does this because LOB indexes cannot be modified, so there is really little need to address them at all.  If you try to update the LOB primary index, you’ll run into an ORA-14326; if you try to alter or drop the index, expect an ORA-22864.
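A quick way to see the discrepancy for yourself – the owner and table names below are hypothetical, so substitute your own:

```sql
-- Hypothetical owner/table; the LOB index is created implicitly with the LOB column
CREATE TABLE app_owner.docs (id NUMBER, body CLOB);

-- The implicit LOB index appears here...
SELECT index_name, index_type FROM dba_indexes
WHERE  owner = 'APP_OWNER' AND index_type = 'LOB';

-- ...but not here, even for a user granted SELECT on the table
SELECT index_name, index_type FROM all_indexes
WHERE  owner = 'APP_OWNER' AND index_type = 'LOB';
```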

Also, LOBs in Oracle 10gR2 and prior are now referred to as BasicFiles.  This is because in 11g, Oracle has made some stout improvements to the handling of LOBs via the new SecureFiles.   SecureFiles offer some incredible benefits over the old LOBs in terms of security and efficiency.  The only thing I can’t figure out is why Oracle doesn’t seem to list this as a huge reason to upgrade!

Feb 10

Safe and Secure

Reading through the online blogs, I came across a discussion of whether Oracle’s Critical Product Updates are worth the ‘trouble’ of applying.  Of course, I’m always very interested to see what DBAs have to say in regards to Information Assurance and database security in general - and I wasn’t disappointed.  Quite a few DBAs had some great, common-sense guidelines for approaching Oracle’s Critical Product Updates: a thorough system of analyzing risk and impact, coupled with thorough testing.  It may take a little discipline to keep that process going, but many were thoughtful and solid.

Surprisingly (to me, at least) I had to take umbrage with Don Burleson.  His advice:

“You DON’T have to apply patches, and sometimes patches can CAUSE unplanned issues.  The best practices are always to conduct a full stress test in a TEST environment before applying the patches in production… I wait for major releases and re-install at-once, and I only look at patches to fix specific issues.”  - Courtesy of TechTarget

I have personally applied dozens of CPUs on (literally) hundreds of systems of all flavors.  I have yet to see a problem on our production servers that was caused by a CPU.  Of course, I have seen a few weeded out through thorough and careful testing beforehand.

The problem with simply not assessing and applying these CPUs due to FUD (Fear, Uncertainty, and Doubt) comes when things go badly, or when you need to meet SOX, HIPAA, PCI DSS, FDCC, FISMA,etc. compliance.

Even on a closed system, if an insider is able to view, modify, or delete sensitive information by exploiting a vulnerability fixed by a CPU, your company will be in a very unenviable legal position, as ignoring security patches is not adequately performing ‘due care’.  Also, when operating under various compliance standards or on a DoD system, there is rarely an option to avoid applying a CPU, unless you can document a specific problem the application will induce and your plan to mitigate or eliminate the risk.  This is where a strong, well-documented process would be an excellent solution.

If a DBA has a stringent standard of apathy towards Oracle CPUs, it may be an indicator of systemic security problems in their data stores as well, and may warrant some pointed questions:

  • Are you auditing, at the very least, privilege escalation and access to sensitive objects?
  • Are you making sure that glogin and the system executables are not being tampered with?
  • Do you have a benchmark of defined roles and privileges documented?
  • Are you logging who is connecting and accessing the data, when and from where?

If the answers to these are ‘no’, you wouldn’t even be aware of a security breach if one had happened!
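For the Oracle crowd, a starting point might look something like this – the sensitive object named below is hypothetical, and your own audit policy should drive the specifics:

```sql
-- Record logons: who connected, when, and from where (see DBA_AUDIT_SESSION)
AUDIT SESSION;

-- Watch for privilege escalation
AUDIT GRANT ANY PRIVILEGE, GRANT ANY ROLE BY ACCESS;

-- Watch access to a sensitive object (hypothetical name)
AUDIT SELECT, UPDATE, DELETE ON app_owner.payroll BY ACCESS;
```

Remember that AUDIT_TRAIL must be set for the trail to be written at all, and the trail itself needs to be protected and reviewed.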

Turning the key of your database and letting it go can be a very perilous practice despite what the remote database administration service vendors may tell you.  If data is the lifeblood of your company, it really should be maintained AND protected as such.  No company wants the bad press that follows a data theft.

Brian Fedorko

Jan 17

Language

Have you ever run into this situation: You are happily scripting out or designing a new capability, performing maintenance, or providing support. Perhaps you are eating lunch, or are home in bed, soundly sleeping at 3:00 AM.

And then it happens.

Something broke somewhere, and it is database-related. No, it is not something you’ve built, maintained, or even seen - It is something from another business area, and their help is not available.

When you arrive, you are greeted by the ever-present group of concerned stake-holders, and a terminal. Will you staunch the flow of money they may be hemorrhaging? Will you bring back the data they may have lost? Will you be able to restore their system to service?

What you don’t want to do is flounder because they don’t have your favorite management software, your preferred shell, or your favorite OS.

Learn to speak the native languages!

There are 3 skill sets every good data storage professional should keep current at all times, outside of their core RDBMS interface languages:

  • Bourne shell (sh/bash)
  • vi (the Unix/Linux text editor)
  • Windows CMD shell

I guarantee that any Linux system you log into will have bash and vi. I personally prefer the Korn shell for navigation and the C shell for scripting - but the Bourne shell is on every system. Same with vi - except I really do prefer vi to anything else.

This means no matter what Linux or Unix server you are presented with, you can become effective immediately.

I’ve included the Microsoft Windows command shell because it fits in with a parallel reason for learning the native language - you can proactively increase survivability in your data storage and management systems by using the tools and utilities you KNOW will be available - even if libraries are unavailable, even if interpreters and frameworks are lost or broken.

If the operating system can boot, you can be sure the Bourne shell or CMD shell is available for use.

Knowing that, you should consider scripting the most vital system functions using the available shell script, and initiating them with the operating system’s integral scheduling tool (crontab/Scheduled Tasks). This way you can ensure that if the OS is up, your vital scripts will be executed!
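As a minimal sketch of the idea - the paths, the log tag, and the cron schedule below are all illustrative placeholders, and the real work (RMAN, sqlplus, exports) goes where the comment sits:

```shell
#!/bin/sh
# Portable /bin/sh wrapper for a vital task; no bashisms, so it runs anywhere.
# Schedule it from cron, e.g.:
#   0 2 * * * /u01/scripts/nightly_check.sh >> /u01/logs/nightly_check.log 2>&1
LOGTAG="nightly_check"

log() {
    # Emit a timestamped log line using only Bourne-shell features
    echo "`date '+%Y-%m-%d %H:%M:%S'` [$LOGTAG] $*"
}

log "starting vital checks"
# ... invoke the real work here: rman, sqlplus, export scripts, etc. ...
log "finished"
```

Because everything above is plain Bourne shell and cron, it keeps running even when fancier frameworks are broken.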

And who doesn’t want that?

Dec 20

Bad Things CAN Happen

I was conversing with a colleague of mine who was working with some Oracle DBAs who were deciding to abandon Oracle’s Recovery Manager and replace it with a 3rd party disk-imaging ‘backup’ solution. Not augment RMAN, but replace it entirely.

I was really surprised. Really, REALLY surprised!

After mulling over all the concerns, I put together some items you may want to consider before heading down this path:

  • Are you operating in ARCHIVELOG mode? If you are not, YOU WILL LOSE DATA.
  • If you are in ARCHIVELOG mode – What happens to the old archivelogs? Deleting the old ones before the next RMAN Level 0 renders the ones you have useless (except for log mining).
  • If you are in NOARCHIVELOG mode, how far back can you troubleshoot unauthorized data modification or application error? How quickly do your redo logs switch? – Multiply that by the number of groups you have, and you have your answer.
  • How do you address block corruption (logical AND physical) without RMAN? With a RMAN-based DR solution, block recovery takes ONE command. No data loss, no downtime. If you take a snapshot using 3rd party tools – Your backups now have that same block corruption. Where do you go from there?
  • If disk space is an issue, do you use the AS COMPRESSED BACKUPSET argument to reduce backup size? Do you pack the archivelogs into daily level ones? I’ve found ways to optimize our Oracle RMAN backups so we can cover 2 weeks with the same disk space that used to cover 2 days.
  • How do you monitor for block corruption? (Waiting for something to break is not valid instrumentation) I check for block corruption automatically, every day, by using RMAN and building it into my daily database backup scripts.
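For reference, the compression and corruption checks mentioned above are one-liners in RMAN; this is a sketch, since retention policy and flags like DELETE INPUT depend on your environment:

```sql
RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

-- Then, from SQL*Plus, any blocks flagged by the validate run show up here:
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
```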

NOTE: Logical corruption happens. Even on a SAN, even on a VM. VMs can crash, power can be lost. I’ve experienced 2 incidents with block corruption in the recent quarter. Of course, since I built the Disaster Recovery system around RMAN – We caught the corruption the next day and fixed it with ZERO downtime and ZERO data loss.

Point-in-Time Recovery (PITR) is enabled by RMAN - ALL disk-imaging backup solutions lack this capability. If you are relying solely on a snapshot backup, you will lose all the data since the last snapshot.

Without tablespace PITR, you have to roll ALL the data in the database back. If you have multiple instances and are using a server snapshot with no RMAN, ALL the databases on that server will lose data! This is usually not acceptable.

Lastly, how much testing have you done with the snapshot solution? REAL testing. Have you taken a snapshot during continuous data change? We tried snap-shotting the database server using 3 different pieces of software. NONE took a consistently consistent, usable snapshot of the database. Sometimes one did, if we were lucky and the DB was quiet. Is it acceptable to only sometimes get your client’s/company’s data restored?

Remember, the key is a multi-layered DR strategy (where disk imaging and snap-shotting IN CONJUNCTION with RMAN is incredibly effective!) and continuous REAL WORLD testing.

As a parting shot, in case you were wondering: the ‘DBAs’ had decided to rely solely on a disk-imaging backup solution not because they felt it had more to offer, or because it was tested to be more effective, but because they felt RMAN was difficult to use…

Brian Fedorko

Oct 14

Safe and Secure

The Oracle October Critical Product Update (CPU) was released yesterday - it includes 15 security fixes for the core RDBMS, including a fix for a vulnerability allowing DB access without authentication.

Despite the high impact, that particular vulnerability only scored a 4.0 in the Common Vulnerability Scoring System v2.0 (CVSS v2). The vulnerability allows a buffer overflow in the Apache Connector component (mod_weblogic) of a WebLogic Server to be exploited to run external code. It affects a broad spectrum of WebLogic Server versions (6.1-10.0MP1); however, Oracle had addressed this, along with providing guidance for a workaround, back in July with CVE-2008-3257.

Another point of interest - A new post-installation script, catbundle.sql, is available with Critical Patch Updates for Oracle Database on Microsoft Windows. This script replaces catcpu.sql and catcpu_rollback.sql. For more information, see OracleMetaLink Note# 605795.1, Introduction to catbundle.sql. For UNIX/Linux Critical Patch Updates, catbundle.sql was released with CPUJUL2008.

Remember, Oracle CPUs are cumulative, so even if you have never applied one to your system, you can catch up on all the bug and security fixes entirely with the application of the latest CPU!

The next scheduled CPU will be released on 13 January 2009.

Jul 15

Safe and Secure

It is time once again to eliminate bugs and increase the security posture of our Oracle databases. The Advisories and Risk Matrices can be found on Oracle Technology Network. The full availability information is found at Oracle Metalink under DocID# 579278.1

Points of Interest:

  • This CPU contains 11 security fixes for the Oracle Enterprise Database Server
  • None of the security holes for the Enterprise DBMS are remotely exploitable without authentication
  • Oracle Application Express requires no security fixes (This product continues to impress me)
  • ALL Windows platforms running Oracle Enterprise DB Server v10.2.0.3 will have to wait until 22-July-2008 for their CPU
  • Support for Solaris 32-bit and Oracle Enterprise DB Server v10.2.0.2 seems to have been pulled! There’s no CPU for these, and none planned for the October 2008 Critical Product Update as per Oracle Metalink DocID# 579278.1.

Don’t forget to read the patch notes, test thoroughly, and check to make sure you’re using the proper version of OPatch!

Next CPU: 14-October-2008

Brian Fedorko

Jun 16

Paper Cash Money!

Oracle’s latest price list was published today!

Oracle Technology Global Price List

There are increases scattered throughout the various licensing options, most notably:

Oracle Enterprise Edition

  • $7500 increase in the base per-processor licensing
  • $150 increase in per-user licensing

Oracle Standard Edition

  • $2500 increase in the base per-processor licensing
  • $50 increase in per-user licensing

Oracle Standard Edition One

  • $805 increase in the base per-processor licensing
  • $41 increase in per-user licensing


  • $300 increase in the base per-processor licensing
  • $60 increase in per-user licensing

Active Data Guard

  • $800 increase in the base per-processor licensing
  • $20 increase in per-user licensing

Advanced Security, Partitioning, Advanced Compression, Real Application Testing, Label Security

  • $1500 increase in the base per-processor licensing
  • $30 increase in per-user licensing

Diagnostics Pack, Tuning Pack, Change Management Pack, Configuration Management Pack, Provisioning Pack for Database

  • $500 increase in the base per-processor licensing
  • $10 increase in per-user licensing

Internet Application Server Enterprise Edition

  • $5000 increase in the base per-processor licensing
  • $100 increase in per-user licensing

Enterprise Single Sign-On Suite

  • $10 increase in per-user licensing

This is certainly not an exhaustive list and I’m sure that there are many, many other changes. Rounding up your Enterprise’s licensing and product use information for acquisition planning purposes may be a prudent and proactive task for this month!

Brian Fedorko

Jun 15

I have always enjoyed the teaching and wisdom of Dr. Stephen Covey (especially if he does not litigate for derivative works!). He has a real knack for capturing introspective how-to lessons detailing the simplicity of living a good and productive life.

In homage to Dr. Covey’s amazing work, I’d like to narrow the scope but offer lessons with a similar impact for database administrators – expanding on the not-so-obvious to illuminate the good path to success.

Habit One - Multiplex and Mirror Everything!

Mirror, Mirror...

Multiplex and mirror all of your critical files – Is there a reason not to? Today’s SANs have gone a long way to provide for redundancy and reduce I/O contention, but they are definitely not an excuse to abandon this basic key to database survivability!

The SAN Trap: SANs are often used as a panacea for data availability. However, have you taken a close look at your SAN to determine how robust and survivable it really is?

  • How many LUNS are your files spread across?
  • What RAID level are you using and how many simultaneous disk failures will it take to make your files irretrievable? (Anything under 20% incurs quite a bit of risk).
  • Do you have redundant controllers?
  • Redundant switches?

Even the most survivable storage setup is still vulnerable to logical corruption, and the vastly more common, human error (“I just deleted all the .LOG files to save some space!”).

Conversely, for very slim installs, you may only have a single disk or LUN. While such a situation greatly increases risk, reality dictates that sometimes the circumstances are unavoidable. Until you can grow your storage footprint, multiplexing and mirroring (across directories) becomes even more critical.

Mirroring and multiplexing your control files, redo logs, archived redo logs, and RMAN backups will significantly increase the likelihood of a successful recovery, should the need arise (See Habit 5 – Preparation). The procedure is extremely easy, and the files generally take up very little space, if properly scaled and tuned to your needs.

Here are some best practices for you to tailor to your needs:

  • Control Files: Multiplex two to three times and mirror over two to three disks/LUNs/directories
  • Redo Logs: Three to four members per group with two to three groups spread across disks/LUNs/directories
  • Archived Redo Logs: Mandatory mirroring between at least 2 disks/LUNs/directories
  • RMAN Backup Files: Mirror between at least two disks/LUNs/directories
  • SPFILE: Periodically create a PFILE from the SPFILE and archive it, along with your backups and control file snapshots
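As a sketch of how easy the procedure is – the paths below are illustrative, so adjust them to your own mount points:

```sql
-- Add a second control file location (copy the file, then restart to take effect)
ALTER SYSTEM SET control_files =
  '/u01/oradata/PROD/control01.ctl',
  '/u02/oradata/PROD/control02.ctl' SCOPE = SPFILE;

-- Add a redo log member on a second disk to an existing group
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/PROD/redo01b.log' TO GROUP 1;

-- Mirror archived redo logs across two mandatory destinations
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch/PROD MANDATORY';
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/arch/PROD MANDATORY';

-- Preserve the SPFILE as a readable PFILE alongside your backups
CREATE PFILE = '/u02/backup/PROD/init_PROD.ora' FROM SPFILE;
```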

A database administrator worth their salt NEVER loses data, and the best way to maintain this is to avoid a position where data loss is likely. Mirroring and Multiplexing are one of our most effective tools to reduce risk.

Brian Fedorko

May 31

Like responsibility, it grows!

The goldfish always grows to the size of the bowl. If you’re a DBA goldfish, you’ll probably script out repetitive tasks until the bowl gets bigger. And then they feed you more databases from various business areas, and you grow some more. How is that for a strained analogy?

Any Oracle DBA has been there - after your initial herd of databases is stable, happy, and well-fed, people notice. And then you reap the true reward of good work: more work! Unfortunately, this is usually when someone fishes up a stove-piped database that has become very important internally. You know, the one put together by someone who left 2 years ago. No Critical Product Updates, one or two control files, and the telling 5Mb redo logs that switch every 10 seconds. But you gladly take it in anyway…

A bit of work and now the database is chugging along like a champ! Tuned, Optimized, Mirrored, multiplexed, in ARCHIVELOG mode, and integrated into your RMAN backup scripting.

Everything seems fine, but is it?

Surely you could easily and successfully recover if you had to this very minute, right?


Is logging of all operations enforced on this database, or at least in the user’s tablespace? Use the following to find out:
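On 9iR2 and later, a couple of dictionary queries answer this, and FORCE LOGGING closes the hole:

```sql
-- Database-wide setting
SELECT force_logging FROM v$database;

-- Per-tablespace logging attributes
SELECT tablespace_name, logging, force_logging
FROM   dba_tablespaces;

-- Enforce logging for everything, regardless of object-level NOLOGGING
ALTER DATABASE FORCE LOGGING;
```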


If forced logging is not or cannot be applied to the database, there is a risk that NOLOGGING operations may have been performed on the database’s objects. Common operations run under NOLOGGING include index builds and rebuilds, direct-path inserts, direct loads with SQL*Loader, and partition manipulation. Once a NOLOGGING operation has been performed, we cannot roll forward past that change in that tablespace! If the tablespace only contains indexes, we’ll suffer downtime while the indexes rebuild and the database returns to a reasonable level of performance. If it contains objects holding data, we risk losing the transactions since the NOLOGGING operation.

A good first line of defense is to include REPORT UNRECOVERABLE in your RMAN backup scripts and stay on top of the logs - or test for the expected return and pipe the results to your dashboard or monitoring software, like Big Brother by Quest. This will catch all manner of problems before they become critical:

RMAN> report unrecoverable;
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
4    full or incremental     X:\ORADATA\DATA01\TESTDB\TEST01.DBF

Here’s a quick script I wrote to find when the last NOLOGGING operation occurred (Note: Output has been edited for page fit):

set LINESIZE 120
PROMPT Check the Change Numbers and times against your backups to determine
PROMPT if non-logged operations have occurred
select  d.NAME as DBF_NAME,
        t.NAME as TBS_NAME,
        d.UNRECOVERABLE_CHANGE# as UNREC_CHG#,
        to_char(d.UNRECOVERABLE_TIME, 'Dy DD-Mon-YYYY HH24:MI:SS') as UNREC_TIME
from    V$DATAFILE d
join    V$TABLESPACE t
on      d.TS# = t.TS#
order by t.NAME;

Check the Change Numbers and times against your backups to determine
if non-logged operations have occurred

DBF_NAME             TBS_NAME  UNREC_CHG#  UNREC_TIME
-------------------- --------- ----------- ------------------------
J:\...\SYSAUX01.DBF  SYSAUX    0
J:\...\SYSTEM01.DBF  SYSTEM    0
J:\...\TEST01.DBF    TEST      6271597     Tue 02-Jun-2008 18:30:46
J:\...\USERS01.DBF   USERS     0

After that, just make sure your last Level 0 backup is newer than the times listed, and be aware that Point-in-Time Recovery will not be possible for the window between a NOLOGGING operation and the next Level 0 backup.

Be sure to set up lines of communication and coordination in the future, so the risk of not being able to recover the entire database to the last transaction is reduced.

Brian Fedorko

May 27

A planned installation always requires... Plans!

Designing the data structure

If there were a more crucial time for a Database Administrator to team with and guide the application developers, I cannot think of one. Getting this first step as correct as possible will save rework ranging from an inordinate amount of time dedicated to tuning to a total application overhaul. This translates into your company/client hemorrhaging thousands of hours and hundreds of thousands of dollars of unnecessary spending… or saving that very same amount. This is what a professional DBA brings to the table. But how do you know if you are doing it well?

You design the database for the data. It is ALWAYS about the data, and how the user interacts with the data. Requirements are a great place to start if they are well-written, but mapping out use cases with the developer and the user is simply the best way to go. By exhaustively examining all the use cases, your structure will practically write itself. A solid understanding of the use cases will tell you:

  • How transactional and dynamic your database will be
  • What data will be input, when, and how
  • Where relationships and data constraints need to be implemented
  • What data will be extracted and how it will be grouped
  • Where locking issues will manifest
  • What data may need special handling (HIPAA, SOX, DoD Sensitive, Privacy Act, etc.)

The use cases, combined with a bit of foresight and communication, will let you determine whether the data will need warehousing in the future, whether the system will require inordinate scalability, and whether alternate operational sites will be necessary. Initially designing the data system for end-game use will help you evolve the system as it is developed, rather than bolting on solutions in an ad-hoc manner as the needs become critical.

Common Pitfalls to Avoid:

Over-Normalization: There is no shame in under-normalizing your database if you have a solid reason to skip some normalization opportunities. Commonly, you can improve performance and maintainability – and if your data will eventually be warehoused, it will need to be (sometimes greatly) denormalized. Efficiently converting your transactional data storage structure into a warehoused structure optimized for data mining and reporting truly requires a planned, engineered effort.

The Developer Mindset: An excellent developer with a focus on efficiency and optimization is careful to create and use resources only as long as absolutely necessary. However, an excellent data structure must be extremely static. Creation and destruction of tables is not only a hallmark of suspect design, but also creates a host of security and auditing challenges.

Data Generation: Any data created for storage must be carefully and thoroughly scrutinized. Fields of created data, stored to increase application performance, can reduce the performance of the entire database. If this practice is prevalent enough, storage requirements can increase dramatically! I have seen very few instances where the data manipulation is not best handled during retrieval.

Incremental Primary Keys: Iterative ID fields (‘Auto-Number’) in transactional tables must be avoided! Not only do they compromise our goal of not creating or destroying stored data, but they wreak havoc on any sort of multi-master, bi-directional replication (ex. Oracle Streams, Advanced Replication, etc.). For example, if two sites are being used to accept transactions, the chances are excellent that the sites will receive separate transactions at the same time. If both create their Primary Key from the last record, incremented by one, they will BOTH have the same ID and a collision will occur.

Sure, you could design logic to constantly monitor for this issue, at the cost of additional overhead. I’ve also seen the transactions staggered by ‘odds and evens’. But what happens when you add an additional site? Your scalability is inherently limited.

There are very few instances where a natural key cannot be drawn from existing data. Usually, a timestamp combined with 1 or 2 data fields (ex. PRODUCT_ID, LOCATION, SSN - if protected, etc.) will produce an excellent, unique key. In the very RARE cases where it is impossible to generate a unique natural key, the Universal/Global Unique Identifier (UUID/GUID) is a viable alternative. All major databases support generating this ID based on timestamp, MAC address, MD5 hash, SHA-1 hash, and/or random numbers, depending on the version used. Given that there are 3.4 × 10^38 combinations, it is unlikely that you’ll run out. Ever. Every major DBMS has a utility to generate a UUID/GUID - SYS_GUID() in Oracle, UUID() in MySQL, and NEWID() in T-SQL. There are also implementations for creating the UUID/GUID in C, Ruby, PHP, Perl, Java, etc.
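As a sketch in Oracle – the table and column names are hypothetical – a GUID surrogate key can be generated right in the column default, with no sequence to collide across sites:

```sql
-- Hypothetical transactional table for the rare case where no natural key exists
CREATE TABLE site_txn (
  txn_id     RAW(16)   DEFAULT SYS_GUID() NOT NULL,   -- 16-byte GUID, unique across sites
  product_id NUMBER    NOT NULL,
  txn_time   TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL,
  CONSTRAINT site_txn_pk PRIMARY KEY (txn_id)
);
```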

This is just a light touch on creating a solid, production-grade data structure, but it is a good start. We’ll have plenty of room to explore some additional facets and expand on some of the items mentioned in further articles. Always remember, a good DBA must synergize with the development team, bringing different mindsets with distinct goals together to provide a robust, efficient solution.

Brian Fedorko