DBAzine.com
Altering the Master Table in a Snapshot Replication Environment without Recreating the Snapshot

by Arup Nanda

Learn how to alter a master table in a read-only or updatable snapshot replication setup without dropping and recreating the snapshot or doing a full refresh, both of which can be extremely time- and resource-consuming. The technique described here can reduce the time and effort required by more than 98 percent.

One of the biggest challenges in administering a snapshot replication environment (also called materialized view replication) is maintaining the snapshot after a modification of the master table, such as adding columns or changing the data type of a column. After a column is added to the master table, the only way the new column can be replicated to the replication site is by dropping the snapshot and recreating it. If the master table is large, the recreation process may take several hours, since it brings all the data over the network. It also requires large rollback segments at both the master and replication sites, and it may lead to the ORA-01555 "snapshot too old" error if the table is heavily accessed and modified. Beyond a certain size, it may be impossible to build the snapshot this way at all; at that point, the only option is a full export and import of the table, recreating the snapshot using the PREBUILT table option.

This problem does not arise in a multi-master setup. But while snapshot replication offers numerous advantages in setup and administration, the inability to alter a master table easily poses real challenges for DBAs trying to perform relatively trivial tasks such as adding columns or changing data types. This article presents a way to achieve these objectives without recreating the snapshot or performing a full refresh.

Background

To best illustrate the technique, let's start with an example. Suppose we have two databases, PROD and REPL, the production and replication databases, respectively. All activity happens on the production site, whereas the replication site is used either as a reporting database only (read-only snapshot) or as a separate data activity point with changes periodically pushed to the master site (updatable snapshot). The technique described in this article applies to both situations. (Note: snapshot replication is also called materialized view replication; in this article, the terms snapshot and materialized view are used interchangeably.)

Now suppose there are several tables under the schema ANANDA. This example focuses on a table named TEST1, with about two million rows and four GB in total size. The table has three columns: COL1 NUMBER(9), COL2 CHAR(1000), and COL3 CHAR(900). The REPL database has a snapshot called TEST1 defined on the same table. The master site has a replication group called TEST1, which has only one object, TEST1. The replication group is owned by the schema REPADMIN. To set up the replication, use the script given in Listing 1 at the master site. On the snapshot site, the snapshot TEST1 is included in a snapshot group called TEST1, owned by the schema MVADMIN. It is assumed that there is a public database link from each database to the other of the same name, and that the global_names parameter in init.ora is set to true.
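Listing 1 itself is not reproduced on this page. As a rough, hypothetical sketch of what such a master-site script typically contains (the schema, group, and object names come from the text above, but every call and parameter here is an assumption, not the article's actual listing):

```sql
-- Hypothetical sketch of a master-site setup (not the article's Listing 1).
-- Run on PROD; ANANDA, REPADMIN, and group TEST1 are the names used in the text.
CREATE SNAPSHOT LOG ON ananda.test1;  -- log of changed rows, needed for fast refresh

BEGIN
   -- Create the replication group and add the master table to it
   DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'TEST1');
   DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
      gname => 'TEST1', type => 'TABLE',
      sname => 'ANANDA', oname => 'TEST1',
      use_existing_object => TRUE, copy_rows => FALSE);
   -- Generate the internal replication support objects, then start activity
   DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
      sname => 'ANANDA', oname => 'TEST1', type => 'TABLE');
   DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'TEST1');
END;
/
```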

The goal is to alter the table TEST1 at master site adding a new column called COL4 with CHAR(1) and changing the column COL3 to CHAR(1000).

The Usual Method

For the purpose of demonstration, we first need to set up the replication environment. Listing 2 contains the statements to create the snapshot site. For convenience of discussion, the script is split into several sections, the most important of which, for this article, is Section 3, Creating the Snapshot. This section actually creates the snapshot by pulling the data across the network from the master site. For a small table, or during periods of light load, this approach might work without filling up the rollback segments or choking the network; for large tables, however, this method of transferring data might fail. Therefore, we follow a slightly different approach: using prebuilt tables.

In this approach, you drop the snapshot TEST1, if it exists, and create a table of the same name in the schema ANANDA with the same structure as the table at the master site, but without data. This is easily done by running CREATE TABLE TEST1 AS SELECT * FROM TEST1@PROD WHERE 1 = 2. After running the script in Listing 1 at the master, you export the table from PROD and import it into REPL. This might take quite a while, but considerably less time than the snapshot creation method. The table data can be brought over by other means, too; e.g., by creating a delimited text file from the production database and loading it into the replication database with the SQL*Loader DIRECT path option.
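Sketched out, this preparation might look like the following (a hedged illustration; passwords and file names are placeholders, and the exp/imp commands run from the operating system shell, not SQL*Plus):

```sql
-- On REPL, as ANANDA: build an empty shell table with the master's structure
DROP SNAPSHOT test1;                      -- only if an old snapshot exists
CREATE TABLE test1 AS
   SELECT * FROM test1@prod WHERE 1 = 2;  -- copies structure only, zero rows

-- Then move the data outside SQL*Plus, e.g. with export/import:
--   exp userid=ananda/<password>@prod tables=TEST1 file=test1.dmp
--   imp userid=ananda/<password>@repl tables=TEST1 ignore=y file=test1.dmp
```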

Once the table exists at the replication site, the snapshot can be created on it simply by making a small change to the script in Listing 2: change Section 3 as described in Listing 3. Notice the clause ON PREBUILT TABLE. This instructs Oracle that a table called TEST1 already exists and should be used as the segment for the snapshot named TEST1, rather than creating a new segment. The rest of the script in Listing 2 simply makes the snapshot ready for replication.
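Listing 3 is not shown on this page, but the changed Section 3 presumably amounts to something like this sketch (the refresh options are assumptions):

```sql
-- Hypothetical form of the revised Section 3: reuse the existing segment
CREATE SNAPSHOT ananda.test1
   ON PREBUILT TABLE      -- take over the existing TEST1 table as the segment
   REFRESH FAST           -- use the master's snapshot log for incremental refresh
   AS SELECT * FROM ananda.test1@prod;
```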

After the setup had been running for a while, the table TEST1 was altered as described earlier: the column COL3 was changed to CHAR(1000) and a new column COL4 CHAR(1) was added. These changes need to be reflected at the snapshot site, too. The conventional approach is to drop the snapshot at the snapshot site and run the whole process again from the beginning; i.e., drop the table TEST1, recreate it, import the rows, create the snapshot on the prebuilt table, and finally generate replication support. With a large table, this can cause several problems: running out of rollback segments, taking considerable time, and degrading performance. In our case it took more than four hours, including the table alters; of course, the time will vary with your environment.

The Alternative Approach

When a snapshot is created on a table using the PREBUILT option, the snapshot simply takes over control of the segment defined for the table. When the snapshot is dropped, the segment is not dropped; rather, control reverts to the original table. Since the segment is the same, any data changes that occurred while the snapshot existed remain in the segment even after the snapshot is dropped. For example, say the value of COL2 in the original table for COL1 = 1 was 'A'. After the snapshot was created on the table, a refresh changed the value to 'B', because it had been changed to 'B' at the master site. The snapshot was then dropped, and the segment reverted to a table called TEST1. At this point, the value of COL2 for COL1 = 1 is still 'B', not 'A'. Thus, the data in the segment remains exactly as it was the moment before the snapshot was dropped. The alternative approach exploits this fact.

Since the data remains the same, there is no need to drop the table and rebuild it from the master. We use this to convince the replication machinery that the snapshot was never dropped, so that a fast refresh still works. One more detail must be taken care of: the snapshot log entries already present on the master, which are needed for the fast refresh. When a snapshot is recreated on a prebuilt table, these entries on the master table are deleted. We must therefore save them before the deletion and reinsert them after the snapshot is ready for replication.


Detailed Steps

      • At the master site, the table can be altered using SQL. Logging in as user ANANDA, issue the statements
        alter table test1 add (col4 char(1));
alter table test1 modify (col3 char(1000));
      • Since DML may still be active on the table, you may have to wait until the table can be locked exclusively before the columns can be added and modified.
      • At the replication site, we need to stop the replication pull for awhile by shutting down the job that does it. Logging in as user MVADMIN, issue
        select job from user_refresh where rname = 'TEST1'; 
      • Note the job number. As assumed at the beginning, the refresh group is named TEST1. Mark the job broken by issuing
        exec dbms_job.broken(<jobnumber>,TRUE);
commit;
      • It's very important to issue a commit here. Also make sure no sessions are currently running this job; check by issuing
        select sid from dba_jobs_running where job = <jobnumber>;
      • If you see any session, wait for it to finish or kill it before proceeding.
      • Then log in as user ANANDA and issue
        DROP SNAPSHOT TEST1;
      • This drops the snapshot but leaves the table in place. Then issue
        alter table test1 add (col4 char(1));
alter table test1 modify (col3 char(1000));

The rest of the operation has been placed in a script as shown in Listing 4. Most of the script is the same as in the previous approach; the differences are explained here.

Section 3 is new. When a snapshot is built on an existing table and replication support is enabled on it, Oracle assumes the snapshot has gone through a complete refresh, so it deletes the snapshot log entries at the master site. This is not acceptable in our situation, since the master table is still undergoing DML activity and generating snapshot log entries; even with no DML, there could be unapplied snapshot log entries that must be preserved. Therefore, we move the entries from the master table's snapshot log into a temporary table called mlog_bak. If the table name is long, Oracle uses only its first 20 characters to form the name of the snapshot log table. Although our table name is only five characters, to keep the script general we take the first 20 characters of the name and prefix MLOG$_ to derive the snapshot log table's name.
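As a generalized sketch of that naming logic (an illustration, not the article's Listing 4; mlog_bak is the temporary table named in the text):

```sql
-- Derive the snapshot log name from the first 20 characters of the table name,
-- then preserve its entries in mlog_bak before the snapshot is recreated.
DECLARE
   l_log_table VARCHAR2(30);
BEGIN
   l_log_table := 'MLOG$_' || SUBSTR('TEST1', 1, 20);  -- MLOG$_TEST1 here
   EXECUTE IMMEDIATE
      'CREATE TABLE mlog_bak AS SELECT * FROM ' || l_log_table;
END;
/
```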

Sections 4 through 6 are the same as in the previous method.

      • After replication support is enabled on the table, the snapshot log entries preserved earlier need to be restored so that they can drive a fast refresh. This is achieved in Section 7.
      • After all these steps are executed, issue a fast refresh of the snapshot to confirm that fast refresh works. Logging in as ANANDA, issue
        execute DBMS_snapshot.refresh('TEST1','F')
      • This performs a fast refresh of the snapshot using the snapshot log entries we just restored. The snapshot is now ready for fast refresh, with the modified table structure.
      • Finally, re-enable the job marked broken earlier. Logging in as MVADMIN, issue the following statement, using the job number obtained before. (Running the job successfully also clears its broken flag.)
        execute DBMS_job.run(<jobnumber>)
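Put together, these wrap-up steps might be sketched as follows (the restore statement for Section 7 is an assumption based on the text, and the job number remains a placeholder):

```sql
-- As ANANDA: restore the preserved log entries (Section 7), then fast refresh
INSERT INTO mlog$_test1 SELECT * FROM mlog_bak;
COMMIT;
EXECUTE DBMS_SNAPSHOT.REFRESH('TEST1','F')   -- 'F' = fast (incremental) refresh

-- As MVADMIN: re-enable the refresh job; a successful run clears the broken flag
EXECUTE DBMS_JOB.RUN(<jobnumber>)
```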

Total elapsed time, including the table alters, was 10 minutes.

Conclusion

As you can see, the time to alter the master table was reduced from four hours to ten minutes, a 98 percent savings in time and an even more substantial savings in DBA effort. Of course, your results will vary with table size and network transfer speed. Nevertheless, no matter how fast the network, unless the table is very small the reduction in time and effort will be substantial.

--

Arup Nanda has been an Oracle DBA for 10 years. He is the founder of Proligence, Inc., a New York-area company that provides specialized Oracle database services. He is also an editor of SELECT, the journal of the International Oracle User Group. He has written Oracle technical articles for several national and international journals, including SELECT, Oracle Scene, and SQLUPDATE, and has presented at conferences such as Oracle Technology Symposium, Atlantic Oracle Training Conference, and IOUG Live!, among others. Send feedback to Arup about this article at arup@proligence.com. Based on the feedback, an updated copy of this article can be found at www.proligence.com.


Contributors : Arup Nanda
Last modified 2005-04-20 01:48 PM