DBA Sensation

September 7, 2010

Oracle 10g ASM/RAW storage migration

Filed under: [RAC] — zhefeng @ 9:47 am

Objective:
migrate the entire shared storage from the old SAN to the new SAN without re-installing the whole Oracle RAC stack

Scenario:
1. Current structure
[Nodes]
## eth1-Public
10.0.0.101 vmrac01 vmrac01.test.com
10.0.0.102 vmrac02 vmrac02.test.com
## eth0-Private
192.168.199.1 vmracprv01 vmracprv01.test.com
192.168.199.2 vmracprv02 vmracprv02.test.com
## VIP
10.0.0.103 vmracvip01 vmracvip01.test.com
10.0.0.104 vmracvip02 vmracvip02.test.com

[Storage]
Both Oracle homes are on local storage:
ORACLE_HOME=/database/oracle/10grac/db
CRS_HOME=/database/oracle/10grac/crs

Shared LUN layout (3 partitions: 2 x 256 MB for OCR and Voting, 1 x 20 GB for ASM)
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 32 257008+ 83 Linux
/dev/sdb2 33 64 257040 83 Linux
/dev/sdb3 65 2610 20450745 83 Linux

OCR and Voting are on RAW device: /dev/sdb1 /dev/sdb2

ASM disks
bash-3.1$ export ORACLE_SID=+ASM1
bash-3.1$ asmcmd
ASMCMD> lsdg
State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED EXTERN N N 512 4096 1048576 19971 17925 0 17925 0 DG1/

2. New storage (sdc, 10 GB)
1). new LUN added
[root@vmrac01 bin]# fdisk -l

Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 535 4192965 82 Linux swap / Solaris
/dev/sda3 536 3263 21912660 83 Linux

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 32 257008+ 83 Linux
/dev/sdb2 33 64 257040 83 Linux
/dev/sdb3 65 2610 20450745 83 Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

2). Partition the new LUN into 3 partitions
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 32 257008+ 83 Linux
/dev/sdc2 33 64 257040 83 Linux
/dev/sdc3 65 1305 9968332+ 83 Linux

3). Clone data from the old raw disks
**Shut down the database and CRS first to make sure the raw disks do not change during the copy!
#dd if=/dev/raw/raw1 of=/dev/sdc1
514017+0 records in
514017+0 records out
263176704 bytes (263 MB) copied, 252.812 seconds, 1.0 MB/s

#dd if=/dev/raw/raw2 of=/dev/sdc2
514080+0 records in
514080+0 records out
263208960 bytes (263 MB) copied, 267.868 seconds, 983 kB/s
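Before rebinding, the two dd copies can be sanity-checked by comparing checksums. A minimal sketch (not part of the original procedure), demonstrated on scratch files; on the real system substitute /dev/raw/raw1 for the source and /dev/sdc1 for the destination:

```shell
# Sketch: verify a dd clone by comparing checksums of source and copy.
# Scratch files stand in for the raw device and the new partition.
src=$(mktemp); dst=$(mktemp)
printf 'OCR contents stand-in' > "$src"   # fake source "device"
dd if="$src" of="$dst" 2>/dev/null        # same invocation as the migration
if [ "$(cksum < "$src")" = "$(cksum < "$dst")" ]; then
  echo "clone verified"
fi
```

The same comparison against the real devices catches a truncated or misdirected copy before CRS ever sees the new LUN.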

4). "Cheat" Oracle by re-binding the raw devices to the new partitions on both nodes
**Old binding
Step1: add entries to /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"

Step2: For the mapping to take immediate effect, run the commands below
#raw /dev/raw/raw1 /dev/sdb1
#raw /dev/raw/raw2 /dev/sdb2

Step3: Run the following commands and add them to the /etc/rc.local file.
#chown oracle:dba /dev/raw/raw1
#chown oracle:dba /dev/raw/raw2
#chmod 660 /dev/raw/raw1
#chmod 660 /dev/raw/raw2
#chown oracle:dba /dev/sdb1
#chown oracle:dba /dev/sdb2
#chmod 660 /dev/sdb1
#chmod 660 /dev/sdb2

**New binding on both nodes
Step1: edit /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw2 %N"

Step2: apply the mapping immediately
#raw /dev/raw/raw1 /dev/sdc1
#raw /dev/raw/raw2 /dev/sdc2

Step3: set permissions and add the commands to /etc/rc.local
#chown oracle:dba /dev/raw/raw1
#chown oracle:dba /dev/raw/raw2
#chmod 660 /dev/raw/raw1
#chmod 660 /dev/raw/raw2
#chown oracle:dba /dev/sdc1
#chown oracle:dba /dev/sdc2
#chmod 660 /dev/sdc1
#chmod 660 /dev/sdc2
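Since both nodes need byte-identical rule entries, the rewrite can be scripted once and copied out. A hedged sketch: it writes to a temporary file (the real target would be /etc/udev/rules.d/60-raw.rules, and the device name sdc is specific to this environment):

```shell
# Sketch: generate the two rebind rules in one place, then distribute.
# A temp file stands in for /etc/udev/rules.d/60-raw.rules.
rules=$(mktemp)
cat > "$rules" <<'EOF'
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw2 %N"
EOF
grep -c 'KERNEL' "$rules"   # prints 2
```

Generating the file once avoids the classic two-node typo where one node still binds to sdb after a reboot.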

5). Start up CRS and the Oracle database and check the database: everything works fine after switching the raw disks!

3. ASM disk group migration
1). Mark the new disk sdc3 on one node
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc3
Marking disk "/dev/sdc3" as an ASM disk: [ OK ]

2). Scan disks on the other node
[root@vanpgvmrac02 bin]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]

3). Now verify that the new disk is visible on both nodes
[root@vmrac01 disks]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

[root@vmrac02 bin]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

4). Add the new disk to the disk group (in the ASM instance)
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> alter diskgroup DG1 add disk 'ORCL:VOL2';
-- wait for rebalancing
SQL> select * from v$asm_operation;

5). Remove the old disk from the disk group
SQL> alter diskgroup DG1 drop disk VOL1;
-- wait until rebalancing has finished
SQL> select * from v$asm_operation;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ --------- ----- ----- ------ ----- -------- -------- -----------
           1 REBAL     RUN       1      1     2     1374       30          45
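Both the add and the drop trigger a rebalance, and the drop is not done until the REBAL operation has drained. As a sketch, the safe pattern is to poll V$ASM_OPERATION until the row disappears before any cleanup:

```sql
-- Sketch: poll until the rebalance row is gone; only then proceed
-- to deletedisk and partition cleanup.
select group_number, operation, state, sofar, est_work, est_minutes
  from v$asm_operation
 where operation = 'REBAL';
-- "no rows selected" means the rebalance has completed.
```

Dropping the old LUN while EST_MINUTES is still non-zero risks losing extents that have not yet been relocated to VOL2.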

6). Verify the database and ASM: everything is OK!

7). Clean up the old disk configuration
[root@vmrac01 bin]# /etc/init.d/oracleasm deletedisk VOL1
Removing ASM disk "VOL1": [ OK ]
[root@vmrac01 bin]# /etc/init.d/oracleasm listdisks
VOL2

[root@vmrac02 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@vmrac02 ~]# /etc/init.d/oracleasm listdisks
VOL2

8). Wipe the partitions off sdb.
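One way to wipe the old partitions is to zero the first sector, which holds the MBR partition table. That step is destructive, so this sketch runs against a scratch file; substitute /dev/sdb only once CRS, the database, and ASM no longer reference the old LUN:

```shell
# Sketch: zeroing the first 512-byte sector removes the MBR partition table.
# A scratch file stands in for /dev/sdb.
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=512 count=4 2>/dev/null   # stand-in "disk"
dd if=/dev/zero of="$img" bs=512 count=1 conv=notrunc 2>/dev/null
# count non-zero bytes left in the first sector (expect 0):
dd if="$img" bs=512 count=1 2>/dev/null | tr -d '\0' | wc -c
```

conv=notrunc matters here: without it, dd would truncate the file (or be a no-op on a block device) instead of overwriting just the first sector in place.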

Reference:
1. Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime. [ID 837308.1]
2. Previous doc “VMRAC installation” task 130.2008.09.12
3. OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. [ID 428681.1]
4. ASM using ASMLib and Raw Devices
http://www.oracle-base.com/articles/10g/ASMUsingASMLibAndRawDevices.php

March 30, 2009

keep data consistency when using oracle exp/expdp

Filed under: [backup and recovery] — zhefeng @ 1:25 pm

When using the old Oracle exp utility, we usually set consistent=y (the default is n) to ensure data consistency: the image taken of the data in the tables being exported represents the committed state of the table data at the same single point in time for all of the tables being exported.

However, starting with 10g this parameter was decommissioned. Today I happened to need it, searched MetaLink, and found this useful piece:

The versions 10gR1 and 10gR2 additionally put the message in the expdp header:

FLASHBACK automatically enabled to preserve database integrity

Does this guarantee export consistency to a single point of time?

Cause
The message:

FLASHBACK automatically enabled to preserve database integrity

only means that some of the tables will be assigned special SCNs (needed for Streams and Logical Standby). There is no consistency guaranteed between exported tables.

The next example demonstrates this:

1. Create the environment

connect / as sysdba

create or replace directory flash as '/tmp';
grant read, write on directory flash to system;

drop user usr001 cascade;
purge dba_recyclebin;

create user usr001 identified by usr001 default tablespace users temporary tablespace temp;
grant connect, resource to usr001;

connect usr001/usr001
create table part001
(
col001 number,
col002 date,
col003 varchar2(1000)
)
partition by range (col001)
(
partition p001 values less than (500001),
partition p002 values less than (1000001)
);

2. Populate the partitioned table: partition P001 contains 500000 rows and partition P002 contains 10 rows

connect usr001/usr001
begin
for i in 1..500010 loop
insert into part001 values (i, sysdate, lpad (to_char(i), 1000, '0'));
end loop;
commit;
end;
/

3. Start expdp

#> expdp system/passwd directory=flash dumpfile=usr001_1.dmp logfile=exp_usr001_1.log schemas=usr001

4. While step 3 is running, run this in a separate session:

connect usr001/usr001
delete from part001 where col001 in (500001, 500002, 500003, 500004, 500005);
commit;

This will delete 5 rows in partition P002.

5. Expdp completes with:

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 05 September, 2008 13:59:59

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** directory=flash dumpfile=usr001_1.dmp logfile=exp_usr001_1.log schemas=usr001
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 568.0 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "USR001"."PART001":"P001" 486.3 MB 500000 rows
. . exported "USR001"."PART001":"P002" 10.50 KB 5 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
/media/usbdisk/TESTS/FLASH/usr001_1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully completed at 14:02:47

=> Only 5 rows were exported from partition P002, so the written export dump is not consistent.

Solution
To generate a consistent Data Pump database or schema export, similar to exports generated with the exp parameter CONSISTENT=Y, use the Data Pump parameters FLASHBACK_SCN or FLASHBACK_TIME.
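FLASHBACK_SCN works the same way as FLASHBACK_TIME. A sketch of the SCN variant (the SCN value and dump file name below are illustrative, not from the original test):

```sql
-- Sketch: capture the current SCN, then pass it to expdp.
select current_scn from v$database;
-- Supposing it returns 1234567 (illustrative value), the export becomes:
--   expdp system/passwd directory=flash dumpfile=usr001_3.dmp \
--         schemas=usr001 flashback_scn=1234567
```

Pinning an explicit SCN is handy when several exports must share exactly the same point in time.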

Conforming with the example above, run expdp with:

#> expdp system/passwd directory=flash dumpfile=usr001_2.dmp logfile=exp_usr001_2.log schemas=usr001 flashback_time=\"TO_TIMESTAMP \(TO_CHAR \(SYSDATE, \'YYYY-MM-DD HH24:MI:SS\'\), \'YYYY-MM-DD HH24:MI:SS\'\)\"

This will end with:

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 05 September, 2008 14:15:38

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** directory=flash dumpfile=usr001_2.dmp logfile=exp_usr001_2.log schemas=usr001 flashback_time="to_timestamp (to_char (sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 568.0 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "USR001"."PART001":"P001" 486.3 MB 500000 rows
. . exported "USR001"."PART001":"P002" 15.48 KB 10 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
/media/usbdisk/TESTS/FLASH/usr001_2.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully completed at 14:16:55

=> Partition P002 contains all 10 rows, even though 5 rows were deleted while expdp was running. The FLASHBACK_TIME parameter guarantees consistency.

Link:  “Doc ID: 377218.1”

https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4374876471460387797::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,377218.1,1,0,1,helvetica
