DBA Sensation

March 2, 2011

Recreating spfile on ASM storage from pfile

Filed under: [backup and recovery] — zhefeng @ 2:46 pm

Sometimes when you have screwed up the parameters, you need to use a pfile as a stepping stone to undo the changes in the spfile. But what do you do if your spfile sits on ASM storage? Here is a workaround.

1. try to screw up the db parameters
SQL> show parameter memory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address integer 0
memory_max_target big integer 1520M
memory_target big integer 1520M
shared_memory_address integer 0
SQL> alter system set memory_max_target=0 scope=spfile;
System altered.

2. Now bounce the instance; the db will complain about the new setting
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORA-01078: failure in processing system parameters
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET

3. in my case the spfile sits on ASM
ASMCMD> ls -l spfile*
Type Redund Striped Time Sys Name
N spfileorcl.ora => +DATA/ORCL/PARAMETERFILE/spfile.267.744731331

4. What we need to do is create a pfile from the spfile, change the parameter back to a valid value, and then start the db from the pfile
1). With db not up, we can create pfile from spfile:
SQL> create pfile from spfile='+DATA/orcl/spfileorcl.ora';
2). modify the value in the pfile 'initorcl.ora'
$ vi initorcl.ora
*.memory_max_target=1583349760
3). startup db with pfile
SQL> startup mount  -- now it will use the pfile
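
With a plain "startup", SQL*Plus searches $ORACLE_HOME/dbs for spfile<SID>.ora and then init<SID>.ora. If your edited pfile lives somewhere else, you can point to it explicitly; the path below is only an illustration, substitute your own ORACLE_HOME:

SQL> startup pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initorcl.ora' mount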

5. create the new spfile on ASM storage from the "good" pfile
SQL> create spfile='+DATA/ORCL/spfileorcl.ora' from pfile;
File created.

6. Note that the file name in ASM storage has changed, which means we now have a new spfile:
ASMCMD> ls -l spfile*
Type Redund Striped Time Sys Name
N spfileorcl.ora => +DATA/ORCL/PARAMETERFILE/spfile.267.744733351

7. Now change the pfile back so that it simply "bootstraps" the correct spfile
$ cat initorcl.ora
spfile='+DATA/ORCL/spfileorcl.ora'

8. Restart the database; it will pick up the correct spfile again
$ sqlplus / as sysdba
SQL> startup
ORACLE instance started.

Total System Global Area 1586708480 bytes
Fixed Size 2213736 bytes
Variable Size 973080728 bytes
Database Buffers 603979776 bytes
Redo Buffers 7434240 bytes
Database mounted.
Database opened.

SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/orcl/spfileorcl.ora

SQL> show parameter memory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address integer 0
memory_max_target big integer 1520M
memory_target big integer 1520M
shared_memory_address integer 0

September 29, 2010

root.sh failed on 2nd node when installing Grid Infrastructure

Filed under: [RAC] — zhefeng @ 12:39 pm

When I was running root.sh as the last step of the Grid Infrastructure installation on the second node, it failed (it had succeeded on the 1st node):
root.sh failed on the second node with the following errors
-------------------------------------------------------
DiskGroup DATA1 creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0

Oracle gives the reason: when you are using multipath storage for ASM, you have to pre-configure oracleasm as below:

On all nodes,

1. Modify the /etc/sysconfig/oracleasm with:

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"

2. Restart ASMLib on all nodes except the 1st node:
# /etc/init.d/oracleasm restart
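
After the restart, a quick sanity check (using the same ASMLib helper commands that appear later in this blog) is to rescan and list the disks; they should still be visible, now discovered through the dm-* multipath devices:

# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks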

3. Deconfigure the root.sh settings on all nodes except the 1st node (as root):
# $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

4. Run root.sh again on the 2nd node (or other nodes)
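
root.sh is run as root from the Grid Infrastructure home; the path below is only an example, substitute your own grid home:

# /u01/app/11.2.0/grid/root.sh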

Oracle Metalink Doc:
11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC [ID 1059847.1]

September 7, 2010

Oracle 10g ASM/RAW storage migration

Filed under: [RAC] — zhefeng @ 9:47 am

Objective:
We want to migrate the whole shared storage from the old SAN to a new SAN without re-installing the whole Oracle RAC stack.

Scenario:
1. Current structure
[Nodes]
## eth1-Public
10.0.0.101 vmrac01 vmrac01.test.com
10.0.0.102 vmrac02 vmrac02.test.com
## eth0-Private
192.168.199.1 vmracprv01 vmracprv01.test.com
192.168.199.2 vmracprv02 vmracprv02.test.com
## VIP
10.0.0.103 vmracvip01 vmracvip01.test.com
10.0.0.104 vmracvip02 vmracvip02.test.com

[Storage]
Both ORACLE_HOME are local:
ORACLE_HOME=/database/oracle/10grac/db
CRS_HOME=/database/oracle/10grac/crs

Shared LUN display (3 partitions, 2*256M for OCR&VOTING, 1*20G for ASM)
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 32 257008+ 83 Linux
/dev/sdb2 33 64 257040 83 Linux
/dev/sdb3 65 2610 20450745 83 Linux

OCR and Voting are on RAW device: /dev/sdb1 /dev/sdb2
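
Before touching the storage, it is worth confirming where the OCR and voting disk actually live (standard 10g clusterware commands; ocrcheck should be run as root):

# ocrcheck
$ crsctl query css votedisk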

ASM disks
bash-3.1$ export ORACLE_SID=+ASM1
bash-3.1$ asmcmd
ASMCMD> lsdg
State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED EXTERN N N 512 4096 1048576 19971 17925 0 17925 0 DG1/

2. New storage (sdc 10G)
1). new LUN added
[root@vmrac01 bin]# fdisk -l

Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 535 4192965 82 Linux swap / Solaris
/dev/sda3 536 3263 21912660 83 Linux

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 32 257008+ 83 Linux
/dev/sdb2 33 64 257040 83 Linux
/dev/sdb3 65 2610 20450745 83 Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

2). Partition the new LUN to 3 partitions
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 32 257008+ 83 Linux
/dev/sdc2 33 64 257040 83 Linux
/dev/sdc3 65 1305 9968332+ 83 Linux
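
For reference, the partitions were created with fdisk; a rough sketch of the session (cylinder boundaries as in the table above, interactive prompts condensed) would be:

# fdisk /dev/sdc
Command (m for help): n   (new primary partition 1, cylinders 1-32)
Command (m for help): n   (new primary partition 2, cylinders 33-64)
Command (m for help): n   (new primary partition 3, cylinders 65-1305)
Command (m for help): p   (print the table to double-check)
Command (m for help): w   (write the partition table and exit)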

3). clone data from previous raw disks
** Shut down the db and CRS first to make sure there are no changes to the raw disks!
#dd if=/dev/raw/raw1 of=/dev/sdc1
514017+0 records in
514017+0 records out
263176704 bytes (263 MB) copied, 252.812 seconds, 1.0 MB/s

#dd if=/dev/raw/raw2 of=/dev/sdc2
514080+0 records in
514080+0 records out
263208960 bytes (263 MB) copied, 267.868 seconds, 983 kB/s
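
Optionally (my own addition, not part of the original steps), you can verify the copies before rebinding anything; since the old and new partitions are exactly the same size here, the checksums should match:

# dd if=/dev/raw/raw1 bs=1M | md5sum
# dd if=/dev/sdc1 bs=1M | md5sum
# dd if=/dev/raw/raw2 bs=1M | md5sum
# dd if=/dev/sdc2 bs=1M | md5sum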

4). "Cheat" Oracle by re-binding the raw devices to the new partitions on both nodes
**old binding
Step1: add entries to /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"

Step2: For the mapping to take immediate effect, run the commands below
#raw /dev/raw/raw1 /dev/sdb1
#raw /dev/raw/raw2 /dev/sdb2

Step3: Run the following commands and add them to the /etc/rc.local file.
#chown oracle:dba /dev/raw/raw1
#chown oracle:dba /dev/raw/raw2
#chmod 660 /dev/raw/raw1
#chmod 660 /dev/raw/raw2
#chown oracle:dba /dev/sdb1
#chown oracle:dba /dev/sdb2
#chmod 660 /dev/sdb1
#chmod 660 /dev/sdb2

**new binding on both nodes
Step1: edit /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw2 %N"

Step2: apply the mapping immediately
#raw /dev/raw/raw1 /dev/sdc1
#raw /dev/raw/raw2 /dev/sdc2

Step3: set permissions and add the commands to /etc/rc.local
#chown oracle:dba /dev/raw/raw1
#chown oracle:dba /dev/raw/raw2
#chmod 660 /dev/raw/raw1
#chmod 660 /dev/raw/raw2
#chown oracle:dba /dev/sdc1
#chown oracle:dba /dev/sdc2
#chmod 660 /dev/sdc1
#chmod 660 /dev/sdc2
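
On each node you can confirm that the bindings now point at the new partitions by querying the raw driver:

# raw -qa   (lists all current raw bindings; the major/minor numbers should be those of sdc1 and sdc2)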

5). Start up CRS and the Oracle db, then check the database; everything works fine after switching the raw disks!
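
Roughly what I run to bring the stack back (crsctl must be run as root; crs_stat is the 10g way to check resource status):

# crsctl start crs
# crsctl check crs
$ crs_stat -t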

3. ASM disk group migration
1). Mark the new disk sdc3 on one node
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc3
Marking disk "/dev/sdc3" as an ASM disk: [ OK ]

2). scan disk on the other node
[root@vanpgvmrac02 bin]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]

3). now verify the new disk was marked on both nodes
[root@vmrac01 disks]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

[root@vmrac02 bin]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

4). add new disk to DISKGROUP (under asm instance)
$export ORACLE_SID=+ASM1
$sqlplus / as sysdba
SQL> alter diskgroup DG1 add disk 'ORCL:VOL2';
-- wait for the rebalance to finish
SQL> select * from v$asm_operation;

5). remove old disk from DISKGROUP
SQL> alter diskgroup DG1 drop disk VOL1;
-- wait until the rebalance has finished
SQL> select * from v$asm_operation;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR
------------ --------------- ------------ ---------- ---------- ----------
EST_WORK EST_RATE EST_MINUTES
---------- ---------- -----------
1 REBAL RUN 1 1 2
1374 30 45

6). Verify the database and ASM; everything is OK!
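
For the verification in 6) I simply query the ASM views from the ASM instance, something like:

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;
SQL> select name, path, mount_status, header_status from v$asm_disk;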

7). clean up the old disk configurations
[root@vmrac01 bin]# /etc/init.d/oracleasm deletedisk VOL1
Removing ASM disk "VOL1": [ OK ]
[root@vmrac01 bin]# /etc/init.d/oracleasm listdisks
VOL2

[root@vmrac02 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@vmrac02 ~]# /etc/init.d/oracleasm listdisks
VOL2

8). Wipe off the partitions on sdb.
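
Double-check that /dev/sdb really is the old, unused LUN before doing anything destructive; one blunt way (my example, not from the reference notes) is to zero out its partition table:

# dd if=/dev/zero of=/dev/sdb bs=512 count=1   (wipes the MBR/partition table of the old disk)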

Reference:
1. Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime. [ID 837308.1]
2. Previous doc “VMRAC installation” task 130.2008.09.12
3. OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. [ID 428681.1]
4. ASM using ASMLib and Raw Devices
http://www.oracle-base.com/articles/10g/ASMUsingASMLibAndRawDevices.php
