Feb 21 2011

Procedure to export/import a VG

Tag: Unix | Uggla @ 16:58

Procedure to export/import a VG from one machine to another.

  1. Export the VG map (vgexport in preview mode, -p, so the VG itself is left untouched)
    root@hx000140: /root/home/root # vgexport -s -v -p -m /tmp/stage.map /dev/stage
    Beginning the export process on Volume Group "/dev/stage".
    vgexport: Volume group "/dev/stage" is still active.
    /dev/dsk/c34t0d0
    /dev/dsk/c35t0d0
    vgexport: Preview of vgexport on volume group "/dev/stage" succeeded.
    
  2. Copy the map file to the other node
    scp /tmp/stage.map hx000141:/tmp
    
  3. Import in preview mode (-p) to check that everything is OK
    root@hx000141: /root/home/root # vgimport -s -v -p -m /tmp/stage.map /dev/stage
     
    Beginning the import process on Volume Group "/dev/stage".
    Logical volume "/dev/stage/stagelv" has been successfully created
    with lv number 1.
    vgimport: Volume group "/dev/stage" has been successfully created.
    Warning: A backup of this volume group may not exist on this machine.
    Please remember to take a backup using the vgcfgbackup command after activating the volume group.
    
  4. Import the VG
    root@hx000141: /root/home/root # vgimport -s -v -m /tmp/stage.map /dev/stage
    Beginning the import process on Volume Group "/dev/stage".
    Logical volume "/dev/stage/stagelv" has been successfully created
    with lv number 1.
    vgimport: Volume group "/dev/stage" has been successfully created.
    Warning: A backup of this volume group may not exist on this machine.
    Please remember to take a backup using the vgcfgbackup command after activating the volume group.
    
  5. Check that the VG is now in the configuration (the "Volume group not activated" message is expected, since the VG has not been activated yet)
    root@hx000141: /root/home/root # vgdisplay stage
    vgdisplay: Volume group not activated.
    vgdisplay: Cannot display volume group "stage".
    
  6. Unmount the filesystem on the 1st node
    root@hx000140: /root/home/root # umount /staging
    
  7. Deactivate the VG on the 1st node
    root@hx000140: /root/home/root # vgchange -a n /dev/stage
    Volume group "/dev/stage" has been successfully changed.
    
  8. Export the VG on the 1st node (this removes it from the LVM configuration)
    root@hx000140: /root/home/root # vgexport /dev/stage
    vgexport: Volume group "/dev/stage" has been successfully removed.
    
  9. Activate the VG on the 2nd node (see the vgcfgbackup note after the procedure)
    root@hx000141: /root/home/root # vgchange -a y /dev/stage
    Activated volume group.
    Volume group "/dev/stage" has been successfully changed.
    
  10. Check that the VG is OK on the 2nd node
    root@hx000141: /root/home/root # vgdisplay -v /dev/stage
    --- Volume groups ---
    VG Name                     /dev/stage
    VG Write Access             read/write
    VG Status                   available
    Max LV                      255
    Cur LV                      1
    Open LV                     1
    Max PV                      16
    Cur PV                      1
    Act PV                      1
    Max PE per PV               32000
    VGDA                        2
    PE Size (Mbytes)            32
    Total PE                    31999
    Alloc PE                    31360
    Free PE                     639
    Total PVG                   0
    Total Spare PVs             0
    Total Spare PVs in use      0
    VG Version                  1.0
    VG Max Size                 16000g
    VG Max Extents              512000
     
       --- Logical volumes ---
       LV Name                     /dev/stage/stagelv
       LV Status                   available/syncd
       LV Size (Mbytes)            1003520
       Current LE                  31360
       Allocated PE                31360
       Used PV                     1
     
    
       --- Physical volumes ---
       PV Name                     /dev/dsk/c33t0d0
       PV Name                     /dev/dsk/c35t0d0 Alternate Link
       PV Status                   available
       Total PE                    31999
       Free PE                     639
       Autoswitch                  On
       Proactive Polling           On
    
  11. Update /etc/fstab on both nodes
    Comment out:
    /dev/stage/stagelv /staging vxfs defaults 0 2
    in /etc/fstab on the 1st node.
    
    Uncomment:
    /dev/stage/stagelv /staging vxfs defaults 0 2
    in /etc/fstab on the 2nd node.
    
  12. Check that the mount point exists
    root@hx000141: /root/home/root # ll -d /staging
    drwxr-xr-x   2 root       sys             96 Feb  3 15:12 /staging
    
  13. Mount and verify
    root@hx000141: /root/home/root # mount -aQ
    
    root@hx000141: /root/home/root # mount
    / on /dev/vg00/lvol3 ioerror=mwdisable,largefiles,delaylog,dev=40000003 on Thu Jan  6 14:54:02 2011
    /stand on /dev/vg00/lvol1 ioerror=mwdisable,nolargefiles,log,tranflush,dev=40000001 on Thu Jan  6 14:54:12 2011
    /var on /dev/vg00/lvol8 ioerror=mwdisable,largefiles,delaylog,dev=40000008 on Thu Jan  6 14:54:53 2011
    /var/adm/crash on /dev/vg00/lvol10 ioerror=mwdisable,largefiles,delaylog,dev=4000000a on Thu Jan  6 14:54:53 2011
    /usr on /dev/vg00/lvol7 ioerror=mwdisable,largefiles,delaylog,dev=40000007 on Thu Jan  6 14:54:54 2011
    /tmp on /dev/vg00/lvol4 ioerror=mwdisable,largefiles,delaylog,dev=40000004 on Thu Jan  6 14:54:54 2011
    /opt on /dev/vg00/lvol6 ioerror=mwdisable,largefiles,delaylog,dev=40000006 on Thu Jan  6 14:54:54 2011
    /home on /dev/vg00/lvol5 ioerror=mwdisable,largefiles,delaylog,dev=40000005 on Thu Jan  6 14:54:54 2011
    /appl on /dev/vg01/lvol1 ioerror=mwdisable,largefiles,delaylog,dev=80000001 on Thu Jan  6 14:54:55 2011
    /net on -hosts ignore,indirect,nosuid,soft,nobrowse,dev=4000002 on Thu Jan  6 14:56:19 2011
    /staging on /dev/stage/stagelv ioerror=mwdisable,largefiles,delaylog,dev=40010001 on Thu Feb  3 15:13:08 2011
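
Note: the vgimport output above reminds us to take a configuration backup once the VG is activated on the 2nd node. That step is not shown in the transcript, but something along these lines should do it:

    vgcfgbackup /dev/stage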
    

Feb 21 2011

Moving the Oracle undo tablespaces

Tag: DB | Uggla @ 15:39

A few commands to move the undo tablespaces on Oracle RAC 10gR2.

  • Query to find the existing undo tablespaces:
    select FILE_NAME,TABLESPACE_NAME,BYTES/1024/1024,AUTOEXTENSIBLE,STATUS,USER_BYTES from dba_data_files where TABLESPACE_NAME like 'UNDO%';
    
  • Create the new undo tablespaces:
    create undo tablespace UNDOTBS3 datafile '/oradata/GFSTEUR1/group01/undotbs03_1.dbf' size 15G autoextend off;
    create undo tablespace UNDOTBS4 datafile '/oradata/GFSTEUR1/group01/undotbs04_1.dbf' size 15G autoextend off;
    
  • Point the instances at the new undo tablespaces (a check query is shown after this list):

    Warning: do not forget to specify the SID!

    ALTER SYSTEM SET undo_tablespace = UNDOTBS3 SCOPE=BOTH SID='GFSTE1R1';
    ALTER SYSTEM SET undo_tablespace = UNDOTBS4 SCOPE=BOTH SID='GFSTE2R1';
    
  • Drop the old undo tablespaces:
    DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;
    DROP TABLESPACE UNDOTBS2 INCLUDING CONTENTS AND DATAFILES;
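
As mentioned above, before dropping the old undo tablespaces it is worth checking that each instance has really switched to its new one. A query along these lines should do it on 10gR2 (gv$parameter is the cluster-wide parameter view):

    select inst_id, name, value from gv$parameter where name = 'undo_tablespace';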
    

Feb 21 2011

EVA mapping script

Tag: Scripts | Uggla @ 15:07

A small Perl script to find the correspondence between LUN UUIDs, special files, major numbers and minor numbers.

Dependencies:

evainfo -> client tool used to extract information from the EVAs.

#!/usr/bin/perl

use strict;
use warnings;
use Data::Dumper;

my @evainfo=`evainfo -a`;
my @ioscan=`ioscan -m dsf`;
my %devmap;
my %devmap2;
my %lunmap;
my %sf;

my $dsf;
my $olddsf;
my $legacy;

# Create map : "legacy device" = "dsf device"
foreach(@ioscan){
        if ( /\/dev/ ){
                ($dsf, $legacy)=split(/\s+/,$_);
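                # extra paths for the same device are printed on continuation lines
                # with the first column blank, so split() yields an empty first field;
                # in that case reuse the dsf from the previous line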
                if ( $dsf eq "" ){
                        $dsf=$olddsf;
                }
                $devmap{$legacy}=$dsf;
                $olddsf=$dsf;
        }
}

# Create map : "wwnn id" = "legacy device"
foreach(@evainfo){

        if ( /\/dev/){
                (my $legacy, undef, my $wwnn, my $size, my $ctrl)=split(/\s+/,$_);
                $lunmap{$wwnn}=$legacy;
        }
}

# Create map : "dsf device" = "major and minor"
%devmap2=%devmap; # the while(<SHELLCMD>) loop below assigns to $_, which foreach aliases to the hash values; work on a copy so %devmap is not clobbered
foreach(values(%devmap2)){
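        # 'll' on a device special file shows the driver major number and the
        # minor number in the columns where a regular file would show its size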
        open (SHELLCMD,"ll $_ |");
                while (<SHELLCMD>){
                        (undef,undef,undef,undef,my $major, my $minor,undef,undef,undef,my $dev)=split(/\s+/,$_);
                        $sf{$dev}->{"major"}=$major;
                        $sf{$dev}->{"minor"}=$minor;
                }
        close (SHELLCMD);
}

# Debug purpose
#print Dumper(\%devmap);
#print Dumper(\%lunmap);
#print Dumper(\%sf);

# Print output
foreach(sort(keys(%lunmap))){
        my $legacy=$lunmap{$_};
        printf("%s %s %s %s\n",$_ ,$devmap{$legacy}, $sf{$devmap{$legacy}}->{"major"}, $sf{$devmap{$legacy}}->{"minor"});
}

Usage example:
root@hx000140: /root/home/root # perl eva_mapping.pl
6001-4380-05DE-C738-0000-6000-08B4-0000 /dev/oracle/asm_eva1 13 0x000019
6001-4380-05DE-C738-0000-6000-08B8-0000 /dev/oracle/asm_eva2 13 0x00001a
6001-4380-05DE-C738-0000-6000-08BC-0000 /dev/oracle/asm_eva3 13 0x00001b
6001-4380-05DE-C738-0000-6000-08C0-0000 /dev/oracle/asm_eva4 13 0x00001c
6001-4380-05DE-C738-0000-6000-08C4-0000 /dev/oracle.bak/asm_eva5 13 0x00001d
6001-4380-05DE-C738-0000-6000-08C8-0000 /dev/oracle.bak/asm_eva6 13 0x00001e
6001-4380-05DE-C738-0000-6000-08CC-0000 /dev/oracle.bak/asm_eva7 13 0x00001f
6001-4380-05DE-C738-0000-6000-08D0-0000 /dev/oracle.bak/asm_eva8 13 0x000020