June 29, 2011

HPVM disk physical/virtual mapping script

Tag: Scripts, Unix · Uggla @ 17:02

This is a short script that builds a physical (host) / virtual (guest) disk map.
It is a basic script written quickly to simplify data migration operations, so improvements are welcome.

    Requirements :

  • the script must be run from the host (not from a guest).
  • ssh key-based (passwordless) login must be allowed to all guests.
  • the hpvmstatus command must be available in the PATH.

#!/usr/bin/perl

use strict;
use warnings;
use Data::Dumper;

my @hpvmstatus=`hpvmstatus`;
my @vms;
my $vm;

my %vm_data;

foreach(@hpvmstatus) {
        ($vm)=$_ =~ /^(.+)\s+[0-9]+\sHPUX\s+On/;
        if (defined($vm)){
                $vm =~ s/\s+//g;
                push (@vms,$vm);
        }
}

foreach $vm (@vms){
        my @hpvmstatus_vm=`hpvmstatus -P $vm`;

        foreach(@hpvmstatus_vm){
                #Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
                #======= ========== === === === === === ========= =========================
                #disk    avio_stor    0   2   0   0   0 disk      /dev/rdisk/disk17

                (my $ftn,my $tgt,my $lun,my $disk)=$_ =~ /^disk\s+avio_stor\s+\d+\s+\d+\s+(\d+)\s+(\d+)\s+(\d+)\s+disk\s+(.+)$/;

                if (defined($ftn)){
                        $vm_data{$vm}->{$disk}->{"dev"}=$ftn;
                        $vm_data{$vm}->{$disk}->{"tgt"}=$tgt;
                        $vm_data{$vm}->{$disk}->{"lun"}=$lun;

                        # Convert tgt to the matching legacy device name
                        my $tgtconv=sprintf("%02X",$tgt);
                        my $tgtconv_part1;
                        my $tgtconv_part2;

                        ($tgtconv_part1)=$tgtconv=~/(.)./;
                        ($tgtconv_part2)=$tgtconv=~/.(.)/;

                        $tgtconv_part1=hex($tgtconv_part1);
                        $tgtconv_part2=hex($tgtconv_part2);

                        #$vm_data{$vm}->{$disk}->{"legacy"}="c".$ftn."t".$tgtconv_part2."d".$tgtconv_part1;
                        $vm_data{$vm}->{$disk}->{"legacy"}="c\\dt".$tgtconv_part2."d".$tgtconv_part1; # Not sure how the instance number (cX) is assigned, so use a more generic regexp
                }
        }
}

foreach $vm (@vms){
        my @ioscan=`ssh -q -o stricthostkeychecking=no -o batchmode=yes root\@$vm \"ioscan -m dsf\"`;

        foreach(keys(%{$vm_data{$vm}})){
                my $regex=$vm_data{$vm}->{$_}->{"legacy"}."\$";
                my @vdisk=grep(/$regex/,@ioscan);
                my $vdisk_str=join(",",@vdisk);
                ($vdisk_str)=split(",",$vdisk_str);
                $vdisk_str=~s#\s+/dev/rdsk/.+##g;
                $vdisk_str=~s/\s//g;
                $vm_data{$vm}->{$_}->{"vdisk"}=$vdisk_str;
        }
}

# Debugging purposes
#print Dumper(\%vm_data);

foreach $vm (@vms){
        printf("VM\tPhys\t\t\tVirt\n");

        foreach(keys(%{$vm_data{$vm}})){
                printf("%s\t%s\t%s\n",$vm,$_,$vm_data{$vm}->{$_}->{"vdisk"});

        }
}
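The tgt-to-legacy conversion in the middle of the script can be illustrated on its own. Here is a minimal shell sketch of the same logic (the function name is mine; the controller instance cX is printed as `?` since, as noted in the script, it is unknown):

```shell
# Sketch of the Perl conversion above: the tgt number is printed as two hex
# digits; the low digit becomes the t number and the high digit the d number.
tgt_to_legacy() {
  hexval=$(printf "%02X" "$1")      # e.g. 17 -> "11"
  d=$(printf "%d" "0x${hexval%?}")  # high hex digit -> d number
  t=$(printf "%d" "0x${hexval#?}")  # low hex digit  -> t number
  printf "c?t%sd%s\n" "$t" "$d"
}
tgt_to_legacy 17   # -> c?t1d1
```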


May 24, 2011

Rescan tape drives

Tag: Scripts, Unix · Uggla @ 16:12

Last week we had an issue with a VLS. We had to rescan the hardware to bring the tape drives back to CLAIMED status.
So I wrote this short script/one-liner to rescan the hardware on HP-UX 11.31.

for system in toto titi tata
do
echo "Processing $system"
ssh root@$system 'ioscan > /dev/null && for i in $(dmesg | grep "replace_wwid" | \
perl -ne '"'"'{(my $get) = $_ =~ m/instance = (.+)\) The/; print "$get\n"}'"'"'); \
do scsimgr -f replace_wwid -C tgtpath -I $i;done | sort -u && ioscan -fnkC tape'
done


Apr 14, 2011

HPUX / RAC 11gR2 various tips

Tag: DB · Uggla @ 11:36

Here is a list of tips useful to install/troubleshoot a 11gR2 RAC cluster with HPUX 11.31.

  • mrouted configuration to allow multicast.
    Activate the mrouted daemon by setting the following in /etc/rc.config.d/netdaemons:
    export MROUTED=1
    
  • Route to 169.254.0.0 via the interconnect.
    If, like me, you don't have the interconnect link routed and known by DNS, make sure you have a route to 169.254.0.0 via your interconnect interface and IP:
    169.254.0.0           172.16.15.140      U     0    lan5002    1500
    
  • Do not forget the hppac patch.
    This patch is not commonly installed, but it is required by Oracle Clusterware.
    swinstall -p -x logdetail=true -s calisson.osdgre.external.hp.com:/var/depot/hp-ux/11.31/qpk PHSS_37042
    swinstall -x logdetail=true -s calisson.osdgre.external.hp.com:/var/depot/hp-ux/11.31/qpk PHSS_37042
    
  • Re-run the root.sh script for troubleshooting.
    This is an excellent tip for troubleshooting root.sh: it lets you re-run only root.sh instead of the full install.
    Follow this note : http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=61
    dd if=/dev/zero of=/dev/oracle/asmdisk_ocr1 bs=1024k count=2048
    dd if=/dev/zero of=/dev/oracle/asmdisk_ocr2 bs=1024k count=2048
    dd if=/dev/zero of=/dev/oracle/asmdisk_ocr3 bs=1024k count=2048
    
    rm /var/opt/oracle/scls_scr/hx004105/grid/cssfatal
    rm /var/opt/oracle/ocr.loc
    /appl/grid/product/11.2.0/cluster/root.sh
    

    The Oracle Metalink articles ID 1050908.1 and ID 1053970.1 may greatly help in finding issues.
  • Full CRS cleanup to reinstall from scratch.
    I ran into issues when /var/tmp/.oracle and /tmp/.oracle were not cleaned up.
    Clean up inittab by removing : "h1:3:respawn:/sbin/init.d/init.ohasd run >/dev/null 2>&1 </dev/null"
    init q
    rm -rf /appl/oraInventory
    rm -rf /appl/grid
    rm -rf /var/tmp/.oracle
    rm -rf /tmp/.oracle
    rm -rf /var/opt/oracle
    rm /sbin/init.d/init.ohasd
    rm /sbin/init.d/ohasd
    find /etc/rc*.d -name "*ohasd" -exec rm {} \;
    for i in 1 2 3; do
    dd if=/dev/zero of=/dev/oracle/asmdisk_ocr$i bs=1024k
    done
    rm /etc/oratab
    rm /usr/local/bin/dbhome /usr/local/bin/coraenv /usr/local/bin/oraenv
    

    Rebuild home.
    mkdir /appl/grid
    chown grid:oinstall /appl/grid
    chmod g+w /appl
    
  • cluvfy command to check cluster status.
    Before adding a node, the following commands should exit successfully.
    cluvfy stage -post hwos -n <existing and new nodes> -verbose
    cluvfy stage -pre nodeadd -n <new node> -fixup -verbose
    

    Especially take care of $ORACLE_HOME ownership and rights.
    A good article about cluvfy : http://satya-racdba.blogspot.com/2010/01/cluvfy-cluvfy-help-or-cluvfy-h-cluvfy.html
  • Add node command reminder.
    ./addNode.sh -silent CLUSTER_NEW_NODES={node_name} CLUSTER_NEW_VIRTUAL_HOSTNAMES={node_name-vip}
    
  • root.sh should be run from /appl/grid ($ORACLE_BASE) (strange).
    To be confirmed, but I ran into issues depending on the directory it was run from. So try running it from various locations…
  • Copy the 2nd DVD.
    11gR2 is composed of 3 DVDs (2xDB + 1xGI). The installation will appear to succeed even if the second DVD could not be extracted, but the DBCA assistants etc. will be missing, which makes the next steps painful.
  • Delete an engine to restart the installation.
    To remove an engine after installation (because of a missing DVD, for example ;) ):

    • Edit oraInventory/ContentsXML/inventory.xml by removing engine entry.
    • Remove engine’s oracle home.

Mar 16, 2011

HPVM B.04.20.05 update and vlan usage

Tag: Unix · Uggla @ 16:19

This article is in English so it can be shared internationally.

I recently hit a severe issue while upgrading HPVM to B.04.20.05.
Of course the « trap » was mentioned in the manual, but I was busy fixing another issue, missed it, and ran into serious problems.

So the idea of this article is to help avoid these pitfalls by documenting them.

  1. Avoid the unsupported VLAN configuration.

    Extracted from documentation (http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02023903/c02023903.pdf) :

    Do not use the hpvmnet command to create a virtual switch that
    is associated with a VLAN port on the VM Host (that is, a LAN created with lanadmin -V).
    This “nested VLAN” configuration is not supported.

    In the following configuration :

    root@hx05333: /root/home/root # lanscan
    Hardware Station        Crd Hdw   Net-Interface  NM  MAC       HP-DLPI DLPI
    Path     Address        In# State NamePPA        ID  Type      Support Mjr#
    0/0/0/3/0/0/2 0x64315000A911 4   UP    lan4 snap4     3   ETHER     Yes     119
    0/0/0/3/0/0/3 0x64315000A915 5   UP    lan5 snap5     4   ETHER     Yes     119
    0/0/0/3/0/0/4 0x64315000A912 6   UP    lan6 snap6     5   ETHER     Yes     119
    0/0/0/3/0/0/5 0x64315000A916 7   UP    lan7 snap7     6   ETHER     Yes     119
    0/0/0/3/0/0/6 0x64315000A913 8   UP    lan8 snap8     7   ETHER     Yes     119
    0/0/0/3/0/0/7 0x64315000A917 9   UP    lan9 snap9     8   ETHER     Yes     119
    0/0/0/4/0/0/0 0x64315000A918 2   UP    lan2 snap2     9   ETHER     Yes     119
    0/0/0/4/0/0/1 0x64315000A91C 3   UP    lan3 snap3     10  ETHER     Yes     119
    LinkAgg0 0x0017A477FE00 900 UP    lan900 snap900 12  ETHER     Yes     119
    VLAN5001 0x0017A477FE00 5001 UP    lan5001 snap5001 63  ETHER     Yes     119
    VLAN5000 0x0017A477FE00 5000 UP    lan5000 snap5000 62  ETHER     Yes     119
    LinkAgg1 0x000000000000 901 DOWN  lan901 snap901 13  ETHER     Yes     119
    LinkAgg2 0x000000000000 902 DOWN  lan902 snap902 14  ETHER     Yes     119
    LinkAgg3 0x000000000000 903 DOWN  lan903 snap903 15  ETHER     Yes     119
    LinkAgg4 0x000000000000 904 DOWN  lan904 snap904 16  ETHER     Yes     119
    

    1. lan900 is a « bonding » (link aggregate) of interfaces lan0 and lan1 carrying all the VLANs.
    2. lan5000 is an interface that selects VLAN 113 of lan900.
    3. lan5001 is an interface that selects VLAN 213 of lan900.

    Clearly this means that lan5000 and lan5001 must not be connected to the virtual switch.
    lan900 (the one carrying all the VLANs) must be connected to the switch, and it is the virtual switch's job to manage the VLAN tagging mechanism for all our VMs.

    The configuration should be done in the following way.

    1. Create virtual switch.
      hpvmnet -c -S vmlan -n 900
      
    2. Define 2 ports with vlan 113 and 213
      hpvmnet -S vmlan -u portid:1:vlanid:113
      hpvmnet -S vmlan -u portid:2:vlanid:213
      
    3. Attach VM to the ports just created.
      hpvmmodify -P hx05374 -a network:avio_lan::vswitch:vmlan:portid:1
      hpvmmodify -P hx05374 -a network:avio_lan::vswitch:vmlan:portid:2
      
    4. Check that the virtual switch and the VM are OK.
      root@hx05333: /root/home/root # hpvmnet
      Name     Number State   Mode      NamePPA  MAC Address    IPv4 Address
      ======== ====== ======= ========= ======== ============== ===============
      localnet      1 Up      Shared             N/A            N/A
      vmlan        10 Up      Shared    lan900   0x0017a477fe00
      

      root@hx05333: /root/home/root # hpvmnet -S vmlan
      Name     Number State   Mode      NamePPA  MAC Address    IPv4 Address
      ======== ====== ======= ========= ======== ============== ===============
      vmlan        10 Up      Shared    lan900   0x0017a477fe00
      
      [Port Configuration Details]
      Port    Port         Port     Untagged Number of    Active VM    Tagged
      Number  State        Adaptor  VLANID   Reserved VMs              VLANIDs
      ======= ============ ======== ======== ============ ============ =============
      1       Active       avio_lan 113      1            hx05374      none
      2       Active       avio_lan 213      1            hx05374      none
      

      root@hx05333: /root/home/root # hpvmstatus -p1
      [Virtual Machine Details]
      Virtual Machine Name VM #  OS Type State
      ==================== ===== ======= ========
      hx05374                  1 HPUX    On (OS)
      
      [Authorized Administrators]
      Oper Groups             :
      Admin Groups            :
      Oper Users              :
      Admin Users             :
      
      [Virtual CPU Details]
      #vCPUs Entitlement Maximum
      ====== =========== =======
           4       10.0%  100.0%
      
      [Memory Details]
      Total    Reserved
      Memory   Memory
      =======  ========
      17000 MB     64 MB
      
      [Dynamic Memory Information]
      Minimum     Target      Memory      Maximum
      Memory      Memory      Entitlement Memory
      =========== =========== =========== ===========
         512 MB    17018 MB          -     17000 MB
      
      [Storage Interface Details]
      Guest                                 Physical
      Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
      ======= ========== === === === === === ========= =========================
      disk    avio_stor    0   4   0   0   0 disk      /dev/rdisk/disk11
      disk    avio_stor    0   4   0   1   0 disk      /dev/rdisk/disk3
      disk    avio_stor    0   4   0   2   0 disk      /dev/rdisk/disk8
      disk    avio_stor    0   4   0   3   0 disk      /dev/rdisk/disk17
      
      [Network Interface Details]
      Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
      ========= ========== ========== ======= === === === =================
      vswitch   avio_lan   vmlan      1         0   0   0 36-af-e6-d1-20-bc
      vswitch   avio_lan   vmlan      2         0   1   0 8e-b5-a9-a9-14-23
      
      [Misc Interface Details]
      Guest                                 Physical
      Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
      ======= ========== === === === === === ========= =========================
      serial  com1                           tty       console
      
  2. Update HPVM to B.04.20.05.

    1. Patch the VM with QPK bundle.
    2. Add VM patches to avoid the dynamic memory issue.

      This avoids the situation described in this article :
      http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c02692588&lang=en&cc=us&taskId=101&prodSeriesId=4146128&prodTypeId=18964

      Add PHSS_41543 and PHSS_41550; due to dependencies, this results in the following patch list.

      PHKL_41227 (2) clock cumulative patch
      PHSS_41191 (2) HPVM B.04.20.05 VMMIGRATE PATCH
      PHSS_41413 (2) HPVM B.04.20.05 vmGuestLib
      PHSS_41543 (2) [S] HPVM B.04.20.05 CORE PATCH
      PHSS_41550 (2) HPVM B.04.20.05 HPVM-VMSGTK
      
    3. Clean up VM.
      1. Remove HPVM product.
        swremove -x autoreboot=true T2767CC VMKernelSW
        

        Note : VMKernelSW should not be available on the VM

      2. Remove HostAVIO* product.
        swremove -p -x autoreboot=true  HostAVIOStor HostAvioLan
        swremove -x autoreboot=true  HostAVIOStor HostAvioLan
        
      3. Upgrade GuestAVIO* product.
        swinstall -p -x autoreboot=true -x logdetail=true -s calisson:/var/depot/hp-ux/11.31/hp-ux GuestAVIOStor GuestAvioLan
        swinstall -x autoreboot=true -x logdetail=true -s calisson:/var/depot/hp-ux/11.31/hp-ux GuestAVIOStor GuestAvioLan
        

      4. The VM should now have only the following products.
        root@hx05374: /root/home/root # swlist -l bundle | egrep -i "HPVM|Integ"
          GuestAVIOStor         B.11.31.1009   HPVM Guest AVIO Storage Software
          GuestAvioLan          B.11.31.1009   HPVM Guest AVIO LAN Software
          HPVM-Guest            B.04.20     Integrity VM Guest
          LDAPUX                B.04.20        LDAP-UX Integration
          VMGuestLib            B.04.20     Integrity VM Guest Support Libraries
          VMProvider            B.04.20     WBEM Provider for Integrity VM
        
    4. Install the full QPK on the host, including the FEATURE11i bundle.
      Note : FEATURE11i is checked by the HPVM product script before the upgrade.
    5. Upgrade AVIO software.
      swinstall -p -x autoreboot=true -x logdetail=true -s calisson:/var/depot/hp-ux/11.31/hp-ux GuestAVIOStor GuestAvioLan HostAVIOStor HostAvioLan
      swinstall -x autoreboot=true -x logdetail=true -s calisson:/var/depot/hp-ux/11.31/hp-ux GuestAVIOStor GuestAvioLan HostAVIOStor HostAvioLan
      
    6. Upgrade the HPVM product.

      Do not force the install; instead, fix dependency issues by bringing the required bundles (VMGuestLib, VMGuestSW, AVIO*, …) into the source depot.

      swinstall -p -x autoreboot=true -x logdetail=true -s calisson:/var/depot/hp-ux/11.31/hp-ux T2767CC
      swinstall -x autoreboot=true -x logdetail=true -s calisson:/var/depot/hp-ux/11.31/hp-ux T2767CC
      
    7. Check the host; the following command should report something like this.
      root@hx05333: /root/home/root # swlist -l bundle | egrep "Integri|HPVM"
        GuestAVIOStor         B.11.31.1009   HPVM Guest AVIO Storage Software
        GuestAvioLan          B.11.31.1009   HPVM Guest AVIO LAN Software
        HostAVIOStor          B.11.31.1009   HPVM Host AVIO Storage Software
        HostAvioLan           B.11.31.1009   HPVM Host AVIO LAN Software
        T2767CC               B.04.20.05     Integrity VM
        VMGuestLib            B.04.20.05     Integrity VM Guest Support Libraries
        VMGuestSW             B.04.20.05     Integrity VM Guest Support Software
        VMKernelSW            B.04.20        Integrity VM Kernel Software
      

      Note : VMKernelSW remains at B.04.20; this is normal, as B.04.20.05 does not contain a new « host » kernel.

    8. Copy and register a depot from /opt/hpvm/guest-images/hpux/11iv3/hpvm_guest_depot.11iv3.sd on the host.
      swcopy -p -x enforce_dependencies=false -s /opt/hpvm/guest-images/hpux/11iv3/hpvm_guest_depot.11iv3.sd \* @ /var/depot/hpvm-guest
      swcopy -x enforce_dependencies=false -s /opt/hpvm/guest-images/hpux/11iv3/hpvm_guest_depot.11iv3.sd \* @ /var/depot/hpvm-guest
      swreg -l depot /var/depot/hpvm-guest
      
    9. Install VM tools on the VM.
      swinstall -p -x autoreboot=true -s hx05333:/var/depot/hpvm-guest HPVM-Guest vmProvider
      swinstall -x autoreboot=true -s hx05333:/var/depot/hpvm-guest HPVM-Guest vmProvider
      
    10. Check the VM; the output should be similar to the following lines.
      root@hx05374: /root/home/root # swlist -l bundle | egrep -i "HPVM|Integri"
        GuestAVIOStor         B.11.31.1009   HPVM Guest AVIO Storage Software
        GuestAvioLan          B.11.31.1009   HPVM Guest AVIO LAN Software
        HPVM-Guest            B.04.20.05     Integrity VM Guest
        VMGuestLib            B.04.20.05     Integrity VM Guest Support Libraries
        VMProvider            B.04.20.05     WBEM Provider for Integrity VM
      

The host and the VM should now be upgraded correctly.
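Looking back at section 1, the port table printed by `hpvmnet -S vmlan` is easy to post-process. A sketch that re-uses the two captured port lines to print the port-to-VLAN mapping:

```shell
# Two data lines captured from the "hpvmnet -S vmlan" output earlier
cat > /tmp/hpvmnet.sample <<'EOF'
1       Active       avio_lan 113      1            hx05374      none
2       Active       avio_lan 213      1            hx05374      none
EOF
# $1 = port number, $4 = untagged VLAN id, $6 = active VM
awk '{printf "port %s -> untagged vlan %s (%s)\n", $1, $4, $6}' /tmp/hpvmnet.sample
```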


Mar 08, 2011

Turning HP-UX into a 21st-century OS

Tag: Unix · Uggla @ 14:58

A post with a bit of humour, in reference to a colleague who calls HP-UX a 20th-century OS (which is not entirely wrong either…).

The Internet Express suite adds several open-source products that are common on Linux. The list is available on the site:

https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPUXIEXP1131

My favorite trio:

  • lsof
  • rsync
  • sudo

Simply indispensable, yet not available out of the box on HP-UX (yes, that is scary…).
It is better with these, but many tools that make life easier are still missing (advanced command completion, logrotate, an open-source shared filesystem (not cfs), gcc, mutt, etc.)


Mar 02, 2011

ssh tips and tricks

Tag: Unix · Uggla @ 18:02

A few ssh tips and tricks.

  • Ignore the host key :
    ssh -o stricthostkeychecking=no server
    
  • Batch mode: no « prompt » (password, passphrase) is issued; basically it either works or it doesn't :
    ssh -o batchmode=yes server
    
  • Quiet mode: no warning or diagnostic messages are displayed :
    ssh -q server
    
  • Run a command containing single quotes (e.g. awk). Each embedded quote must be protected by writing it as '"'"' :
    ssh server 'vgdisplay | grep "VG Name" | awk '"'"'{print $NF}'"'"''
    
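The last trick can be verified locally by replacing the remote ssh call with `sh -c` (same quoting, made-up input):

```shell
# Each embedded single quote is written as '"'"' : close the quote, emit a
# double-quoted single quote, then reopen the quote
echo 'one two three' | sh -c 'awk '"'"'{print $NF}'"'"''
# -> three
```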

Mar 02, 2011

Fixing HP-UX bdf output

Tag: Unix · Uggla @ 10:56

HP-UX bdf sometimes emits an annoying line break, as below with /appl/autotree.
The problem is that the wrapped line no longer has the same number of columns (5 instead of 6), which prevents summing them.

root@hx04031a: /root/home/root # bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    1048576  298432  744424   29% /
/dev/vg00/lvol1    1835008  329800 1493520   18% /stand
/dev/vg00/lvol8    8912896 7199008 1703728   81% /var
/dev/vg00/lvol9    10485760 3833009 6238725   38% /var/adm/crash
/dev/vg00/lvol7    5505024 3088424 2397816   56% /usr
/dev/vg00/lvol4     524288  115672  405496   22% /tmp
/dev/vg00/lvol6    9633792 6518920 3090656   68% /opt
/dev/vg00/lvol5     131072   16248  113960   12% /home
/dev/vg00/lvautotree
1048576   41159  944483    4% /appl/autotree

The « command » below uses perl to fix the problem.


root@hx04031a: /root/home/root # bdf  | sed 's/^\//###\//g' | perl -pe 's/\s+/\t/g' | perl -pe 's/\n//g' | perl -pe 's/###/\n/g' && echo
Filesystem      kbytes  used    avail   %used   Mounted on
/dev/vg00/lvol3 1048576 298496  744360  29%     /
/dev/vg00/lvol1 1835008 329800  1493520 18%     /stand
/dev/vg00/lvol8 8912896 7199016 1703728 81%     /var
/dev/vg00/lvol9 10485760        3833009 6238725 38%     /var/adm/crash
/dev/vg00/lvol7 5505024 3088424 2397816 56%     /usr
/dev/vg00/lvol4 524288  115672  405496  22%     /tmp
/dev/vg00/lvol6 9633792 6518920 3090656 68%     /opt
/dev/vg00/lvol5 131072  16248   113960  12%     /home
/dev/vg00/lvautotree            1048576 41159   944483  4%      /appl/autotree

The columns can now be summed.

bdf  | sed 's/^\//###\//g' | perl -pe 's/\s+/\t/g' | perl -pe 's/\n//g' | perl -pe 's/###/\n/g' \
| awk '{sumt+=$2;sumu+=$3;suma+=$4}END{printf("Total : %.2f %.2f %.2f\n",sumt/1024,sumu/1024,suma/1024)}'

Total : 38208.00 20938.31 16731.11


Mar 01, 2011

Handling spaces in « for » loops with IFS

Tag: Scripts · Uggla @ 14:17

A trick for handling spaces in shell « for » loops (bash, ksh).
Nothing beats a small example to show the problem :

root@hx04970: /root/home/root/nene # ll
total 0
-rw-r--r--   1 root       sys              0 Feb 28 14:09 1
-rw-r--r--   1 root       sys              0 Feb 28 14:09 2
-rw-r--r--   1 root       sys              0 Feb 28 14:09 3
drwxr-xr-x   2 root       sys             96 Feb 28 12:04 tata tutu
drwxr-xr-x   2 root       sys             96 Feb 28 12:03 titi
drwxr-xr-x   2 root       sys             96 Feb 28 12:03 toto

Suppose I loop over all the files in the tree above.

root@hx04970: /root/home/root/nene # for i in *;do echo $i;done
1
2
3
tata tutu
titi
toto

Here everything goes well: the « tata tutu » directory is handled correctly.

Now let's make it a bit harder by looping over the directories only :

root@hx04970: /root/home/root/nene # for i in $(find . -type d);do echo $i;done
.
./toto
./titi
./tata
tutu

And there it all falls apart :) .

The space in the « tata tutu » directory name is seen as a separator, and the shell believes we have two directories, tata and tutu.
A good Unix administrator knows that spaces should be avoided; they are evil. (Just like: never cross the streams, that's bad → see Ghostbusters.)

Fine! But if the directory is a CIFS share used by Windows users, chances are the file names will be full of spaces.

The best way to solve the problem above is to use the IFS « variable ».
A quick man ksh gives :

IFS Internal field separators, normally space, tab,
and newline that are used to separate command
words resulting from command or parameter
substitution, and for separating words with the
special command read. The first character of the
IFS parameter is used to separate arguments for
the « $* » substitution (see Quoting below).

IFS is the « variable » that defines the separators; by default space, tab, and newline.

By redefining the separator without space and tab, we get :

root@hx04970: /root/home/root/nene # IFS=$'\n' && for i in $(find . -type d);do echo $i;done
.
./toto
./titi
./tata tutu

The problem is solved.

To go further : http://tldp.org/LDP/abs/html/internalvariables.html#IFSH
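The demo can be reproduced on any system (the $'\n' syntax requires bash or ksh93):

```shell
# One directory name contains a space on purpose
mkdir -p /tmp/ifsdemo/'tata tutu' /tmp/ifsdemo/titi /tmp/ifsdemo/toto
cd /tmp/ifsdemo
IFS=$'\n'                                   # split on newlines only
for i in $(find . -type d | sort); do echo "$i"; done
unset IFS                                   # restore the default separators
```

The loop prints « ./tata tutu » as a single entry instead of two.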


Mar 01, 2011

Adding a group to a user (HP-UX)

Tag: Unix · Uggla @ 11:10

A small tip for HP-UX.

The HP-UX usermod command has no append option, unlike Linux.

Adding a group to a user therefore requires redefining the user's group list entirely.

The « command » below simplifies this operation :

account=grid && newgrp=dclicdba && echo "usermod -g $(id -g $account) -G $(id -G $account | sed 's/ /,/g'),$newgrp $account"

Result :

usermod -g 107 -G 107,108,109,110,111,112,114,dclicdba grid

Just run the line above to add the group.
If the login is currently in use, add the -F option → usermod -F -g 107 -G 107,108,109,110,111,112,114,dclicdba grid
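The construction can be tried safely with the current user and a hypothetical group name, since the resulting usermod line is only printed, never executed:

```shell
account=$(id -un)      # use the current user instead of "grid"
newgrp=dclicdba        # hypothetical group to append
cmd="usermod -g $(id -g $account) -G $(id -G $account | sed 's/ /,/g'),$newgrp $account"
echo "$cmd"
```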


Feb 21, 2011

VG export/import procedure

Tag: Unix · Uggla @ 16:58

Procedure to export/import a VG from one machine to another.

  1. Export the VG map
    root@hx000140: /root/home/root # vgexport -s -v -p -m /tmp/stage.map /dev/stage
    Beginning the export process on Volume Group "/dev/stage".
    vgexport: Volume group "/dev/stage" is still active.
    /dev/dsk/c34t0d0
    /dev/dsk/c35t0d0
    vgexport: Preview of vgexport on volume group "/dev/stage" succeeded.
    
  2. Copy the map to the other node
    scp /tmp/stage.map hx000141:/tmp
    
  3. Import in preview mode (-p) to check everything is OK
    root@hx000141: /root/home/root # vgimport -s -v -p -m /tmp/stage.map /dev/stage
     
    Beginning the import process on Volume Group "/dev/stage".
    Logical volume "/dev/stage/stagelv" has been successfully created
    with lv number 1.
    vgimport: Volume group "/dev/stage" has been successfully created.
    Warning: A backup of this volume group may not exist on this machine.
    Please remember to take a backup using the vgcfgbackup command after activating the volume group.
    
  4. Import the VG
    root@hx000141: /root/home/root # vgimport -s -v -m /tmp/stage.map /dev/stage
    Beginning the import process on Volume Group "/dev/stage".
    Logical volume "/dev/stage/stagelv" has been successfully created
    with lv number 1.
    vgimport: Volume group "/dev/stage" has been successfully created.
    Warning: A backup of this volume group may not exist on this machine.
    Please remember to take a backup using the vgcfgbackup command after activating the volume group.
    
  5. Check that the VG is now in the configuration (it is not activated yet, hence the messages below)
    root@hx000141: /root/home/root # vgdisplay stage
    vgdisplay: Volume group not activated.
    vgdisplay: Cannot display volume group "stage".
    
  6. Unmount the filesystem on the 1st node
    root@hx000140: /root/home/root # umount /staging
    
  7. Deactivate the VG on the 1st node
    root@hx000140: /root/home/root # vgchange -a n /dev/stage
    Volume group "/dev/stage" has been successfully changed.
    
  8. Export the VG on the 1st node (this removes its configuration)
    root@hx000140: /root/home/root # vgexport /dev/stage
    vgexport: Volume group "/dev/stage" has been successfully removed.
    
  9. Activate the VG on the 2nd node
    root@hx000141: /root/home/root # vgchange -a y /dev/stage
    Activated volume group.
    Volume group "/dev/stage" has been successfully changed.
    
  10. Check that the VG is OK on the 2nd node
    root@hx000141: /root/home/root # vgdisplay -v /dev/stage
    --- Volume groups ---
    VG Name                     /dev/stage
    VG Write Access             read/write
    VG Status                   available
    Max LV                      255
    Cur LV                      1
    Open LV                     1
    Max PV                      16
    Cur PV                      1
    Act PV                      1
    Max PE per PV               32000
    VGDA                        2
    PE Size (Mbytes)            32
    Total PE                    31999
    Alloc PE                    31360
    Free PE                     639
    Total PVG                   0
    Total Spare PVs             0
    Total Spare PVs in use      0
    VG Version                  1.0
    VG Max Size                 16000g
    VG Max Extents              512000
     
       --- Logical volumes ---
       LV Name                     /dev/stage/stagelv
       LV Status                   available/syncd
       LV Size (Mbytes)            1003520
       Current LE                  31360
       Allocated PE                31360
       Used PV                     1
     
    
       --- Physical volumes ---
       PV Name                     /dev/dsk/c33t0d0
       PV Name                     /dev/dsk/c35t0d0 Alternate Link
       PV Status                   available
       Total PE                    31999
       Free PE                     639
       Autoswitch                  On
       Proactive Polling           On
    
  11. Update /etc/fstab
    Comment out:
    /dev/stage/stagelv /staging vxfs defaults 0 2
    in /etc/fstab on the 1st node.
    
    Uncomment:
    /dev/stage/stagelv /staging vxfs defaults 0 2
    in /etc/fstab on the 2nd node.
    
  12. Check that the mount point exists
    root@hx000141: /root/home/root # ll -d /staging
    drwxr-xr-x   2 root       sys             96 Feb  3 15:12 /staging
    
  13. Mount and check
    root@hx000141: /root/home/root # mount -aQ
    
    root@hx000141: /root/home/root # mount
    / on /dev/vg00/lvol3 ioerror=mwdisable,largefiles,delaylog,dev=40000003 on Thu Jan  6 14:54:02 2011
    /stand on /dev/vg00/lvol1 ioerror=mwdisable,nolargefiles,log,tranflush,dev=40000001 on Thu Jan  6 14:54:12 2011
    /var on /dev/vg00/lvol8 ioerror=mwdisable,largefiles,delaylog,dev=40000008 on Thu Jan  6 14:54:53 2011
    /var/adm/crash on /dev/vg00/lvol10 ioerror=mwdisable,largefiles,delaylog,dev=4000000a on Thu Jan  6 14:54:53 2011
    /usr on /dev/vg00/lvol7 ioerror=mwdisable,largefiles,delaylog,dev=40000007 on Thu Jan  6 14:54:54 2011
    /tmp on /dev/vg00/lvol4 ioerror=mwdisable,largefiles,delaylog,dev=40000004 on Thu Jan  6 14:54:54 2011
    /opt on /dev/vg00/lvol6 ioerror=mwdisable,largefiles,delaylog,dev=40000006 on Thu Jan  6 14:54:54 2011
    /home on /dev/vg00/lvol5 ioerror=mwdisable,largefiles,delaylog,dev=40000005 on Thu Jan  6 14:54:54 2011
    /appl on /dev/vg01/lvol1 ioerror=mwdisable,largefiles,delaylog,dev=80000001 on Thu Jan  6 14:54:55 2011
    /net on -hosts ignore,indirect,nosuid,soft,nobrowse,dev=4000002 on Thu Jan  6 14:56:19 2011
    /staging on /dev/stage/stagelv ioerror=mwdisable,largefiles,delaylog,dev=40010001 on Thu Feb  3 15:13:08 2011
    
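A small companion trick for step 10: pulling a single figure out of the vgdisplay output with awk, demonstrated on a captured sample rather than a live VG:

```shell
# A few lines captured from the vgdisplay -v output above
cat > /tmp/vgdisplay.sample <<'EOF'
Total PE                    31999
Alloc PE                    31360
Free PE                     639
EOF
awk '/Free PE/{print $3}' /tmp/vgdisplay.sample   # -> 639
```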
