Popular Posts

Dec 6, 2018

NFS Entries Not Mounting On Boot In Solaris 10 or Solaris 11


APPLIES TO :   Solaris 10 (3/05 release and later) or Solaris 11


SYMPTOMS  :   An NFS file system that is configured to mount at boot (the "mount at boot" field in /etc/vfstab is set to "yes") is not mounted at boot time; the mount is not even attempted.


CAUSE  :   The Solaris 10 11/06 release introduced the "Secure by Default" profile.  One of the services disabled by default under this profile is svc:/network/nfs/client:default.
Unless nfs/client:default is enabled, along with all of its required dependencies, NFS entries in /etc/vfstab will not be mounted at boot time.
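For reference, a typical NFS entry in /etc/vfstab looks like the line below; the server name, paths and options here are only illustrative, not from the original case:

                          # device to mount       device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
                          nfsserver:/export/data  -               /data        nfs      -          yes            rw,bg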


SOLUTION  :

1.           Use the recursive "-r" option to svcadm(1M) to ensure all required dependencies of nfs/client are also enabled:
                          # svcadm enable -r svc:/network/nfs/client:default

2.           Ensure nfs/client is running normally and all "required" dependencies are met prior to reboot:
                          # svcs -l nfs/client

3.           Reboot the system and check /etc/mnttab or "df -h" output to confirm.
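In addition to svcs -l, the dependency and diagnostic views of svcs(1) can help confirm that nothing nfs/client requires is still disabled; a brief sketch:

                          # svcs -d nfs/client          (list the services nfs/client depends on, with their states)
                          # svcs -x nfs/client          (explain why a service is not online, if it is not)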


NOTE: this setting is per zone - nfs/client must be enabled in non-global zones that act as NFS clients as well as in the global zone.  Each zone is effectively an independent NFS client.
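For a non-global zone, the service can also be enabled from the global zone with zlogin(1); a brief sketch, where "myzone" is a hypothetical zone name:

                          # zlogin myzone svcadm enable -r svc:/network/nfs/client:default
                          # zlogin myzone svcs nfs/client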



~Judi~

NFS not mounting in Solaris 10
NFS not mounting in Solaris 11
NFS not mounting at boot 
NFS mount point not mounting at boot 
nfs mounts not mounting on reboot
mount NFS at reboot
mount nfs after boot
vfstab not mounting at boot
nfs mount fails on boot
nfs mount missing after reboot
nfs entries in /etc/fstab not mounting on boot

Nov 9, 2018

Replace a ZFS Root Pool Disk with Another Disk




APPLIES TO : Solaris 11.3 LDOM

ISSUE : The server is running on a 100GB OS LUN; the OS needs to be moved to another disk provided from a different storage array.

GOAL :  Replace the ZFS root pool disk with another disk, moving the OS from the current LUN to the disk provided from the other storage array.

SOLUTION :  Attach the secondary disk to the root pool to form a mirror, boot the server from the secondary disk to verify it works, then detach the primary disk.
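As a quick reference, the commands used in the example below, condensed into one sequence (the device names c1d0/c1d1 and the disk@1 path are the ones from this example and will differ on other systems):

                # zpool attach rpool c1d0 c1d1        (mirror the root pool onto the new disk)
                # zpool status rpool                  (wait until the resilver completes)
                # bootadm install-bootloader          (install the boot loader on the new disk)
                ... bring the system down to the OK prompt ...
                {0} ok boot /virtual-devices@100/channel-devices@200/disk@1
                # zpool detach rpool c1d0             (after verifying the boot, remove the old disk)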

EXAMPLE :  

1.      Verify the secondary disk c1d1 is visible in format output
                root@judi-dev-01:~# echo | format
                Searching for disks...done
                
                AVAILABLE DISK SELECTIONS:
                       0. c1d0 <3PARdata-VV-3212-100.00GB>
                          /virtual-devices@100/channel-devices@200/disk@0
                       1. c1d1 <3PARdata-VV-3212-100.00GB>
                          /virtual-devices@100/channel-devices@200/disk@1
                Specify disk (enter its number): Specify disk (enter its number):

                root@judi-dev-01:~#

2.      Check the zpool list
                root@judi-dev-01:~# zpool list
                NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
                rpool  99.5G  9.37G  90.1G   9%  1.00x  ONLINE  -

                root@judi-dev-01:~#

3.      Check the zpool Status
                root@judi-dev-01:~# zpool status
                  pool: rpool
                 state: ONLINE
                  scan: resilvered 7.37G in 30s with 0 errors on Thu Nov  8 11:51:10 2018

                config:

                        NAME    STATE     READ WRITE CKSUM
                        rpool   ONLINE       0     0     0
                          c1d0  ONLINE       0     0     0

                errors: No known data errors

                root@judi-dev-01:~#

4.      Verify the boot disk path (the server booted from disk@0)
                root@judi-dev-01:~# prtconf -vp | grep bootpath
                        bootpath:  '/virtual-devices@100/channel-devices@200/disk@0'

                root@judi-dev-01:~#

5.      Attach the secondary disk c1d1 to the pool
                root@judi-dev-01:~# zpool attach rpool c1d0 c1d1
                Make sure to wait until resilver is done before rebooting.

                root@judi-dev-01:~#

6.      Check the status of the pool and wait until the resilver completes
                root@judi-dev-01:~# zpool status
                  pool: rpool
                 state: ONLINE
                  scan: resilvered 7.37G in 34s with 0 errors on Thu Nov  8 12:34:45 2018

                config:

                        NAME        STATE     READ WRITE CKSUM
                        rpool       ONLINE       0     0     0
                          mirror-0  ONLINE       0     0     0
                            c1d0    ONLINE       0     0     0
                            c1d1    ONLINE       0     0     0

                errors: No known data errors

                root@judi-dev-01:~#

7.      Install the boot loader on the new disk after the resilver completes
                root@judi-dev-01:~# bootadm install-bootloader

                root@judi-dev-01:~#

8.      Bring the server down to the OK prompt (OBP) and list the disks
                {0} ok show-disks
                a) /reboot-memory@0
                b) /virtual-devices@100/channel-devices@200/disk@1

                c) /virtual-devices@100/channel-devices@200/disk@0

9.      Boot the server from the secondary disk at the OK prompt (OBP) [the server now boots from disk@1]
                {0} ok boot /virtual-devices@100/channel-devices@200/disk@1
                Boot device: /virtual-devices@100/channel-devices@200/disk@1  File and args:
                SunOS Release 5.11 Version 11.3 64-bit
                Copyright (c) 1983, 2018, Oracle and/or its affiliates. All rights reserved.

                Hostname: judi-dev-01

10.      Verify the boot disk path (the server should now have booted from the secondary disk, disk@1)
                root@judi-dev-01:~# prtconf -vp | grep bootpath
                        bootpath:  '/virtual-devices@100/channel-devices@200/disk@1'

                root@judi-dev-01:~#

                NOTE: If you face any issue booting the server from the secondary disk, roll back the change by booting the server from the primary disk:
                /virtual-devices@100/channel-devices@200/disk@0

                Follow the procedure below only if the server boots from the secondary disk without any issue.

11.      Check the zpool Status
                root@judi-dev-01:~# zpool status
                  pool: rpool
                 state: ONLINE
                  scan: resilvered 7.37G in 34s with 0 errors on Thu Nov  8 12:34:45 2018

                config:

                        NAME        STATE     READ WRITE CKSUM
                        rpool       ONLINE       0     0     0
                          mirror-0  ONLINE       0     0     0
                            c1d0    ONLINE       0     0     0
                            c1d1    ONLINE       0     0     0

                errors: No known data errors

                root@judi-dev-01:~#

12.      Detach the old (primary) disk c1d0 from the pool, since the server boots from the secondary disk without any issue.
                root@judi-dev-01:~# zpool detach rpool c1d0

                root@judi-dev-01:~#

13.      Check the status of the zpool; it now contains only the secondary disk
                root@judi-dev-01:~# zpool status
                  pool: rpool
                 state: ONLINE
                  scan: resilvered 7.37G in 34s with 0 errors on Thu Nov  8 12:34:45 2018

                config:

                        NAME    STATE     READ WRITE CKSUM
                        rpool   ONLINE       0     0     0
                          c1d1  ONLINE       0     0     0

                errors: No known data errors

                root@judi-dev-01:~#

14.      Reboot the server once more to confirm the change persists, then verify the boot disk path again.
                root@judi-dev-01:~# prtconf -vp | grep bootpath
                        bootpath:  '/virtual-devices@100/channel-devices@200/disk@1'

                root@judi-dev-01:~#



Known Errors :  You may get the error below while booting the server from the secondary disk.
                {0} ok boot /virtual-devices@100/channel-devices@200/disk@1
                Boot device: /virtual-devices@100/channel-devices@200/disk@1  File and args:

                ERROR: Last Trap: Fast Data Access MMU Miss

                After attaching the secondary disk, you might have used the installboot command as below:
                installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/cwtxdys0

                That command is not the right one for this procedure; the correct command to install the boot loader is:
                bootadm install-bootloader
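If the system has more than one ZFS pool, the target root pool can also be named explicitly; a brief sketch using the -P option of bootadm(1M) in Solaris 11, with the pool name rpool from this example:

                root@judi-dev-01:~# bootadm install-bootloader -P rpool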


~Judi~


Solaris Root Mirror
zpool detach disk and boot
zpool replace disk
Replace a ZFS Root Pool
single disk root pool
solaris 11 zpool mirror
zpool mirror replace disk
zpool mirror status

zpool mirror rpool
Replace a ZFS Root Pool to another disk



Oct 25, 2018

Increase the /tmp Filesystem Online Without Reboot in Solaris


APPLIES TO : Solaris 10

ISSUE : /tmp was configured as a 512MB mount and that limit has been exceeded; unfortunately the server cannot be rebooted and /tmp cannot be unmounted (a bunch of files are open and in use), or mount_tmpfs does not support the "remount" option ("mount -F tmpfs -o remount /tmp").

GOAL :  Increase the /tmp file system size online, without rebooting the Solaris 10 server.

SOLUTION :  Increase the /tmp size limit to 3GB by modifying the live tmpfs tm_anonmax value with mdb.

This example shows how I grew the 512MB mounted /tmp to 3GB on a Solaris 10 8/11 u10 SPARC system. 

Please don't try this on your system. If you don't listen to me, then keep in mind that the hex addresses and values will differ on your system.


EXAMPLE :  

1.      Get the relevant info for /tmp
                JUDI-DEV-001 # df -h /tmp
                Filesystem             size   used  avail capacity  Mounted on
                swap                   512M   152K   512M     1%    /tmp
                JUDI-DEV-001 #


                JUDI-DEV-001 # echo "::fsinfo" | mdb -k | egrep "VFSP|/tmp"
                            VFSP FS              MOUNT
                0000060019aaea80 tmpfs           /tmp
                JUDI-DEV-001 #

2.      Get the address of the tm_anonmax to set its value.
                JUDI-DEV-001 # echo "0000060019aaea80::print vfs_t vfs_data | ::print -ta struct tmount tm_anonmax" | mdb -k
                600167d0810 ulong_t tm_anonmax = 0x10000
                JUDI-DEV-001 #
        NOTE: this shows the address and the current value of tm_anonmax

3.      Set the new value
                JUDI-DEV-001 # echo "600167d0810/Z 0x60000" | mdb -kw
                0x600167d0810:  0x10000                 =       0x60000
                JUDI-DEV-001 #
                NOTE: tm_anonmax is counted in 8KB pages: 0x60000 = 393216 pages, and 393216 pages * 8KB = 3072MB = 3GB (the original 0x10000 = 65536 pages * 8KB = 512MB). A short sketch for computing this value for other sizes follows the example.



4.      Check if it's set.
                JUDI-DEV-001 # echo "600167d0810/J" | mdb -k
                0x600167d0810:  60000
                JUDI-DEV-001 #

        OR
                JUDI-DEV-001 # echo "60019aaea80::print vfs_t vfs_data | ::print struct tmount tm_anonmax" | mdb -k
                tm_anonmax = 0x60000
                JUDI-DEV-001 #



5.      Check if it's working or not.
                JUDI-DEV-001 # df -h /tmp
                Filesystem             size   used  avail capacity  Mounted on
                swap                   3.0G   152K   3.0G     1%    /tmp

                JUDI-DEV-001 #
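For reference, the value written to tm_anonmax is the desired /tmp size expressed in 8KB pages. A brief sketch of the arithmetic using bc(1), assuming the 8KB page size used in this example and the 3GB (3072MB) target:

                JUDI-DEV-001 # echo "obase=16; 3072*1024/8" | bc
                60000
                JUDI-DEV-001 #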

This modification will not persist across a reboot; after the server is rebooted, the default configured /tmp size (512MB) will be mounted again. To make the change permanent, update the /etc/vfstab entry for /tmp, using "-" (or a larger size= value) in place of the size=512m option, as shown below.
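For reference, a /tmp entry in /etc/vfstab typically looks like the lines below; the before/after values are only illustrative of this example:

                #device to mount  device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
                swap              -               /tmp         tmpfs    -          yes            size=512m
                (after the change: either "-" for no limit, or a larger value such as size=3072m)
                swap              -               /tmp         tmpfs    -          yes            -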

CONCLUSION : 
The example above is a real one that I performed myself on a Solaris 10 system.
I believe this approach can be used on any Solaris version where /tmp is a tmpfs file system.


Credits and More details :-
http://ilapstech.blogspot.com/2009/11/grow-tmp-filesystem-tmpfs-on-line-under.html




~Judi~



