Popular Posts

Jan 31, 2018

File system utilization report in mail



Issue :
Very often file systems fill up with application logs and other data. On servers that do not have proper monitoring in place, this can cause the server to panic.

Goal :
Deploy a script that monitors file system usage against a defined threshold and notifies the support and application teams to clear space.

Solution :
A script has been created to monitor file system usage with a threshold of 85%. It runs from cron every 30 minutes and sends a mail to the configured mail IDs if any file system is more than 85% full.

#!/bin/ksh
#disk_usage.sh - Monitor the disk usage and alert the support/application team
################################
#       Begin               
#       Author : Roselin John
#       Version 0.1
# -
# -
# -
# -
# -
################################

HOST=`uname -n`
> /root/scripts/disk_log
> /root/scripts/disk_log.txt
df -k | sed '1d' | awk '{ if (int($5) > 85) {print "Filesystem", $6, "on Server '$HOST' is", $5, "used, Please clear space"}}' > /root/scripts/disk_log
if [ -s /root/scripts/disk_log ] ; then
unix2dos /root/scripts/disk_log /root/scripts/disk_log.txt
mailx -s "Disk Monitor Alert" judi@gmail.com < /root/scripts/disk_log.txt
fi


Add the below entry in cron so the script runs every 30 minutes
0,30 * * * * /root/scripts/disk_usage.sh > /dev/null 2>&1
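Before relying on cron, the script can be run once by hand to confirm the threshold logic and the mail delivery (a quick check using the same paths as above):

           /root/scripts/disk_usage.sh
           cat /root/scripts/disk_log.txt

If no file system is above 85%, disk_log.txt will be empty and no mail is sent; temporarily lowering the awk threshold (for example int($5) > 1) is an easy way to test the mail delivery end to end.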

File System Monitoring
File System Monitoring script
Monitor File system changes
filesystem usage

Jan 18, 2018

How to Deploy a System From a Unified Archive using AI



GOAL : This document shows how to deploy a Unified Archive (UAR) on Solaris 11.3 using an AI install service.

An AI service is already running with a default OS deployment.
Install Solaris 11.3 and, once the standard configuration is set, clone the server using archiveadm.

Create a unified archive or UAR :
                  #  archiveadm create --root-only -e -z global -D rpool/UAR /UAR/sparc11.3.uar

The installer service paths are below
HTTP PATH  :  http://192.168.1.10:5555/uar.d/
Physical path :  /var/ai/image-server/images/uar.d/sparc11.3.uar - Place the UAR file in this path.
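Copy the archive into the image-server path and confirm it is reachable over HTTP; the curl check below is just one way to verify (wget works as well):

                  #  cp /UAR/sparc11.3.uar /var/ai/image-server/images/uar.d/
                  #  curl -I http://192.168.1.10:5555/uar.d/sparc11.3.uar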

Create a /tmp/manifest-ARI.xml manifest file with the below content.
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="manifest-ARI" auto_reboot="true">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
        </zpool>
      </logical>
    </target>
    <software type="ARCHIVE">
      <source>
        <file uri="http://192.168.1.10:5555/uar.d/sparc11.3.uar"/>
      </source>
      <software_data action="install">
        <name>global</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>


Creating the Manifest under the Service :
                  #  installadm create-manifest -n default-sparc -m manifest-UAR -f /tmp/manifest-ARI.xml

Set this manifest as default :
                  #  installadm set-service -M manifest-UAR -n default-sparc

Add the client to the install service :
                  #  installadm create-client -n default-sparc -e 00:15:4g:1e:1g:48
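The service, manifest and client associations can be verified with installadm list before booting the client (output formats vary slightly across Solaris 11 updates):

                  #  installadm list -m -n default-sparc
                  #  installadm list -c -n default-sparc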

Start the client with the below commands to boot from the network installation :
                  OK>   setenv network-boot-arguments host-ip=192.168.1.75,router-ip=192.168.1.1,subnet-mask=255.255.255.0,hostname=judi-dev-01,file=http://192.168.1.10:5555/cgi-bin/wanboot-cgi
                  OK>   printenv network-boot-arguments
                  OK>   boot net - install

The OS installation will start from the AI server's .uar image.


uar deployment
unified archive solaris 11.3
Creating a Unified Archive
How to Deploy a System From Unified Archive
System Recovery and Cloning With the Oracle Solaris Unified Archive
Oracle Solaris 11.3 Downloads - Unified Archives
How to Migrate a Non-Global Zone Using Unified Archives
Unified Archive Types
Create and Deploy a Clone Archive

~Judi~



Jan 4, 2018

Solaris Volume Manager (SVM) Command Line Reference


Commands and Configuration files location:
For Solaris Volume Manager Solaris 8, 9, 10:
           Commands are  in /usr/sbin/
           Configuration files are in /etc/lvm/

Configuration files used with metadb.
           SDS:   /etc/system, /etc/opt/SUNWmd/mddb.cf
           SVM:  /etc/lvm/mddb.cf /kernel/drv/md.conf

The md.tab file is located in /etc/lvm/md.tab (SVM) and /etc/opt/SUNWmd/md.tab (SDS). The file may be used to automatically create metadevices.

The file is empty (by default)
You may populate the file by appending the output of # metastat -p, for example # metastat -p >> /etc/lvm/md.tab
The md.tab file is never used unless the administrator issues a metainit command to read it.
The most common usages are # metainit -a (create all metadevices in md.tab) and # metainit dxx (create metadevice dxx only).
Best used in recovery of SVM configurations (see the sketch below).
Not recommended to be used on the root file system.
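As a concrete illustration of the recovery use, the current configuration can be captured into md.tab and replayed later; the -n flag makes metainit check the entries without actually creating anything (a minimal sketch assuming the default /etc/lvm/md.tab location):

           # metastat -p >> /etc/lvm/md.tab
           # metainit -n -a       (syntax check of every entry in md.tab)
           # metainit -a          (recreate the metadevices)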

Local Database Replicas aka local metadb:
Put 3 replicas on c0t0d0s7.
           # metadb -a -f -c 3 c0t0d0s7

Create 3 more copies each on two disk drive slices.
           # metadb -a -c 3 c0t1d0s7
           # metadb -a -c 3 c0t2d0s7

Deleting Replicas.
           # metadb -d c0t1d0s7

Deleting your last replica (your SVM configuration will be gone.)
           # metadb -d -f c0t0d0s7

Checking meta database status:
           # metadb -i

Creating a Concatenation:

Creating a Concatenation from slice 2 of 3 disk drives:
           # metainit d1 3 1 c0t1d0s2 1 c1t1d0s2 1 c2t1d0s2
           d1 - the metadevice
           3   - the number of components to concatenate together
           1   - the number of devices for each component.

Creating a Simple Stripe from slice 2 of 3 disk drives.
           # metainit d2 1 3 c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k
           d2      - the  metadevice
           1        - the number of stripes (components) to concatenate
           3        - the number of devices in each stripe.
           -i 16k - the stripe segment size

A more complicated example: three two-disk stripes are concatenated together.
           # metainit d3 3 2 c0t1d0s2 c1t1d0s2  -i 16k 2 c3t1d0s2 c4t1d0s2  -i 16k 2 c6t1d0s2 c7t1d0s2  -i 16k
           d3     - the metadevice
           3       - the number of stripes
           2       - the number of disk (slices) in each stripe
           -i 16k - the stripe segment size.

Growing, extending a metadevice
           # metattach d1 c3t1d0s2
         
extends a metadevice by concatenating a slice to the end. It does not expand the filesystem; you have to grow the UFS filesystem once the metadevice has been extended.
           # growfs /dev/md/rdsk/d1
           If the metadevice is not mounted, the above command extends the filesystem to include the added section. You cannot shrink this filesystem later.
         
           # growfs -M /export/home /dev/md/rdsk/d1
           If the metadevice is mounted, the above command will extend the filesystem to include the concatenated section. Again, you cannot make the filesystem smaller later.

Removing a metadevice
           # metaclear d3
           d3 is the metadevice being removed.
           # metaclear -a clears, deletes all metadevices. Don't do this unless you want to blow away your entire configuration.
           The devices cannot be open for use, i.e. mounted.

Viewing your configuration and status:
Shows the configuration of all metadevices in md.tab format
           # metastat -p

Will tell the configuration and status of just metadevice d3
           # metastat d3
         
Tells the location and status of locally configured replicas
           # metadb
           Note: these commands display the configuration of the local metadevices and replicas, not those in disksets (metasets). For metasets, add -s <setname>

Hot Spare pools:
Sets up a pool called hsp001. It contains no disks yet.
           # metainit hsp001
         
Adds a slice to the hot spare pool. 
           # metahs -a hsp001 c0t1d0s4

Adds a slice to all pools         
           # metahs -a all c1t1d0s4
         
Makes a hot spare pool available to the metadevice d1 {submirror or RAID5}
           # metaparam -h hsp001 d1
         
Reenables a hot spare that was previously unavailable
           # metahs -e c1t1d0s4

Replaces the first disk slice listed with the second
           # metahs -r hsp001 c1t1d0s4 c2t1d0s4

Removes a disk slice from all hot spare pools
           # metahs -d all c1t1d0s4

Removes a disk slice from hsp001
           # metahs -d hsp001 c1t1d0s4

Removes a hot spare pool
           # metahs -d hsp001

Reports the Disksuite/LVM status
           # metahs -i
           # metastat

Mirrors:

           # metainit d0 -m d1
           Makes a one-way mirror. d0 is the device to mount, but d1 is the only submirror associated with an actual device.
           A "one-way mirror" is not really a mirror yet: there is only one place where the data is actually stored, namely d1.

Attaches d2 to the d0 mirror.
           # metattach d0 d2
           Now there are two places where the data is stored, d1 and d2, but you still mount the metadevice d0.

Detaches d1 from the d0 mirror
           # metadetach d0 d1

Suspends/resumes use of d2 device on d0 mirror
           # metaoffline d0 d2
           # metaonline d0 d2

Replaces first disk listed with second on the d0 mirror
           # metareplace d0 c1t0d0s2 c4t1d0s2

Re-enables a disk that has been errored.
           # metareplace -e d0 c1t1d0s2


Mirroring root:
You must take a few extra steps to mirror the root partition
           # metainit -f d1 1 1 c0t3d0s0 <-- the root partition
           # metainit d0 -m d1
           # metaroot d0
           The metaroot command updates /etc/system and /etc/vfstab so that the device /dev/md/dsk/d0 is now the root device.
           Note: it is recommended to take a copy of /etc/system and /etc/vfstab before running the metaroot command or making any changes to these files.

You must reboot with a one-way mirror: do not create a two-way mirror before rebooting, otherwise the system can crash because data is read from the submirrors in a round-robin manner and the second submirror is not yet in sync.
           # metainit d2 1 1 c0t4d0s0
           # metattach d0 d2
           Now d2 is attached and data is mirrored on d1 and d2.

Note : this procedure mirrors only /. If you want to mirror the whole system disk, do not forget to mirror the swap slice and every other slice holding a file system, using the above procedure of creating submirrors and mirrors.
In this case, remember that the metaroot command only modifies the / entry in vfstab, so you have to manually edit the other system disk entries in /etc/vfstab to use the metadevice paths before rebooting (see the example below).
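As an illustration, mirroring the swap slice the same way might look like the following; the metadevice names d10, d11, d12 and the slices used are hypothetical, chosen only for this sketch:
           # metainit -f d11 1 1 c0t3d0s1
           # metainit d12 1 1 c0t4d0s1
           # metainit d10 -m d11
           Edit /etc/vfstab so the swap entry points to the mirror before rebooting:
           /dev/dsk/c0t3d0s1   -   -   swap   -   no   -      becomes
           /dev/md/dsk/d10     -   -   swap   -   no   -
           After the reboot, attach the second submirror:
           # metattach d10 d12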

Raid 5:
Sets up a RAID 5 configuration.
           # metainit d1 -r c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k
           The -i option is the same as in striping.

Replacing disks as in the mirror.
           # metareplace d1 c2t3d0s2 c3t1d0s2
           # metareplace -e d1 c0t1d0s2

Concatenates a disk to the end of the RAID 5 configuration.
           # metattach d1 c4t3d0s2

Adds a hot spare pool
           # metaparam -h hsp001 d1

Removes a metadevice
           # metaclear d1

Tells status
           # metastat



UFS logging:   (obsolete, UFS has this now by default)
Sets up a trans device d0 with d1 as the master and d2 as the logging device.
           # metainit d0 -t d1 d2
           Recommended: 1MB of log per 1GB of data on the master
Same as above
           # metainit d0 -t c0t1d0s2 c3t2d0s5

Attaching and detaching a log device on/from d0
           # metattach d0 d1
           # metattach d0 c3t1d0s5
           # metadetach d0

Disksets:
You can do almost everything the same way, except that you specify -s <diskset>.
Disks are repartitioned when they are put into a diskset, unless slice 2 is zeroed out and slice 7 already has cylinders 0-4 or 0-5 allocated to it for the diskset metadb (the partition table can be checked as shown below).

           # command -s <setname> options
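
One way to check an existing partition table before adding a disk to a set is prtvtoc (the disk name below is just an example):
           # prtvtoc /dev/rdsk/c2t0d0s2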

Adds hosts to a set
           # metaset -s <setname> -a -h <hostname1> <hostname2>

Adds drives to a set. Notice we do not specify slice
           # metaset -s <setname> -a c2t0d0 c2t1d0 c2t2d0 c2t3d0

Removes hosts and drives.
           # metaset -s <setname> -d c2t3d0
           # metaset -s <setname> -d -h <hostname>

Release control of a diskset:
           # metaset -s <setname> -r

View the errored metadevice alone
           # metastat | awk '/State:/ { if ( $2 != "Okay" ) if ( prev ~ /^d/ ) print prev, $0 } { prev = $0 }' 

View the metadevice status

           # metastat | awk '/State:/ {if ( prev ~ /^d/ ) print prev, $0 } { prev = $0 }' #Will get all Metadevice status


Take control of a diskset. The -f option will force control, but it will panic the other machine unless the set has already been released from the other host (the take command itself is shown below).
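The take operation uses metaset with the -t option (and -f to force):

           # metaset -s <setname> -t
           # metaset -s <setname> -t -f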

Solaris Volume Manager (SVM): Best Practices for Creation and Implementation of Soft Partitions ( Doc ID 1417827.1 )
Solaris Volume Manager (SVM) Command Line Reference ( Doc ID 1011732.1 )
Analyzing Internal non-RAID Disk Failures for x64 Solaris ( Doc ID 1017472.1 )


