Popular Posts

Nov 28, 2017

Fine tune zfs for Solaris application and database servers

Memory Management Between ZFS and Applications in Oracle Solaris 11.x (Doc ID 1663862.1)

APPLIES TO: Solaris Operating System - Version 11.1 and later

The user_reserve_hint_pct Parameter
Solaris 11.2 and Solaris 11.1 SRU 20.5 or newer include a new user_reserve_hint_pct tunable parameter that provides a hint to the system about application memory usage. This hint is used to limit growth of the ZFS ARC cache so that more memory stays available for applications.
If user_reserve_hint_pct is tuned appropriately, memory that is returned to the freemem pool is less likely to be reused by the kernel. This, in turn, allows administrators to keep a reserve of free memory for future application demands by restricting growth of the ZFS ARC cache. While the motivation for using this parameter might include faster application startup and dynamic reconfiguration of memory boards, its primary use is to ensure that large memory pages stay available for demanding applications, such as databases.

A known problem scenario: user_reserve_hint_pct is set, but applications consume more memory than the reserved value. This is permitted because user_reserve_hint_pct is only a 'hint' for user-land applications, not a hard limit. If applications use more than the pre-defined value, this usually leads to system performance problems and hangs.

It is therefore very important to calculate a suitable value for user_reserve_hint_pct to avoid this situation. See the 'How to calculate a suitable value' heading below.

Description:
Informs the system about how much memory is reserved for application use, and therefore limits how much memory can be used by the ZFS ARC cache as the cache increases over time.
By means of this parameter, administrators can maintain a large reserve of available free memory for future application demands. The user_reserve_hint_pct parameter is intended to be used in place of the zfs_arc_max parameter to restrict the growth of the ZFS ARC cache.

Data Type 
    Unsigned Integer (64-bit)

Default
    0 (unset)

Range
    0 - 99% of physical memory

The minimum size of the ZFS ARC is 64 MB on systems with 16 GB of RAM or less, or 0.5% of physmem on systems with more than 16 GB of RAM.

Units 
    Percent. Values must be positive whole integers; negative numbers and floating-point values are not permitted.

Dynamic?
    Yes

Validation 
    Yes, the range is validated.
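The minimum-ARC rule under 'Range' above can be expressed as a small helper. This is a portable sketch; expressing all sizes in megabytes is a convention chosen here for illustration, not anything mandated by Solaris:

```shell
# Minimum ZFS ARC size per the rule above:
# 64 MB when physical memory <= 16 GB, otherwise 0.5% of physical memory.
# Input and output are in megabytes (illustrative convention).
min_arc_mb() {
  awk -v pm="$1" 'BEGIN {
    if (pm <= 16384) print 64
    else             printf "%d\n", pm * 0.005
  }'
}
```

For example, `min_arc_mb 16384` prints 64, while `min_arc_mb 131072` (128 GB) prints 655.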

When to Change:
For upward adjustments, increase the value if the initial value proves insufficient for application requirements over time, or if application demand on the system increases. Perform this adjustment only within a scheduled system maintenance window, and reboot the system after changing the value.

For downward adjustments, decrease the value if application requirements allow, and only in small steps of no more than 5% at a time.

How to calculate a suitable value:
Calculations can be performed one of two ways.
1) If the size of the ARC should be capped (i.e., when converting a previous zfs_arc_max value to user_reserve_hint_pct), use:
user_reserve_hint_pct = USER_RESERVE_HINT_MAX - (((Kernel + Defdump prealloc + ZFS Metadata + desired zfs_arc_max) / Total (physmem))*100)
USER_RESERVE_HINT_MAX = 99.  For this example zfs_arc_max = 1GB.  All values are specified in Megabytes.
99-(((3276 + 925 + 109 + 1024)/16384)*100) = 66.44  (66 rounded down)

2) If the amount of memory needed by applications is known, calculate it this way:
user_reserve_hint_pct = Application Demand
eg: The following assigns 8GB to apps/db leaving the remaining for the ARC and Kernel:
user_reserve_hint_pct = (8192/16384)*100 = 50
Note: Memory must remain available for the kernel, deferred dump, etc., to avoid system performance issues.
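Both methods above can be scripted. The following is a minimal portable sketch of the two calculations; all sizes are in megabytes, the function names are illustrative, and rounding down matches the worked example:

```shell
# Method 1: derive user_reserve_hint_pct from a desired ARC cap.
# Arguments: kernel, defdump-prealloc, zfs-metadata, desired zfs_arc_max,
# and total physmem, all in megabytes.
reserve_from_arc_cap() {
  awk -v k="$1" -v d="$2" -v m="$3" -v a="$4" -v p="$5" 'BEGIN {
    printf "%d\n", 99 - ((k + d + m + a) / p) * 100   # round down
  }'
}

# Method 2: derive it directly from known application demand (MB).
reserve_from_app_demand() {
  awk -v app="$1" -v p="$2" 'BEGIN { printf "%d\n", (app / p) * 100 }'
}
```

Using the numbers from the examples above, `reserve_from_arc_cap 3276 925 109 1024 16384` prints 66, and `reserve_from_app_demand 8192 16384` prints 50.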

If the value has not been previously set, the default is zero, so the -f flag must be used for the initial setting on a live system.

                         Download set_user_reserve.sh

# ./set_user_reserve.sh -p 60
./set_user_reserve.sh: 60 greater that 0; use -f to force upward adjustment

The following can take several minutes to complete depending on the current size of the ARC.

# ./set_user_reserve.sh -fp 60
Adjusting user_reserve_hint_pct from 0 to 60
Monday, March 30, 2015 04:59:47 PM BST : waiting for current value : 45 to grow to target : 60

Adjustment of user_reserve_hint_pct to 60 successful.
Make the setting persistent across reboot by adding to /etc/system

* Tuning based on MOS note 1663861.1, script version 1.0
* added Monday, March 30, 2015 05:09:53 PM BST by system administrator : <me>
set user_reserve_hint_pct=60


# vi /etc/system
(Append the lines shown above. Save and quit vi.)

# tail /etc/system
*
*       To set a variable named 'debug' in the module named 'test_module'
*
*               set test_module:debug = 0x13


* Tuning based on MOS note 1663861.1, script version 1.0
* added Monday, March 30, 2015 05:09:53 PM BST by system administrator : <me>
set user_reserve_hint_pct=60



~Judi~










CIDR Netmask Chart - Subnet Mask Information

CIDR    Netmask            Hex
/1      128.0.0.0          80000000
/2      192.0.0.0          C0000000
/3      224.0.0.0          E0000000
/4      240.0.0.0          F0000000
/5      248.0.0.0          F8000000
/6      252.0.0.0          FC000000
/7      254.0.0.0          FE000000
/8      255.0.0.0          FF000000
/9      255.128.0.0        FF800000
/10     255.192.0.0        FFC00000
/11     255.224.0.0        FFE00000
/12     255.240.0.0        FFF00000
/13     255.248.0.0        FFF80000
/14     255.252.0.0        FFFC0000
/15     255.254.0.0        FFFE0000
/16     255.255.0.0        FFFF0000
/17     255.255.128.0      FFFF8000
/18     255.255.192.0      FFFFC000
/19     255.255.224.0      FFFFE000
/20     255.255.240.0      FFFFF000
/21     255.255.248.0      FFFFF800
/22     255.255.252.0      FFFFFC00
/23     255.255.254.0      FFFFFE00
/24     255.255.255.0      FFFFFF00
/25     255.255.255.128    FFFFFF80
/26     255.255.255.192    FFFFFFC0
/27     255.255.255.224    FFFFFFE0
/28     255.255.255.240    FFFFFFF0
/29     255.255.255.248    FFFFFFF8
/30     255.255.255.252    FFFFFFFC
/31     255.255.255.254    FFFFFFFE
/32     255.255.255.255    FFFFFFFF
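The chart above can be reproduced with a small shell function. This is a sketch that assumes a shell with 64-bit arithmetic (e.g. bash or ksh93); the function name is illustrative:

```shell
# Convert a CIDR prefix length (1-32) to its dotted-decimal netmask
# and hex form, as in the chart above.
cidr_to_mask() {
  p="$1"                                              # prefix length, 1-32
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))   # needs 64-bit shell math
  printf '%d.%d.%d.%d  %08X\n' \
    "$(( (mask >> 24) & 255 ))" "$(( (mask >> 16) & 255 ))" \
    "$(( (mask >> 8)  & 255 ))" "$((  mask        & 255 ))" "$mask"
}
```

For example, `cidr_to_mask 24` prints `255.255.255.0  FFFFFF00`.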


CIDR (last-octet netmask): network boundaries

/0 (.0):    0
/1 (.128):  0 128
/2 (.192):  0 64 128 192
/3 (.224):  0 32 64 96 128 160 192 224
/4 (.240):  0 16 32 48 64 80 96 112 128 144 160 176 192 208 224 240
/5 (.248):  0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152 160 168 176 184 192 200 208 216 224 232 240 248
/6 (.252):  0 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100 104 108 112 116 120 124 128 132 136 140 144 148 152 156 160 164 168 172 176 180 184 188 192 196 200 204 208 212 216 220 224 228 232 236 240 244 248 252
/7 (.254):  0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108 110 112 114 116 118 120 122 124 126 128 130 132 134 136 138 140 142 144 146 148 150 152 154 156 158 160 162 164 166 168 170 172 174 176 178 180 182 184 186 188 190 192 194 196 198 200 202 204 206 208 210 212 214 216 218 220 222 224 226 228 230 232 234 236 238 240 242 244 246 248 250 252 254
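The rows above follow a simple pattern: with b mask bits in an octet, the network boundaries step by 256/2^b. A minimal sketch (function name is illustrative):

```shell
# List the network boundaries within one octet for b mask bits (0-7),
# reproducing the rows above: the step size is 256 / 2^b.
octet_nets() {
  b="$1"
  step=$(( 256 >> b ))   # 256 / 2^b
  n=0
  out=""
  while [ "$n" -lt 256 ]; do
    out="$out $n"
    n=$(( n + step ))
  done
  echo "${out# }"        # trim the leading space
}
```

For example, `octet_nets 3` prints `0 32 64 96 128 160 192 224`, matching the /3 (.224) row.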





~Judi~

Nov 23, 2017

Solaris 10 x86 SVM Patching

Solaris 10 SVM Patching (x86)

Step 1:     Back up the necessary configuration information and save it
                          df -h
                          metastat -p
                          metadb
                          echo | format
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'
                  Root Disk - c0t0d0
                  Root Mirror Disk - c0t1d0
                  6 copies of metadb replicas - c0t0d0s7 and c0t1d0s7
                  Current Kernel - 147441-01
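The Step 1 captures can be scripted so the outputs survive the maintenance window. A minimal sketch; the directory path and file naming are illustrative, and commands that are unavailable are skipped rather than aborting the run:

```shell
# Capture pre-patching state into a dated directory.
capture_state() {
  dir="${1:-/var/tmp/prepatch.$(date +%Y%m%d)}"   # illustrative default path
  mkdir -p "$dir" || return 1
  n=0
  for cmd in 'df -h' 'metastat -p' 'metadb' 'echo | format' \
             "prtconf -v | sed -n '/bootpath/{;p;n;p;}'"
  do
    n=$((n+1))
    # Capture stdout and stderr; keep going even if a command is unavailable.
    sh -c "$cmd" > "$dir/step$n.out" 2>&1 || true
  done
  echo "$dir"
}
```

Run it as `capture_state` (or `capture_state /some/dir`); it prints the directory holding one output file per command.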

Step 2:      Detach the submirrors and clear them
                  /dev/md/dsk/d0 /
                  /dev/md/dsk/d1 swap
                  /dev/md/dsk/d3 /var
                  /dev/md/dsk/d4 /opt/BMC

                   JUDI-DEV-TEST01# metastat -p
                   d4 -m d14 d24 1
                   d14 1 1 c0t0d0s4
                   d24 1 1 c0t1d0s4
                   d3 -m d13 d23 1
                   d13 1 1 c0t0d0s3
                   d23 1 1 c0t1d0s3
                   d1 -m d11 d21 1
                   d11 1 1 c0t0d0s1
                   d21 1 1 c0t1d0s1
                   d0 -m d10 d20 1
                   d10 1 1 c0t0d0s0
                   d20 1 1 c0t1d0s0
                   JUDI-DEV-TEST01#
                          metastat -p
                          metadetach d0 d20
                          metadetach d1 d21
                          metadetach d3 d23
                          metadetach d4 d24

                          metastat -p

                          metaclear d20
                          metaclear d21
                          metaclear d23
                          metaclear d24

                          metastat -p

Step 3:      Remove replicas added on root mirror disk
                          metadb
                          metadb -d c0t1d0s7
                          metadb

Step 4:      Install GRUB on the root mirror disk to make sure the disk is bootable in case we need to back out the patching
                          installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

Step 5:      Mount the root mirror slice c0t1d0s0 on /mnt
                          mount /dev/dsk/c0t1d0s0 /mnt
                          df -h /mnt

Step 6:      Modify /mnt/etc/vfstab & /mnt/etc/system files
                          cat /mnt/etc/vfstab
                          vi /mnt/etc/vfstab
                          /dev/dsk/c0t1d0s1  -       -       swap    -       no      -
                          /dev/dsk/c0t1d0s0  /dev/rdsk/c0t1d0s0 /       ufs     1       no      -
                          /dev/dsk/c0t1d0s3  /dev/rdsk/c0t1d0s3 /var    ufs     1       no      -
                          /dev/dsk/c0t1d0s4  /dev/rdsk/c0t1d0s4 /opt/BMC        ufs     2       yes     -

                          tail /mnt/etc/system
                          vi /mnt/etc/system (Comment out the mirror-related information)
                          * Begin MDD root info (do not edit)
                          * rootdev:/pseudo/md@0:0,0,blk
                          * End MDD root info (do not edit)
                          * set md:mirrored_root_flag=1

Step 7:      Delete the existing boot sign on the root mirror disk and create a unique one
                          ls -l /mnt/boot/grub/bootsign/
                          -r--r--r--   1 root     root           0 Oct 17  2013 rootfs0

                          rm /mnt/boot/grub/bootsign/rootfs0
                          touch /mnt/boot/grub/bootsign/rootfs1
                          ls -l /mnt/boot/grub/bootsign/

Step 8:      Update root mirror disk menu.lst file
                          ls -lrt /mnt/boot/grub/menu.lst
                          cat /mnt/boot/grub/menu.lst

                  Edit the file: change rootfs0 to rootfs1 in both findroot lines, and change the title to Root Mirror Disk
                          vi /mnt/boot/grub/menu.lst
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Root Mirror Disk Oracle Solaris 10 8/11 s10x_u10wos_17b X86
                          findroot (rootfs1,0,a)
                          kernel /platform/i86pc/multiboot
                          module /platform/i86pc/boot_archive
                          #---------------------END BOOTADM--------------------
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Solaris failsafe
                          findroot (rootfs1,0,a)
                          kernel /boot/multiboot -s
                          module /boot/amd64/x86.miniroot-safe
                          #---------------------END BOOTADM--------------------

                          cat /mnt/boot/grub/menu.lst

Step 9:      Update boot environment variable on root mirror disk
                          echo | format
                          cat /mnt/boot/solaris/bootenv.rc
                          ls -ld /dev/dsk/c0t1d0s0
                          lrwxrwxrwx   1 root     root          62 Oct 17  2013 /dev/dsk/c0t1d0s0 -> ../../devices/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@1,0:a
                          vi /mnt/boot/solaris/bootenv.rc
                          setprop bootpath '/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@0,0:a' ----> Remove this line
                          setprop bootpath '/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@1,0:a' ----> Add the secondary disks path here

Step 10:      Update the boot disk's menu.lst file
                  This step allows us to skip configuring the BIOS to boot from the root mirror disk
                          cat /boot/grub/menu.lst
                  Edit the file and add an entry listing the secondary disk as a separate boot option on the GRUB boot screen.
Add the entry below to the bottom of the file - secondary disk - rootfs1
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Root Mirror Disk - Oracle Solaris 10 8/11 s10x_u10wos_17b X86
                          findroot (rootfs1,0,a)
                          kernel /platform/i86pc/multiboot
                          module /platform/i86pc/boot_archive
                          #---------------------END BOOTADM--------------------
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Solaris failsafe
                          findroot (rootfs1,0,a)
                          kernel /boot/multiboot -s
                          module /boot/amd64/x86.miniroot-safe
                          #---------------------END BOOTADM--------------------

Step 11:      Check the currently booted device
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'
                  The server is currently booted from the root disk. Now restart and boot from the root mirror disk
                          init 6
                  The server will display:
                  creating boot_archive for /mnt
                  updating /mnt/platform/i86pc/boot_archive

                  While rebooting, select the root mirror disk from the GRUB menu to check that the secondary (root mirror) disk is safe to boot.
                  Check which disk the server booted from and make sure it is the root mirror - now we are good to proceed with patching
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'

Step 12:      Reboot the server to boot from the boot disk and check the root disk status
                          init 6
                  Check which disk the server booted from and make sure it is the boot disk - the primary disk
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'

Step 13:      Bring the server into single user mode and install the patches
                          who -r
                          init s

Step 14:      Install the patches
                          ./installpatchset --s10patchset

Step 15:      After patch installation completes, reboot the server
                          init 6
                  Verify the new patch version and SMF status
                          uname -v
                          svcs -xv
                          df -h

Step 16:      Now create submirrors and replicas on root mirror disk
                          echo | format
                          metastat -p
                          metainit -f d20 1 1 c0t1d0s0
                          metainit -f d21 1 1 c0t1d0s1
                          metainit -f d23 1 1 c0t1d0s3
                          metainit -f d24 1 1 c0t1d0s4

Step 17:      Attach submirrors to the main mirror
                          metadb -afc3 c0t1d0s7
                          metadb
                          metastat -c

                          metattach d0 d20
                          metattach d1 d21
                          metattach d3 d23
                          metattach d4 d24

                          metastat -c
                  Wait until the resync completes

                          uname -X
                          metastat -c d0

                  Once the resync has completed, reboot the server
                          init 6
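The "wait until the resync completes" step above can be automated with a small polling loop. A sketch: the status command defaults to `metastat -c` but is parameterized so the loop can be exercised outside Solaris, and the 60-second interval is an arbitrary choice:

```shell
# Poll until Solaris Volume Manager reports no resync in progress.
wait_for_resync() {
  status_cmd="${1:-metastat -c}"   # command whose output is checked
  interval="${2:-60}"              # seconds between polls (arbitrary)
  while sh -c "$status_cmd" | grep -qi 'resync'; do
    sleep "$interval"
  done
  echo "resync complete"
}
```

On the server you would simply run `wait_for_resync` after the metattach commands; it returns once `metastat -c` no longer mentions a resync.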




~Judi~




Nov 21, 2017

Solaris 11 Alternate Boot Environments


Introduction to Managing Boot Environments :

  • A boot environment is a bootable Oracle Solaris environment consisting of a root dataset and, optionally, other datasets mounted underneath it. Exactly one boot environment can be active at a time.
  • A dataset is a generic name for ZFS entities such as clones, file systems, or snapshots. In the context of boot environment administration, the dataset more specifically refers to the file system specifications for a particular boot environment or snapshot.
  • A snapshot is a read-only image of a dataset or boot environment at a given point in time. A snapshot is not bootable.
  • A clone of a boot environment is created by copying another boot environment. A clone is bootable.
  • Shared datasets are user-defined directories, such as /export, that contain the same mount point in both the active and inactive boot environments. Shared datasets are located outside the root dataset area of each boot environment.

About the beadm Utility : 

The beadm utility enables you to perform the following tasks:
  • Create a new boot environment based on the active boot environment
  • Create a new boot environment based on an inactive boot environment
  • Create a snapshot of an existing boot environment
  • Create a new boot environment based on an existing snapshot
  • Create a new boot environment, and copy it to a different zpool
  • Create a new boot environment and add a custom title to the x86 GRUB menu or the SPARC boot menu
  • Activate an existing, inactive boot environment
  • Mount a boot environment
  • Unmount a boot environment
  • Destroy a boot environment
  • Destroy a snapshot of a boot environment
  • Rename an existing, inactive boot environment
  • Display information about your boot environment snapshots and datasets
The beadm utility has the following features:
  • Aggregates all datasets in a boot environment and performs actions on the entire boot environment at once. You no longer need to perform ZFS commands to modify each dataset individually.
  • Manages the dataset structures within boot environments. For example, when the beadm utility clones a boot environment that has shared datasets, the utility automatically recognizes and manages those shared datasets for the new boot environment.
  • Enables you to perform administrative tasks on your boot environments in a global zone or in a non-global zone.
  • Automatically manages and updates the GRUB menu for x86 systems or the boot menu for SPARC systems. For example, when you use the beadm utility to create a new boot environment, that environment is automatically added to the GRUB menu or boot menu.

How to Create a Boot Environment
              beadm create BeName
              beadm create solaris-1

Activate the boot environment.
              beadm activate BeName

Listing Existing Boot Environments and Snapshots
              beadm list
                            -a – Lists all available information about the boot environment. This information includes subordinate datasets and snapshots.
                            -d – Lists information about all subordinate datasets that belong to the boot environment.
                            -s – Lists information about the snapshots of the boot environment.
                            -H – Prevents listing header information. Each field in the output is separated by a semicolon.

Viewing Boot Environment Specifications
              beadm list -a solaris-1

The values for the Active column are as follows:
R – Active on reboot.
N – Active now.
NR – Active now and active on reboot.
“-” – Inactive.
“!” – Unbootable boot environments in a non-global zone are represented by an exclamation point.


Viewing Snapshot Specifications
              beadm list -s solaris-1

Changing the Default Boot Environment
              beadm activate BeName
              beadm activate solaris-1

Destroying a Boot Environment
              beadm destroy solaris-1

If a Solaris server does not boot after patching, boot the server from an alternate boot environment.
SPARC: How to Boot From an Alternate Operating System or Boot Environment
Bring the system to the ok PROM prompt.
Display a list of available boot environments by using the boot command with the -L option.
              boot -L

To boot a specified entry, type the number of the entry and press Return:
Select environment to boot: [1 - 2]:
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/boot-environment

              boot -Z rpool/ROOT/boot-environment
              boot -Z rpool/ROOT/zfs2BE
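Putting the beadm commands above together, a minimal pre-patch fallback workflow might look like the sketch below. The BE name and the DRY_RUN convention are illustrative, not part of beadm itself:

```shell
# run() executes its arguments, or only prints them when DRY_RUN=1 --
# a common review-before-execute safety pattern for destructive steps.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# Create a fallback BE before patching; if the patched system misbehaves,
# activate the fallback and reboot into it.
be_fallback() {
  be="${1:-pre-patch}"        # illustrative boot environment name
  run beadm create "$be"      # clone of the currently active BE
  run beadm list              # confirm the new BE exists
  # ...patch the running system, reboot, and verify...
  # Fall back only if the patched BE fails:
  run beadm activate "$be"
  run init 6
}
```

Running `DRY_RUN=1 be_fallback solaris-backup` prints the planned commands for review before anything is executed.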





~Judi~

