Dec 18, 2017

OVM 3.4.4 Reinstallation

- During the upgrade process, the ovmm service was deleted, but the MySQL database tables still existed and were not removed. A network issue then caused the upgrade to fail.
- Uninstall Oracle VM Manager 3.4.1 by mounting the OVM 3.4.1 ISO and running the uninstaller.
- Reinstall Oracle VM Manager 3.4.4 from the OVM 3.4.4 ISO, passing the original UUID.
- Regenerate the database by following "Oracle VM: How To Regenerate The OVM 3.3.x/3.4.x DB (Doc ID 2038168.1)".

Mount the ISO V137364-01.iso on /mnt.
Uninstall Oracle VM Manager 3.4.1 by mounting the OVM 3.4.1 ISO and running the uninstaller.
[root@judi-dev-01 mnt]# ls -ltr
total 157510
-rw-r--r--. 1 root root   4291939 May  7  2016 OvmSDK_3.4.1.1369.zip
-rw-r--r--. 1 root root       230 May  7  2016 oracle-validated.params
-r-xr-x---. 1 root root     11556 May  7  2016 createOracle.sh
-rw-r--r--. 1 root root       372 May  7  2016 sample.yml
-r-xr-x---. 1 root root      1919 May  7  2016 runInstaller.sh
-rw-r--r--. 1 root root      6960 May  7  2016 LICENSE
-rw-r--r--. 1 root root      6960 May  7  2016 EULA
drwxr-xr-x. 7 root root      8192 May  7  2016 components
-r--r--r--. 1 root root      2031 May  7  2016 TRANS.TBL
-r-xr-x---. 1 root root 156958046 May  7  2016 ovmm-installer.bsx
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]# chmod 777 createOracle.sh
chmod: changing permissions of `createOracle.sh': Read-only file system
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]# ./createOracle.sh
Adding group 'oinstall' with gid '55033' ...
groupadd: group 'oinstall' already exists
Adding group 'dba'
groupadd: group 'dba' already exists
Adding user 'oracle' with user id '111533', initial login group 'dba', supplementary group 'oinstall' and  home directory '/home/oracle' ...
User 'oracle' already exists ...
uid=500(oracle) gid=101(dba) groups=101(dba),201(dba),202(oper),506(asmdba)
Creating user 'oracle' succeeded ...
For security reasons, no default password was set for user 'oracle'. If you wish to login as the 'oracle' user, you will need to set a password for this account.
Verifying user 'oracle' OS prerequisites for Oracle VM Manager ...
oracle  soft    nofile          8192
oracle  hard    nofile          65536
oracle  soft    nproc           2048
oracle  hard    nproc           16384
oracle  soft    stack           10240
oracle  hard    stack           32768
oracle  soft    core            unlimited
oracle  hard    core            unlimited
Setting  user 'oracle' OS limits for Oracle VM Manager ...
Altered file /etc/security/limits.conf
Original file backed up at /etc/security/limits.conf.orabackup
Verifying & setting of user limits succeeded ...
Changing '/u01' permission to 755 ...
Changing '/u01/app' permission to 755 ...
Changing '/u01/app/oracle' permission to 755 ...
Modifying iptables for OVM
Adding rules to enable access to:
     7002  : Oracle VM Manager https
       123 : NTP
     10000 : Oracle VM Manager CLI Tool
service iptables status: stop
iptables: Applying firewall rules:                         [  OK  ]
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
Rules added.
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]# ./runInstaller.sh

Oracle VM Manager Release 3.4.1 Installer

Oracle VM Manager Installer log file:
/var/log/ovmm/ovm-manager-3-install-2017-12-08-183014.log

Please select an installation type:
   1: Install
   2: Upgrade
   3: Uninstall
   4: Help

   Select Number (1-4): 3

Begin uninstalling Oracle VM Manager:
   1: Continue
   2: Abort

   Select Number (1-2): 1

Uninstall Oracle VM Manager

DB component : MySQL RPM package
MySQL RPM package installed by OVMM was found...
Removing MySQL RPM package installation ...

Product component : ovmcore-console RPM package
ovmcore-console RPM Package is installed ...
Removing ovmcore-console RPM Package installation ...

Product component : Java in '/u01/app/oracle/java/'
Java is installed ...

Removing Java installation ...

Product component : Oracle VM Manager in '/u01/app/oracle/ovm-manager-3/'
Oracle VM Manager is not installed

Product component : Oracle WebLogic Server in '/u01/app/oracle/Middleware/'
Oracle WebLogic Server is installed

Removing Oracle WebLogic Server installation ...

Uninstall completed ...
[root@judi-dev-01 mnt]#

Reinstall Oracle VM Manager 3.4.4, passing the original UUID, using the OVM 3.4.4 ISO.
Once the uninstallation has completed, mount the 3.4.4 ISO (ovmm-3.4.4-installer-OracleLinux-b1709.iso) on /mnt.
[root@judi-dev-01 mnt]# ls -ltr
total 157675
-rw-r--r--. 1 root root   4292233 Aug 16 07:10 OvmSDK_3.4.4.1709.zip
-rw-r--r--. 1 root root       230 Aug 16 07:13 oracle-validated.params
-r-xr-x---. 1 root root     11556 Aug 16 07:13 createOracle.sh
-rw-r--r--. 1 root root       372 Aug 16 07:14 sample.yml
-r-xr-x---. 1 root root      1919 Aug 16 07:14 runInstaller.sh
drwxr-xr-x. 7 root root      8192 Aug 16 07:14 components
-r--r--r--. 1 root root      1596 Aug 16 07:14 TRANS.TBL
-r-xr-x---. 1 root root 157140866 Aug 16 07:14 ovmm-installer.bsx
[root@judi-dev-01 mnt]# 
[root@judi-dev-01 mnt]# 
[root@judi-dev-01 mnt]# 
[root@judi-dev-01 mnt]# 
[root@judi-dev-01 mnt]# ./createOracle.sh
Adding group 'oinstall' with gid '55033' ...
groupadd: group 'oinstall' already exists
Adding group 'dba'
groupadd: group 'dba' already exists
Adding user 'oracle' with user id '111533', initial login group 'dba', supplementary group 'oinstall' and  home directory '/home/oracle' ...
User 'oracle' already exists ...
uid=500(oracle) gid=101(dba) groups=101(dba),201(dba),202(oper),506(asmdba)
Creating user 'oracle' succeeded ...
For security reasons, no default password was set for user 'oracle'. If you wish to login as the 'oracle' user, you will need to set a password for this account.
Verifying user 'oracle' OS prerequisites for Oracle VM Manager ...
oracle  soft    nofile          8192
oracle  hard    nofile          65536
oracle  soft    nproc           2048
oracle  hard    nproc           16384
oracle  soft    stack           10240
oracle  hard    stack           32768
oracle  soft    core            unlimited
oracle  hard    core            unlimited
Setting  user 'oracle' OS limits for Oracle VM Manager ...
Altered file /etc/security/limits.conf
Original file backed up at /etc/security/limits.conf.orabackup
Verifying & setting of user limits succeeded ...
Changing '/u01' permission to 755 ...
Changing '/u01/app' permission to 755 ...
Changing '/u01/app/oracle' permission to 755 ...
Modifying iptables for OVM
Adding rules to enable access to:
     7002  : Oracle VM Manager https
       123 : NTP
     10000 : Oracle VM Manager CLI Tool
service iptables status: stop
iptables: Applying firewall rules:                         [  OK  ]
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
Rules added.
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#

Copy the UUID from the file /etc/sysconfig/ovmm
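Instead of copying the UUID by hand, it can be captured in a variable. A minimal sketch, assuming the old /etc/sysconfig/ovmm from the previous install is still present:

```shell
# Pull the UUID out of the old OVM Manager sysconfig file so it can be
# passed to the installer without retyping it.
OVMM_CFG=/etc/sysconfig/ovmm
OVMM_UUID=$(grep '^UUID=' "$OVMM_CFG" | cut -d= -f2)
echo "Reinstalling with UUID: $OVMM_UUID"
# ./runInstaller.sh --uuid="$OVMM_UUID"
```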

[root@judi-dev-01 mnt]# ./runInstaller.sh --uuid=0004db0000010000en93ae3f7825f78v

Oracle VM Manager Release 3.4.4 Installer

Oracle VM Manager Installer log file:
/var/log/ovmm/ovm-manager-3-install-2017-12-08-183346.log

Please select an installation type:
   1: Install
   2: Upgrade
   3: Uninstall
   4: Help

   Select Number (1-4): 1

Verifying installation prerequisites ...

Starting production with local database installation ...

One password is used for all users created and used during the installation.
Enter a password for all logins used during the installation:
Enter a password for all logins used during the installation (confirm):

Please enter your fully qualified domain name, e.g. ovs123.us.oracle.com, (or IP address) of your management server for SSL certification generation 192.168.0.10 [judi-dev-01]:  192.168.0.10

Verifying configuration ...

Start installing Oracle VM Manager:
   1: Continue
   2: Abort

   Select Number (1-2): 1

Step 1 of 7 : Database Software ...
Installing Database Software...
Retrieving MySQL Database 5.6 ...
Unzipping MySQL RPM File ...
Installing MySQL 5.6 RPM package ...
Configuring MySQL Database 5.6 ...
Installing MySQL backup RPM package ...

Step 2 of 7 : Java ...
Installing Java ...

Step 3 of 7 : WebLogic and ADF ...
Retrieving Oracle WebLogic Server 12c and ADF ...
Installing Oracle WebLogic Server 12c and ADF ...
Applying patches to Weblogic ...
Applying patch to ADF ...

Step 4 of 7 : Oracle VM ...
Installing Oracle VM Manager Core ...
Retrieving Oracle VM Manager Application ...
Extracting Oracle VM Manager Application ...

Retrieving Oracle VM Manager Upgrade tool ...
Extracting Oracle VM Manager Upgrade tool ...
Installing Oracle VM Manager Upgrade tool ...

Retrieving Oracle VM Manager CLI tool ...
Extracting Oracle VM Manager CLI tool...
Installing Oracle VM Manager CLI tool ...
Installing Oracle VM Manager WLST Scripts ...

Step 5 of 7 : Domain creation ...
Creating domain ...

Step 6 of 7 : Oracle VM Tools ...

Retrieving Oracle VM Manager Shell & API ...
Extracting Oracle VM Manager Shell & API ...
Installing Oracle VM Manager Shell & API ...

Retrieving Oracle VM Manager Wsh tool ...
Extracting Oracle VM Manager Wsh tool ...
Installing Oracle VM Manager Wsh tool ...

Retrieving Oracle VM Manager Tools ...
Extracting Oracle VM Manager Tools ...
Installing Oracle VM Manager Tools ...

Retrieving ovmcore-console ...
Installing ovmcore-console RPM package ...
Copying Oracle VM Manager shell to '/usr/bin/ovm_shell.sh' ...
Installing ovm_admin.sh in '/u01/app/oracle/ovm-manager-3/bin' ...
Installing ovm_upgrade.sh in '/u01/app/oracle/ovm-manager-3/bin' ...

Step 7 of 7 : Start OVM Manager ...
Enabling Oracle VM Manager service ...
Shutting down Oracle VM Manager instance ...
Starting Oracle VM Manager instance ...

Please wait while WebLogic configures the applications...
Trying to connect to core via ovmwsh (attempt 1 of 20) ...
Trying to connect to core via ovmwsh (attempt 2 of 20) ...
Trying to connect to core via ovm_shell (attempt 1 of 5)...
Oracle VM Manager installed.

Installation Summary
--------------------
Database configuration:
  Database type               : MySQL
  Database host name          : localhost
  Database name               : ovs
  Database listener port      : 49500
  Database user               : ovs

Weblogic Server configuration:
  Administration username     : weblogic

Oracle VM Manager configuration:
  Username                    : admin
  Core management port        : 54321
  UUID                        : 0004db0000010000en93ae3f7825f78v


Passwords:
There are no default passwords for any users. The passwords to use for Oracle VM Manager, Database, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all passwords are the same.

Oracle VM Manager UI:
  https://192.168.0.10:7002/ovm/console
Log in with the user 'admin', and the password you set during the installation.

For more information about Oracle Virtualization, please visit:
  http://www.oracle.com/virtualization/

Oracle VM Manager installation complete.

Please remove configuration file /tmp/ovm_configa1GOEu.
[root@judi-dev-01 mnt]# service ovmm status
Oracle VM Manager is running...
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]# cat /etc/sysconfig/ovmm
RUN_OVMM=YES
UUID=0004db0000010000en93ae3f7825f78v
DBBACKUP_CMD=/opt/mysql/meb-3.12/bin/mysqlbackup
JVM_MAX_PERM=512m
JVM_MEMORY_MAX=4096m
DBBACKUP=/u01/app/oracle/mysql/dbbackup
[root@judi-dev-01 mnt]#
[root@judi-dev-01 mnt]#


Regenerate the database by following "Oracle VM: How To Regenerate The OVM 3.3.x/3.4.x DB (Doc ID 2038168.1)".
Regenerate the OVM 3.3.x/3.4.x DB

The Oracle VM Manager services need to be shut down before deleting the OVM Manager database.
#service ovmm stop

Using ovm_upgrade.sh from /u01/app/oracle/ovm-manager-3/ovm_upgrade/bin and the values from the /u01/app/oracle/ovm-manager-3/.config file, delete the bad database.
Obtain the values to substitute from /u01/app/oracle/ovm-manager-3/.config on a management node:
# cat /u01/app/oracle/ovm-manager-3/.config
DBTYPE=MySQL
DBHOST=localhost
SID=ovs             <-- --dbsid
LSNR=1521           <-- --dbport
OVSSCHEMA=ovs       <-- --dbuser
APEX=8080
WLSADMIN=weblogic
OVSADMIN=admin
COREPORT=54321
UUID=0004fb00000100009bfa6a96c1303e32
BUILDID=3.2.11.775

Sample command based on the above sample .config file:
sh /u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovm_upgrade.sh --deletedb --dbuser=ovs --dbpass=Welcome1 --dbhost=localhost --dbport=1521 --dbsid=ovs 
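To avoid typing a port or schema that does not match this installation, the flags can be derived from the .config file itself. A hedged sketch: OVM_DB_PASS is a placeholder you must set first, and the paths are the ones shown above.

```shell
# Build the --deletedb command from the installed .config values so the
# flags always match this installation. OVM_DB_PASS is a placeholder.
CFG=/u01/app/oracle/ovm-manager-3/.config
eval "$(grep -E '^(DBHOST|SID|LSNR|OVSSCHEMA)=' "$CFG")"
sh /u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovm_upgrade.sh \
  --deletedb --dbuser="$OVSSCHEMA" --dbpass="$OVM_DB_PASS" \
  --dbhost="$DBHOST" --dbport="$LSNR" --dbsid="$SID"
```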

Generate the replacement certificate
#export MW_HOME=/u01/app/oracle/Middleware
#/u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovmkeytool.sh setupWebLogic

Start the OVM services and generate new certificates.
#service ovmm start
#sh /u01/app/oracle/ovm-manager-3/bin/configure_client_cert_login.sh

Stop and then start the OVM service to apply the new certificate.
#service ovmm stop
#service ovmm start


Repopulate the database
1) Log in; the UI should be EMPTY of any data. The OVM servers and VMs are still up and running, and the pool filesystem and repositories still exist.
2) Repopulate the database by rediscovering the environment through the Oracle VM Manager UI.
The OVM database is rebuilt from the existing servers in the pool, so the relationships will already be established. The servers and VMs are up and running, and the pool filesystem and storage repositories are on the servers.
a. Discover Server(s) -> the pool/OVM server(s) will be visible again.
        If your storage is network-based, verify that your servers are listed under the Storage tab.
        If not, use "Discover Server" and enter the name and IP of the storage array.
b. Refresh each repository (right-click each storage entry and choose Refresh).
c. Rediscover the server(s) -> the VMs will reappear under the OVM server(s). Non-running VMs can be found under "Unassigned Virtual Machines".
d. Run the VNIC Manager to recreate a range of MAC addresses, because only the MAC addresses in use will have been rediscovered.


Restore the simple names
The database will be populated, but you will be missing items such as friendly disk names and display names for vdisks, VNICs, etc. (metadata).
Refer to the KM article Restore OVM Manager "Simple Names" After a Rebuild/Reinstall (Doc ID 2129616.1) to restore the friendly names.
After the friendly names have been restored:
Log out and close the browser.
Open a browser and log in.
The database should be up and working with the friendly names (metadata).



~Judi~

Dec 11, 2017

Restore OVM Manager Simple Names After a Rebuild/Reinstall

APPLIES TO:
Oracle VM - Version 3.3.1 and later
Linux x86-64

GOAL:
There are occasions when a re-install of OVM Manager is required, together with restoration of a previously backed-up database. However, the manager database restore does not recover the "simple names" of objects - e.g. virtual machine and disk names, even though the backup saves these in an xml file.

The automated backup creates the required xml file in the /u01/app/oracle/mysql/dbbackup directory - filename format /u01/app/oracle/mysql/dbbackup/OVMModelExport-yyyymmdd_nnnnnn.xml

Ensure a copy of the backed-up file will be available to perform the restore. The attached ovm_shell scripts will restore these simple names from the XML file created by an automated or manual backup of the manager database.

The xml file will be generated in /u01/app/oracle/mysql/dbbackup (3.3, 3.4)
              cd /u01/app/oracle/mysql/dbbackup
              OVMModelExport-yyyymmdd_nnnnnn.xml
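Since the backup directory accumulates exports over time, one way to pick the newest file is by modification time. A sketch, assuming at least one export exists in the directory named above:

```shell
# Select the most recent model export from the backup directory.
BACKUP_DIR=/u01/app/oracle/mysql/dbbackup
LATEST_XML=$(ls -t "$BACKUP_DIR"/OVMModelExport-*.xml | head -1)
echo "$LATEST_XML"
```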

To restore the simple names from the backup, download the appropriate script attached to this document and run it.
From Google Drive - Download the restoreSimpleName script for 3.3 and 3.4 (21.22 KB)

              cd /u01/app/oracle/ovm-manager-3/ovm_shell
              ovm_shell.sh -u <USERNAME> -p <PASSWORD> -i /tmp/restoreSimpleName-3.3.py /u01/app/oracle/mysql/dbbackup/OVMModelExport-yyyymmdd_nnnnnn.xml
              JUDI-DEV-01 # : ./ovm_shell.sh -u admin -p Pr0jectFun -i /tmp/restoreSimpleName-3.3-v1.2.py /u01/app/oracle/mysql/dbbackup/OVMModelExport-20171105_896756.bkp..xml

It may be necessary to omit the ".py" suffix when calling the Python script from ovm_shell, e.g.
              OVM > ovm_shell.sh -u <USERNAME> -p <PASSWORD> -i /tmp/restoreSimpleName-3.3 /u01/app/oracle/mysql/dbbackup/OVMModelExport-yyyymmdd_nnnnnn.xml


Oracle Doc ID 2129616.1


~Judi~

Nov 28, 2017

Fine tune zfs for Solaris application and database servers

Memory Management Between ZFS and Applications in Oracle Solaris 11.x (Doc ID 1663862.1)

APPLIES TO: Solaris Operating System - Version 11.1 and later

The user_reserve_hint_pct Parameter
Solaris 11.2, and Solaris 11.1 SRU 20.5 or newer, include a new user_reserve_hint_pct tunable parameter that provides a hint to the system about application memory usage. This hint is used to limit growth of the ZFS ARC cache so that more memory stays available for applications.
If user_reserve_hint_pct is tuned appropriately, memory that is returned to the freemem pool is less likely to be reused by the kernel. This, in turn, allows administrators to keep a reserve of free memory for future application demands by restricting growth of the ZFS ARC cache. While the motivation for using this parameter might include faster application startup and dynamic reconfiguration of memory boards, its primary use is to ensure that large memory pages stay available for demanding applications, such as databases.

It is very important to calculate a suitable value for user_reserve_hint_pct to avoid the scenario described below; see the 'How to calculate a suitable value' heading.

A scenario can occur where user_reserve_hint_pct is set but the application has consumed more memory than this value. This is permitted because user_reserve_hint_pct is only a 'hint' for user-land applications, not a hard limit. If applications use more than the predefined value, this will usually lead to system performance and hang issues.

Description:
Informs the system about how much memory is reserved for application use, and therefore limits how much memory can be used by the ZFS ARC cache as the cache increases over time.
By means of this parameter, administrators can maintain a large reserve of available free memory for future application demands. The user_reserve_hint_pct parameter is intended to be used in place of the zfs_arc_max parameter to restrict the growth of the ZFS ARC cache.

Data Type 
    Unsigned Integer (64-bit)

Default
    0 (unset)

Range
    0 - 99% of physical memory

The minimum size of the ZFS ARC is 64 MB on systems with physmem <= 16 GB of RAM, or 0.5% of physmem for systems with more than 16 GB of RAM.

Units 
Percent. Values should be positive whole integers; negative numbers or floating-point values are not permitted.

Dynamic?
yes

Validation 
Yes, the range is validated.

When to Change :
For upward adjustments, increase the value if the initial value is determined to be insufficient over time for application requirements, or if application demand increases on the system. Perform this adjustment only within a scheduled system maintenance window. After you have changed the value, reboot the system.

For downward adjustments, decrease the value if allowed by application requirements. Make sure to decrease the value only by small amounts, no greater than 5% at a time.

How to calculate a suitable value :
Calculations can be performed one of two ways.
1) If the size of the ARC should be capped, i.e. if we are converting a previous zfs_arc_max value to user_reserve_hint_pct, use:
user_reserve_hint_pct = USER_RESERVE_HINT_MAX - (((Kernel + Defdump prealloc + ZFS Metadata + desired zfs_arc_max) / Total (physmem))*100)
USER_RESERVE_HINT_MAX = 99.  For this example zfs_arc_max = 1GB.  All values are specified in Megabytes.
99-(((3276 + 925 + 109 + 1024)/16384)*100) = 66.44  (66 rounded down)

2) If the amount of memory needed by applications is known, calculate it this way:
user_reserve_hint_pct = Application Demand
e.g. the following assigns 8 GB to apps/DB, leaving the remainder for the ARC and kernel:
user_reserve_hint_pct = ((8192/16384)*100) = 50
Note: Memory must remain for the Kernel, Deferred Dump, etc to avoid system performance issues.
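Both calculations are easy to script. A minimal sketch using the sample figures from the worked examples above (the component sizes are the note's example values, not measured ones):

```shell
# Method 1: cap the ARC at 1 GB on a 16 GB system (all values in MB,
# taken from the worked example above).
awk 'BEGIN {
  kernel = 3276; defdump = 925; zfs_meta = 109; arc_max = 1024
  physmem = 16384
  pct = 99 - ((kernel + defdump + zfs_meta + arc_max) / physmem * 100)
  printf "method1: %d\n", pct   # %d truncates, i.e. rounds down
}'

# Method 2: reserve 8 GB of a 16 GB system for applications.
awk 'BEGIN { printf "method2: %d\n", (8192 / 16384) * 100 }'
```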

If the value has not been previously set, the default is zero, so the -f flag must be used for the initial setting on a live system.

                         Download set_user_reserve.sh

# ./set_user_reserve.sh -p 60
./set_user_reserve.sh: 60 greater that 0; use -f to force upward adjustment

The following can take several minutes to complete depending on the current size of the ARC.

# ./set_user_reserve.sh -fp 60
Adjusting user_reserve_hint_pct from 0 to 60
Monday, March 30, 2015 04:59:47 PM BST : waiting for current value : 45 to grow to target : 60

Adjustment of user_reserve_hint_pct to 60 successful.
Make the setting persistent across reboots by adding it to /etc/system:

* Tuning based on MOS note 1663861.1, script version 1.0
* added Monday, March 30, 2015 05:09:53 PM BST by system administrator : <me>
set user_reserve_hint_pct=60


# vi /etc/system
(Add the 4 lines.  Save and quit vi)

# tail /etc/system
*
*       To set a variable named 'debug' in the module named 'test_module'
*
*               set test_module:debug = 0x13


* Tuning based on MOS note 1663861.1, script version 1.0
* added Monday, March 30, 2015 05:09:53 PM BST by system administrator : <me>
set user_reserve_hint_pct=60



~Judi~

CIDR Netmask Chart - Subnet Mask Information

cidr   netmask          hex
/1     128.0.0.0        80000000
/2     192.0.0.0        C0000000
/3     224.0.0.0        E0000000
/4     240.0.0.0        F0000000
/5     248.0.0.0        F8000000
/6     252.0.0.0        FC000000
/7     254.0.0.0        FE000000
/8     255.0.0.0        FF000000
/9     255.128.0.0      FF800000
/10    255.192.0.0      FFC00000
/11    255.224.0.0      FFE00000
/12    255.240.0.0      FFF00000
/13    255.248.0.0      FFF80000
/14    255.252.0.0      FFFC0000
/15    255.254.0.0      FFFE0000
/16    255.255.0.0      FFFF0000
/17    255.255.128.0    FFFF8000
/18    255.255.192.0    FFFFC000
/19    255.255.224.0    FFFFE000
/20    255.255.240.0    FFFFF000
/21    255.255.248.0    FFFFF800
/22    255.255.252.0    FFFFFC00
/23    255.255.254.0    FFFFFE00
/24    255.255.255.0    FFFFFF00
/25    255.255.255.128  FFFFFF80
/26    255.255.255.192  FFFFFFC0
/27    255.255.255.224  FFFFFFE0
/28    255.255.255.240  FFFFFFF0
/29    255.255.255.248  FFFFFFF8
/30    255.255.255.252  FFFFFFFC
/31    255.255.255.254  FFFFFFFE
/32    255.255.255.255  FFFFFFFF
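The chart rows can be reproduced for any prefix length with a little shell arithmetic. A sketch, assuming a shell with 64-bit arithmetic such as bash:

```shell
# Convert a CIDR prefix length to its dotted-quad and hex netmask forms.
prefix=22
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
printf '/%d  %d.%d.%d.%d  %08X\n' "$prefix" \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8) & 255 ))  $(( mask & 255 )) "$mask"
```

For prefix=22 this prints `/22  255.255.252.0  FFFFFC00`, matching the chart row.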


cidr  mask  networks
/0    .0    0
/1    .128  0 128
/2    .192  0 64 128 192
/3    .224  0 32 64 96 128 160 192 224
/4    .240  0 16 32 48 64 80 96 112 128 144 160 176 192 208 224 240
/5    .248  0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152 160 168 176 184 192 200 208 216 224 232 240 248
/6    .252  0 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100 104 108 112 116 120 124 128 132 136 140 144 148 152 156 160 164 168 172 176 180 184 188 192 196 200 204 208 212 216 220 224 228 232 236 240 244 248 252
/7    .254  0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108 110 112 114 116 118 120 122 124 126 128 130 132 134 136 138 140 142 144 146 148 150 152 154 156 158 160 162 164 166 168 170 172 174 176 178 180 182 184 186 188 190 192 194 196 198 200 202 204 206 208 210 212 214 216 218 220 222 224 226 228 230 232 234 236 238 240 242 244 246 248 250 252 254
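The network boundaries in each row are just multiples of 256 >> bits, where bits is the number of subnet bits falling in that octet. A sketch:

```shell
# List the network boundaries within one octet for a given number of
# subnet bits (0-8). e.g. 3 subnet bits in the last octet => step of 32.
bits=3
step=$(( 256 >> bits ))
seq 0 "$step" 255 | tr '\n' ' '; echo
```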





~Judi~

Nov 23, 2017

Solaris 10 x86 SVM Patching


Step 1:     Back up the necessary configuration details and save them
                          df -h
                          metastat -p
                          metadb
                          echo | format
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'
                  Root Disk - c0t0d0
                  Root Mirror Disk - c0t1d0
                  6 copies of metadb replicas - c0t0d0s7 and c0t1d0s7
                  Current Kernel - 147441-01

Step 2:      Detach the submirrors and clear them
                  /dev/md/dsk/d0 /
                  /dev/md/dsk/d1 swap
                  /dev/md/dsk/d3 /var
                  /dev/md/dsk/d4 /opt/BMC

                   JUDI-DEV-TEST01# metastat -p
                   d4 -m d14 d24 1
                   d14 1 1 c0t0d0s4
                   d24 1 1 c0t1d0s4
                   d3 -m d13 d23 1
                   d13 1 1 c0t0d0s3
                   d23 1 1 c0t1d0s3
                   d1 -m d11 d21 1
                   d11 1 1 c0t0d0s1
                   d21 1 1 c0t1d0s1
                   d0 -m d10 d20 1
                   d10 1 1 c0t0d0s0
                   d20 1 1 c0t1d0s0
                   JUDI-DEV-TEST01#
                          metastat -p
                          metadetach d0 d20
                          metadetach d1 d21
                          metadetach d3 d23
                          metadetach d4 d24

                          metastat -p

                          metaclear d20
                          metaclear d21
                          metaclear d23
                          metaclear d24

                          metastat -p

Step 3:      Remove replicas added on root mirror disk
                          metadb
                          metadb -d c0t1d0s7
                          metadb

Step 4:      Install GRUB on the root mirror disk to make sure the disk is bootable in case we want to back out the patching
                          installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

Step 5:      Mount the root mirror disk c0t1d0s0 on /mnt
                          mount /dev/dsk/c0t1d0s0 /mnt
                          df -h /mnt

Step 6:      Modify /mnt/etc/vfstab & /mnt/etc/system files
                          cat /mnt/etc/vfstab
                          vi /mnt/etc/vfstab
                          /dev/dsk/c0t1d0s1  -       -       swap    -       no      -
                          /dev/dsk/c0t1d0s0  /dev/rdsk/c0t1d0s0 /       ufs     1       no      -
                          /dev/dsk/c0t1d0s3  /dev/rdsk/c0t1d0s3 /var    ufs     1       no      -
                          /dev/dsk/c0t1d0s4  /dev/rdsk/c0t1d0s4 /opt/BMC        ufs     2       yes     -

                          tail /mnt/etc/system
                          vi /mnt/etc/system (comment out the mirror-related information)
                          * Begin MDD root info (do not edit)
                          * rootdev:/pseudo/md@0:0,0,blk
                          * End MDD root info (do not edit)
                          * set md:mirrored_root_flag=1

Step 7:      Delete existing boot sign on root mirror disk and create unique one
                          ls -l /mnt/boot/grub/bootsign/
                          -r--r--r--   1 root     root           0 Oct 17  2013 rootfs0

                          rm /mnt/boot/grub/bootsign/rootfs0
                          touch /mnt/boot/grub/bootsign/rootfs1
                          ls -l /mnt/boot/grub/bootsign/

Step 8:      Update root mirror disk menu.lst file
                          ls -lrt /mnt/boot/grub/menu.lst
                          cat /mnt/boot/grub/menu.lst

                  Edit the file, change rootfs0 to rootfs1 on both findroot lines, and change the title to "Root Mirror Disk"
                          vi /mnt/boot/grub/menu.lst
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Root Mirror Disk Oracle Solaris 10 8/11 s10x_u10wos_17b X86
                          findroot (rootfs1,0,a)
                          kernel /platform/i86pc/multiboot
                          module /platform/i86pc/boot_archive
                          #---------------------END BOOTADM--------------------
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Solaris failsafe
                          findroot (rootfs1,0,a)
                          kernel /boot/multiboot -s
                          module /boot/amd64/x86.miniroot-safe
                          #---------------------END BOOTADM--------------------

                          cat /mnt/boot/grub/menu.lst

Step 9:      Update boot environment variable on root mirror disk
                          echo | format
                          cat /mnt/boot/solaris/bootenv.rc
                          ls -ld /dev/dsk/c0t1d0s0
                          lrwxrwxrwx   1 root     root          62 Oct 17  2013 /dev/dsk/c0t1d0s0 -> ../../devices/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@1,0:a
                          vi /mnt/boot/solaris/bootenv.rc
                          setprop bootpath '/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@0,0:a' ----> Remove this line
                          setprop bootpath '/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@1,0:a' ----> Add the secondary disks path here
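The same edit can be made non-interactively with sed. A sketch; the device paths are this host's example values, so confirm yours first with `ls -l /dev/dsk/c0t1d0s0`:

```shell
# Swap the bootpath from the primary disk (sd@0,0:a) to the mirror
# (sd@1,0:a), keeping a backup copy of the original file.
cp /mnt/boot/solaris/bootenv.rc /mnt/boot/solaris/bootenv.rc.orig
sed 's|sd@0,0:a|sd@1,0:a|' /mnt/boot/solaris/bootenv.rc.orig \
  > /mnt/boot/solaris/bootenv.rc
grep bootpath /mnt/boot/solaris/bootenv.rc
```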

Step 10:      Update the boot disk's menu.lst file
                  This step allows us to skip configuring the BIOS to boot from the root mirror disk
                          cat /boot/grub/menu.lst
                  Edit the file and add an entry that lists the secondary disk as a separate boot option on the GRUB boot screen
                  Add the entry below to the bottom of the file - secondary disk - rootfs1
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Root Mirror Disk - Oracle Solaris 10 8/11 s10x_u10wos_17b X86
                          findroot (rootfs1,0,a)
                          kernel /platform/i86pc/multiboot
                          module /platform/i86pc/boot_archive
                          #---------------------END BOOTADM--------------------
                          #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
                          title Solaris failsafe
                          findroot (rootfs1,0,a)
                          kernel /boot/multiboot -s
                          module /boot/amd64/x86.miniroot-safe
                          #---------------------END BOOTADM--------------------

Step 11:      Check the currently booted device
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'
                  The server is currently booted from the root disk. Now restart and boot from the root mirror disk
                          init 6
                  The server will display:
                  creating boot_archive for /mnt
                  updating /mnt/platform/i86pc/boot_archive

                  While rebooting, select the root mirror disk from the GRUB menu - to check that the secondary root disk is safe to boot.
                  Check which disk the server booted from and make sure it is the root mirror - now we are good to proceed with patching
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'
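The sed one-liner above prints the line matching "bootpath" plus the line that follows it from the prtconf output. A quick way to see how it behaves, using a made-up three-line sample in the style of prtconf -v output:

```shell
# The sed script prints each line containing "bootpath" (p), reads the
# next line (n), and prints that too (p); -n suppresses everything else.
# The sample lines below are illustrative, not real prtconf output.
printf '%s\n' \
  "name='bootpath' type=string items=1" \
  "    value='/pci@0,0/pci8086,3c06@2,2/pci1028,1f38@0/sd@1,0:a'" \
  "name='console' type=string items=1" \
| sed -n '/bootpath/{;p;n;p;}'
```

Only the first two lines are printed - the bootpath property name and its value.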

Step 12:      Reboot the server to boot from the primary boot disk and check the root disk status
                          init 6
                  Check which disk the server booted from and make sure it is the primary boot disk
                          prtconf -v | sed -n '/bootpath/{;p;n;p;}'

Step 13:      Bring the server into single user mode and install the patches
                          who -r
                          init s

Step 14:      Start installing the patches
                          ./installpatchset --s10patchset

Step 15:      Once patch installation completes, reboot the server
                          init 6
                  Verify the new patch level and SMF status
                          uname -v
                          svcs -xv
                          df -h

Step 16:      Now create submirrors and replicas on the root mirror disk
                          echo | format
                          metastat -p
                          metainit -f d20 1 1 c0t1d0s0
                          metainit -f d21 1 1 c0t1d0s1
                          metainit -f d23 1 1 c0t1d0s3
                          metainit -f d24 1 1 c0t1d0s4

Step 17:      Attach the submirrors to the main mirrors
                          metadb -afc3 c2t0d0s7
                          metadb
                          metastat -c

                          metattach d0 d20
                          metattach d1 d21
                          metattach d3 d23
                          metattach d4 d24

                          metastat -c
                  Wait until the resync completes

                          uname -X
                          metastat -c d0

                  Once the resync completes, reboot the server
                          init 6
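The "wait until the resync completes" step can also be scripted by polling metastat. A minimal sketch, assuming the status output mentions "resync" in some form while a sync is running (wait_for_sync and fake_metastat are my own illustrative names; on the real system you would pass `metastat -c d0` as the command):

```shell
# Poll a status command until its output no longer mentions a resync.
# Usage on a real system (assumption): wait_for_sync metastat -c d0
wait_for_sync() {
  while "$@" | grep -qi 'resync'; do
    sleep 30   # check again every 30 seconds
  done
}

# Illustration with a stub standing in for metastat (output is made up):
fake_metastat() { echo "d0 m 12GB d10 d20 (Okay)"; }
wait_for_sync fake_metastat && echo "resync complete"
```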



Solaris10
Solaris 10 x86 SVM Patching
Solaris 10 SVM Patching (x86);
how to patch solaris 10 x86 server;
Update Solaris 10 kernel version;
Solaris 10 x86 Kernel patching;

~Judi~




Nov 21, 2017

Solaris 11 Alternate Boot Environments


Introduction to Managing Boot Environments :

  • A boot environment is a bootable Oracle Solaris environment consisting of a root dataset and, optionally, other datasets mounted underneath it. Exactly one boot environment can be active at a time.
  • A dataset is a generic name for ZFS entities such as clones, file systems, or snapshots. In the context of boot environment administration, the dataset more specifically refers to the file system specifications for a particular boot environment or snapshot.
  • A snapshot is a read-only image of a dataset or boot environment at a given point in time. A snapshot is not bootable.
  • A clone of a boot environment is created by copying another boot environment. A clone is bootable.
  • Shared datasets are user-defined directories, such as /export, that contain the same mount point in both the active and inactive boot environments. Shared datasets are located outside the root dataset area of each boot environment.

About the beadm Utility : 

The beadm utility enables you to perform the following tasks:
  • Create a new boot environment based on the active boot environment
  • Create a new boot environment based on an inactive boot environment
  • Create a snapshot of an existing boot environment
  • Create a new boot environment based on an existing snapshot
  • Create a new boot environment, and copy it to a different zpool
  • Create a new boot environment and add a custom title to the x86 GRUB menu or the SPARC boot menu
  • Activate an existing, inactive boot environment
  • Mount a boot environment
  • Unmount a boot environment
  • Destroy a boot environment
  • Destroy a snapshot of a boot environment
  • Rename an existing, inactive boot environment
  • Display information about your boot environment snapshots and datasets

The beadm utility has the following features:
  • Aggregates all datasets in a boot environment and performs actions on the entire boot environment at once. You no longer need to perform ZFS commands to modify each dataset individually.
  • Manages the dataset structures within boot environments. For example, when the beadm utility clones a boot environment that has shared datasets, the utility automatically recognizes and manages those shared datasets for the new boot environment.
  • Enables you to perform administrative tasks on your boot environments in a global zone or in a non-global zone.
  • Automatically manages and updates the GRUB menu for x86 systems or the boot menu for SPARC systems. For example, when you use the beadm utility to create a new boot environment, that environment is automatically added to the GRUB menu or boot menu.

How to Create a Boot Environment
              beadm create BeName
              beadm create solaris-1

Activate the boot environment.
              beadm activate BeName

Listing Existing Boot Environments and Snapshots
              beadm list
                            -a – Lists all available information about the boot environment. This information includes subordinate datasets and snapshots.
                            -d – Lists information about all subordinate datasets that belong to the boot environment.
                            -s – Lists information about the snapshots of the boot environment.
                            -H – Prevents listing header information. Each field in the output is separated by a semicolon.

Viewing Boot Environment Specifications
              beadm list -a solaris-1

The values for the Active column are as follows:
R – Active on reboot.
N – Active now.
NR – Active now and active on reboot.
“-” – Inactive.
“!” – Unbootable boot environments in a non-global zone are represented by an exclamation point.
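Since beadm list -H produces semicolon-separated, machine-parsable output, these flags are easy to pick out in scripts. A sketch using made-up sample output, assuming the active flags sit in the third field as on Solaris 11, that prints the BE that will be active on reboot:

```shell
# Made-up sample in the style of 'beadm list -H' output:
# name;uuid;active-flags;mountpoint;space;policy;created
sample='solaris;uuid-1;N;/;2.5G;static;2017-10-01
solaris-1;uuid-2;R;;2.5G;static;2017-11-20'

# Print the name of any BE whose flags include R (active on reboot).
printf '%s\n' "$sample" | awk -F';' '$3 ~ /R/ {print $1}'
```

A flag of NR also matches, which is correct - NR means active now and on reboot.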


Viewing Snapshot Specifications
              beadm list -s solaris-1

Changing the Default Boot Environment
              beadm activate BeName
              beadm activate solaris-1

Destroying a Boot Environment
              beadm destroy solaris-1

If a Solaris server does not boot after applying patches, boot the server from an alternate boot environment
SPARC: How to Boot From an Alternate Operating System or Boot Environment
Bring the system to the ok PROM prompt.
Display a list of available boot environments by using the boot command with the -L option.
              boot -L

To boot a specified entry, type the number of the entry and press Return:
Select environment to boot: [1 - 2]:
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/boot-environment

              boot -Z rpool/ROOT/boot-environment
              boot -Z rpool/ROOT/zfs2BE





~Judi~


Oct 12, 2017

Patchadd - Solaris patchadd Return/Exit Codes

The patchadd command returns the following exit codes.

Exit code Meaning
0 No error
1 Usage error
2 Attempt to apply a patch that's already been applied
3 Effective UID is not root
4 Attempt to save original files failed
5 pkgadd failed
6 Patch is obsoleted
7 Invalid package directory
8 Attempting to patch a package that is not installed
9 Cannot access /usr/sbin/pkgadd (client problem)
10 Package validation errors
11 Error adding patch to root template
12 Patch script terminated due to signal
13 Symbolic link included in patch
14 NOT USED
15 The prepatch script had a return code other than 0.
16 The postpatch script had a return code other than 0.
17 Mismatch of the -d option between a previous patch install and the current one.
18 Not enough space in the file systems that are targets of the patch.
19 $SOFTINFO/INST_RELEASE file not found
20 A direct instance patch was required but not found
21 The required patches have not been installed on the manager
22 A progressive instance patch was required but not found
23 A restricted patch is already applied to the package
24 An incompatible patch is applied
25 A required patch is not applied
26 The user specified backout data can't be found
27 The relative directory supplied can't be found
28 A pkginfo file is corrupt or missing
29 Bad patch ID format
30 Dryrun failure(s)
31 Path given for -C option is invalid
32 Must be running Solaris 2.6 or greater
33 Bad formatted patch file or patch file not found
34 The appropriate kernel jumbo patch needs to be installed
35 Later revision already installed
36 Cannot create safe temporary directory
37 Illegal backout directory specified
38 A prepatch, prePatch or a postpatch script could not be executed
39 A compressed patch was unable to be decompressed
40 Error downloading a patch
41 Error verifying signed patch
42 Error unable to retrieve patch information from SQL DB
43 Error unable to update the SQL DB
44 Lock file not available
45 Unable to copy patch data to partial spool directory.
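In a patching wrapper script it helps to translate these codes automatically. A minimal sketch - patchadd_status is my own helper name and covers only a handful of codes from the table above; extend the case statement as needed:

```shell
# Map a patchadd exit code to a short description (subset of the table above).
patchadd_status() {
  case "$1" in
    0)  echo "No error" ;;
    2)  echo "Attempt to apply a patch that's already been applied" ;;
    3)  echo "Effective UID is not root" ;;
    5)  echo "pkgadd failed" ;;
    35) echo "Later revision already installed" ;;
    *)  echo "Unknown patchadd exit code: $1" ;;
  esac
}

# Typical use right after running patchadd:
#   patchadd <patch-dir>
#   patchadd_status $?
patchadd_status 3
```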



Jun 30, 2017

Solaris Storage Commands



Enable Multipathing for 3PAR storage disks in Solaris 10
Multipathing Using Solaris 10 StorEdge Traffic Manager - 3PAR Storage
Configure Third-Party Storage 3PAR Devices - Only with Solaris 10
Edit the file /kernel/drv/scsi_vhci.conf
Add the vendor ID and product ID entries.
Example - the entry below enables Solaris I/O multipathing globally on all the matching 3PAR StoreServ Storage target ports:
device-type-scsi-options-list =
"VendorID1ProductID1", "symmetric-option",
"VendorID2ProductID2", "symmetric-option";
symmetric-option = 0x1000000;
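For reference, 3PAR arrays typically report SCSI vendor ID "3PARdata" and product ID "VV" (the vendor ID occupies the first 8 characters of the combined string), so the concrete entry would look like the following - verify the IDs against your array's documentation before applying:

```
device-type-scsi-options-list =
"3PARdataVV", "symmetric-option";
symmetric-option = 0x1000000;
```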
After enabling multipathing, reboot the system.
         stmsboot -D fp -e
          Or
         # reboot -- -r
          Or
         ok> boot -r


Enabling SSTM/MPxIO Multipathing for Solaris 8 and 9
Edit the /kernel/drv/scsi_vhci.conf file, change the mpxio-disable parameter to a value of no, and then reboot the host
         mpxio-disable="no";

Find Port-wise disk details in Solaris
         for hba in `fcinfo hba-port | grep 'HBA Port WWN' | awk '{ print $4 }'` ; do fcinfo remote-port -ls -p $hba >> /tmp/output ; done
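The loop above appends the details for every HBA port to /tmp/output. To pull just the remote port WWNs back out of that file, an awk filter works; the sample lines below mimic fcinfo remote-port output and the WWN values are made up:

```shell
# Made-up sample in the style of 'fcinfo remote-port -ls' output.
sample='Remote Port WWN: 20410002ac0001f2
        Active FC4 Types: SCSI
Remote Port WWN: 21410002ac0001f2'

# Print field 4 (the WWN) of each "Remote Port WWN:" line.
printf '%s\n' "$sample" | awk '/Remote Port WWN:/ {print $4}'
```

Against the real file, the same filter is: awk '/Remote Port WWN:/ {print $4}' /tmp/output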
                  
         
         

Jun 26, 2017

putty and scp tools in Windows PowerShell and .bat scripts

         
            - plink: a command-line interface to the PuTTY back ends
            - plink is used in Windows PowerShell or .bat scripts to provide PuTTY functionality from the command line.

            - pscp: an SCP client, i.e. command-line secure file copy
            - pscp is used in Windows PowerShell or .bat scripts to provide WinSCP-like file transfer functionality.

1.   Example: execute a script on a remote UNIX server from a Windows machine using plink
             plink.exe -t -l judi -pw <password> 10.192.168.10 "/export/home/judi/health.sh >/dev/null 2>&1"

2.   Example: copy log files from a Windows machine to a remote UNIX server using pscp
             pscp.exe -l judi -pw <password> D:\logs\$files 10.192.168.10:/export/home/judi/logs/

3.   Execute the PowerShell script via .bat
      The above two lines need to be added to execute.ps1, and execute.ps1 is then called from a .bat file using the line below
             C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe D:\Scripts\execute.ps1

Inside execute.ps1, a date string (e.g. for log file names) can be built with:
$date= (Get-Date).ToString("ddMMyyyy")







~Judi~
putty in windows powershell script
winscp in windows powershell script
windows powershell scp

scp from powershell

Jun 22, 2017

Solaris user administration



1.     Create a user account in Solaris.
useradd -u 101 -g 200 -d /export/home/judi -m -s /usr/bin/ksh -c "Judi - Test User" judi
-u : The  UID  of  the new user
-g : An  existing  group's  integer ID or character-string name
-d : Home directory for the user
-m : Create the Home directory for the user
-s : Shell for the user account
-c : Comment for the user account, Any text string

passwd judi; passwd -f judi

2.     Remove all secondary groups for a user in Solaris
             usermod -G "" testuser

passwd -x 91 -n 7 -w 28 oracle
-x : Maximum password age (91 days)
-n : Minimum days between password changes (7 days)
-w : Warn the user 28 days before the password expires


usermod -e 03/31/2017 oracle
- The account automatically expires on 31-Mar-2017

