Friday, June 20, 2008

Solaris ZFS to ZFS LiveUpgrade

Regular UFS to UFS LiveUpgrade used to take a while, since creating the boot environment meant copying the whole root filesystem. Complicated :-).
As of Solaris Express Community Edition build 90, you can use LiveUpgrade with a ZFS root. You can also LiveUpgrade a UFS system to ZFS.

One of the benefits of ZFS root is that lucreate builds the new boot environment from a ZFS snapshot and clone instead of copying files, so lucreate -n finishes in about a second:

# lucreate -n sxce91
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name .
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name PBE Boot Device .
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <sxce91>.
Source boot environment is .
Creating boot environment <sxce91>.
Cloning file systems from boot environment to create boot environment <sxce91>.
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Creating snapshot for on .
Creating clone for on .
No entry for BE <sxce91> in GRUB menu
Population of boot environment <sxce91> successful.
Creation of boot environment <sxce91> successful.
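
The speed comes from ZFS: lucreate snapshots the current root dataset and clones it rather than copying files. You can inspect what it created; rpool below is an assumption, substitute your own root pool name:

# zfs list -r rpool
# zfs list -t snapshot -r rpool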


Mount the DVD image via a loopback (lofi) device:

# mkdir /mnt/iso
# lofiadm -a /export/home/cmihai/Desktop/SunDownloads/sol-nv-b91-x86-dvd.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt/iso
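
When you're done with the image later, unmount it and tear down the lofi device:

# umount /mnt/iso
# lofiadm -d /dev/lofi/1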

LiveUpgrade the new boot environment from the mounted image:
# luupgrade -u -n sxce91 -s /mnt/iso/
No entry for BE <sxce91> in GRUB menu
Copying failsafe kernel from media.
Uncompressing miniroot
Uncompressing miniroot archive (Part2)
13371 blocks
Creating miniroot device
miniroot filesystem is
Mounting miniroot at
Mounting miniroot Part 2 at
Validating the contents of the media </mnt/iso>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <sxce91>.
Checking for GRUB menu on ABE <sxce91>.
Saving GRUB menu on ABE <sxce91>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <sxce91>.
Performing the operating system upgrade of the BE <sxce91>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.

Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <sxce91>.
Adding operating system patches to the BE <sxce91>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file on boot environment <sxce91> contains a log of the
upgrade operation.
INFORMATION: The file on boot environment <sxce91> contains a log of
cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <sxce91>. Before you activate boot
environment <sxce91>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <sxce91> is complete.
Installing failsafe
Failsafe install is complete.
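
Before activating, it's worth reviewing the boot environments; lustatus shows each BE and whether it's active now or on reboot:

# lustatus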

# luactivate sxce91
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE
Saving existing file in top level dataset for BE as //etc/bootsign.prev.

Generating boot-sign for ABE <sxce91>
Saving existing file in top level dataset for BE <sxce91> as //etc/bootsign.prev.
Generating partition and slice information for ABE <sxce91>
Boot menu exists.
Generating direct boot menu entries for PBE.
Generating xVM menu entries for PBE.
Generating direct boot menu entries for ABE.
Generating xVM menu entries for ABE.
GRUB menu has no default setting
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unchanged
Done eliding bootadm entries.


**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -F zfs /dev/dsk/c1t0d0s0 /mnt

3. Run the <luactivate> utility without any arguments from the Parent boot
environment root slice, as shown below:

/mnt/sbin/luactivate

4. luactivate activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment successful.


# init 6
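
After the reboot, lustatus should report sxce91 as the active BE. Once you're happy with it, the old boot environment (and its datasets) can be removed; old_be below is a placeholder for whatever name lustatus shows for the previous environment:

# lustatus
# ludelete old_be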

8 comments:

  1. Anonymous, 3:33 PM

    Did you do these steps on SXCE, or on an OpenSolaris 2008.05 (Indiana) version?

    If on a previous SXCE, since when does SXCE support ZFS root?

    Roman.

  2. On SXCE.

    SXCE has had ZFS root support since build 90, but you need to use the text-mode installer or JumpStart to deploy on ZFS.

    You can also use LiveUpgrade to migrate from UFS to ZFS; there's a quick sketch after the link below.

    See here:

    http://unixsadm.blogspot.com/2008/06/zfs-root-in-solaris-express-community.html
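
    A minimal sketch of the UFS to ZFS move, assuming c1t0d0s4 is a free slice for the new root pool (root pools need a slice with an SMI label) and the pool and BE names are made up:

    # zpool create rpool c1t0d0s4
    # lucreate -n zfsBE -p rpool
    # luactivate zfsBE
    # init 6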

  3. Anonymous, 1:39 AM

    Is it possible to do the zfs to zfs liveupgrade while you have zones up and running?

  4. I've tried it in 91->92 or something like that and it killed my zones. They say it's been fixed in SXCE 93 or 94 or something, but I haven't tried it yet. Bottom line: try it in VMware or something first.

  5. Anonymous, 5:29 PM

    Thanks for the information. I am going to use live upgrade from B90 to B94, but I want to create the BE on a separate disk. When using the lucreate command, would it suffice to say the following without any options?
    #lucreate -n sxce94 -m /:c0t1d0s0

    Do I need to put a zfs option in the above command if I have a zfs boot environment? For example:
    #lucreate -n sxce94 -m /:c0t1d0s0:zfs
    Thanks again..

  6. Why not add the disk to the ZFS pool and just let ZFS create an
    additional filesystem automagically? Or do you want to remove a disk
    from the pool via ZFS upgrade tricks?

  7. Anonymous, 9:34 PM

    I actually just tried that, but I think I made a mistake.
    #zpool add -f rpool c2t0d0
    I have more disk space now but I can't remove the disk from the pool.
    #zpool remove rpool c2t0d0
    returned:
    cannot remove c2t0d0: only inactive hot spares or cache devices can be removed.
    I think I made the mistake of adding the device to the top level vdev.
    I am not sure how to remove the disk from the pool after doing the upgrade :)

  8. You can't remove disks from a pool :-). That's one of the downsides of
    ZFS. You can only remove inactive hot spares and cache devices, like
    that error states :-).

    You can add the empty disk to a new pool and LU to that I guess; something like the sketch below.

    Could be a pretty neat experiment for removing disks from a pool.
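
    A rough, untested sketch, assuming c2t0d0s0 is a slice on a spare disk (root pools need an SMI-labeled slice) and the pool and BE names are invented; whether lucreate -p will clone a ZFS BE into a different pool on this build is exactly the experiment:

    # zpool create rpool2 c2t0d0s0
    # lucreate -n sxce94 -p rpool2
    # luactivate sxce94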
