ZFS

ZFS settings on clondaq2 (RHEL9)

Install zfs packages:

yum install zfs

Get the list of available disks:

fdisk -l

Load the ZFS kernel module:

/sbin/modprobe zfs
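Optionally, to have the module loaded automatically at boot (a minimal sketch using the standard systemd modules-load.d mechanism; the file name is arbitrary):

echo zfs > /etc/modules-load.d/zfs.conf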

Create raidz-2 storage pool:

zpool create data raidz2 nvme2n1 nvme3n1 nvme4n1 nvme5n1 nvme6n1 nvme7n1 nvme8n1 nvme9n1 nvme10n1 nvme11n1

Check it by running zfs list; you should see something like the following:

NAME   USED  AVAIL     REFER  MOUNTPOINT
data   823K   106T      219K  /data

Set up ZED (the ZFS Event Daemon), the ZFS notification service. Configure /etc/zfs/zed.d/zed.rc by setting an email address, and also set ZED_NOTIFY_VERBOSE=1 if desired. Start the service: systemctl (re)start zed.
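For reference, the relevant lines in /etc/zfs/zed.d/zed.rc would look something like this (the address below is a placeholder, not a real account):

ZED_EMAIL_ADDR="admin@example.org"
ZED_NOTIFY_VERBOSE=1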


NOTES:

Useful command: zpool status.

Run zpool scrub on a regular basis to identify data integrity problems.
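One way to schedule that (a sketch only; the day, time, and pool name are arbitrary choices) is a root crontab entry that scrubs the data pool once a month:

0 2 1 * * /usr/sbin/zpool scrub data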

old Solaris settings

RAID partitions on clondaq2

Four ZFS partitions were made on the clonfs RAID system connected to clondaq2: raid0, raid1, raid2 and raid3.

NOTE: at some point (after the machine was renamed from clonxt3 and several reboots) raid3 disappeared. When I tried to create it, ZFS said it already existed, but mount did not work. Maybe it happened when I was working with the same partition from clondaq1. Anyway, I created raid3 again and will keep an eye on it ...

To create raid0-raid3 on clondaq2:

zpool create -f raid0 c0t600C0FF00000000009839205A879C400d0s2
zpool create -f raid1 c0t600C0FF0000000000983926ED5760700d0s2
zpool create -f raid2 c0t600C0FF000000000098392374595CB00d0s2
zpool create -f raid3 c0t600C0FF00000000009839252F7ADDC00d0s2
#zpool create -f raid2 c0t600C0FF00000000009839260D9481700d0s2
#zpool create -f raid3 c0t600C0FF0000000000983927F9F306900d0s2
#zpool create -f raid2 c0t600C0FF00000000009839223AFDC0A00d0s2
#zpool create -f raid3 c0t600C0FF0000000000983920939EB6300d0s2



zfs set mountpoint=/mnt/raid0 raid0
zfs set mountpoint=/mnt/raid1 raid1
zfs set mountpoint=/mnt/raid2 raid2
zfs set mountpoint=/mnt/raid3 raid3

To destroy, for example, raid3 on clondaq2:

zfs umount /mnt/raid3
zpool destroy raid3

To display the list of all zfs partitions:

zfs list

NOTE: After a reboot, ZFS partitions may be (or will be?) left unmounted; for example, on clon10 'zfs list' returns the following:

NAME                   USED  AVAIL  REFER  MOUNTPOINT
data                   134G   214G   134G  /data
data1                  131K   182G    49K  /data1

but /data and /data1 do not exist or are empty. To mount those two partitions, do the following as 'root':

zfs mount data
zfs mount data1
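Alternatively, assuming the pools are already imported, all ZFS file systems can be mounted at once:

zfs mount -a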

RAIDOLD on clon10

mkdir /raidold
zpool create -f raidold \
   c1t32d0 c1t33d0 c1t34d0 c1t35d0 c1t36d0 c1t37d0 c1t38d0 c1t39d0 c1t40d0 c1t41d0 c1t42d0 \
   c1t48d0 c1t49d0 c1t50d0 c1t51d0 c1t52d0 c1t53d0 c1t54d0 c1t55d0 c1t56d0 c1t57d0 c1t58d0 \
   c1t64d0 c1t65d0 c1t66d0 c1t67d0 c1t68d0 c1t69d0 c1t70d0 c1t71d0 c1t72d0 c1t73d0 c1t74d0 \
   c1t80d0 c1t81d0 c1t82d0 c1t83d0 c1t84d0 c1t85d0 c1t86d0 c1t87d0 c1t88d0 c1t89d0 c1t90d0
zfs set mountpoint=/raidold raidold


extra disks on clons

clon01:

zpool create -f space c1t1d0
zfs list
  NAME                            USED  AVAIL  REFER  MOUNTPOINT
  rpool                          17.9G  49.0G    94K  /rpool
  rpool/ROOT                     8.94G  49.0G    18K  legacy
  rpool/ROOT/s10s_u6wos_07b      8.94G  49.0G  6.42G  /
  rpool/ROOT/s10s_u6wos_07b/var  2.53G  49.0G  2.53G  /var
  rpool/dump                     1.00G  49.0G  1.00G  -
  rpool/export                     38K  49.0G    20K  /export
  rpool/export/home                18K  49.0G    18K  /export/home
  rpool/swap                        8G  57.0G    24K  -
  space                          89.5K  66.9G     1K  /space

To change the swap size, for example, do the following:

zfs set volsize=4G rpool/swap

To add a new disk to an existing ZFS pool, do the following:

iostat -E (shows all disks available)
zpool add rpool ...


Hardware Maintenance

Administrator's Manual: https://clonwiki0.jlab.org/wiki/clondocs/Docs/SunFireX4500AdminGuide.pdf (hardware map: see page 94)

To check disk status, run:

zpool status

To offline a failing drive, run:

zpool offline pool-name disk-name

(A -t flag allows the disk to come back online after a reboot.)
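For example (hypothetical names, using the pool and disk names that appear in the commands nearby):

zpool offline -t export c0t1d0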

Unconfigure the disk. For example, if the disk name is 'c0t1d0', run:

cfgadm -c unconfigure c0::dsk/c0t1d0

On sfs61: Open the cover and look for the disk with the blue LED on. Replace it with a new disk. Do not keep the cover open for more than 90 sec!

Once the drive has been physically replaced, run the replace command against the device:

zpool replace pool-name device-name

(for example, 'zpool replace export /dev/dsk/c0t3d0'; it will take a while, and 'zpool status' will show that the disk has 'resilvered').

After an offlined drive has been replaced, it can be brought back online:

zpool online pool-name disk-name

It is a good idea to run:

zpool scrub pool-name

Firmware upgrades may cause the disk device ID to change. ZFS should be able to update the device ID automatically, assuming that the disk was not physically moved during the update. If necessary, the pool can be exported and re-imported to update the device IDs.
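A sketch of that export/import cycle (pool-name is a placeholder; the pool must not be in use while it is exported):

zpool export pool-name
zpool import pool-name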




from the web

Useful link: http://www.princeton.edu/~unix/Solaris/troubleshoot/zfs.html


OK, ZFS is now in the tree; what now? Below you'll find some instructions on how to quickly get it up and running.

First of all you need some disks. Let's assume you have three spare SCSI disks: da0, da1, da2.

Add a line to your /etc/rc.conf to start ZFS automatically on boot:

# echo 'zfs_enable="YES"' >> /etc/rc.conf

Load the ZFS kernel module, for the first time by hand:

# kldload zfs.ko

Now, setup one pool using RAIDZ:

# zpool create tank raidz da0 da1 da2

It should automatically mount /tank/ for you.

OK, now let's put /usr/ on ZFS and propose a file system layout. I know you probably have some files there already, so we will work in the /tank/usr directory and, once we're ready, we will just change the mountpoint to /usr.

# zfs create tank/usr

Create the ports/ file system and enable gzip compression on it, because most likely we will have only text files there. On the other hand, we don't want to compress ports/distfiles/, because we already keep compressed files in there:

# zfs create tank/usr/ports
# zfs set compression=gzip tank/usr/ports
# zfs create tank/usr/ports/distfiles
# zfs set compression=off tank/usr/ports/distfiles

(You do see how your life is changing, don't you?:))

Let's create the home file system, and my own home/pjd/ file system. I know we use RAIDZ, but I want to have a directory where I put extremely important stuff, so I'll define that each block has to be stored in three copies:

# zfs create tank/usr/home
# zfs create tank/usr/home/pjd
# zfs create tank/usr/home/pjd/important
# zfs set copies=3 tank/usr/home/pjd/important

I'd like to have a directory with music, etc. that I share over NFS. I don't really care about this stuff and my computer is not very fast, so I'll just turn off checksumming (this is only for example purposes! please benchmark before doing it, because it's most likely not worth it!):

# zfs create tank/music
# zfs set checksum=off tank/music
# zfs set sharenfs=on tank/music

Oh, I almost forgot. Who cares about access time updates?

# zfs set atime=off tank

Yes, we set it only on tank and it will be automatically inherited by the others.
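To double-check the inheritance (an illustrative command using the datasets created above):

# zfs get -r atime tank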

It will also be good to be informed if everything is fine with our pool:

# echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf

For some reason you may still need a UFS file system, for example if you use ACLs or extended attributes, which are not yet supported by our ZFS. If so, why not just use ZFS to provide the storage? This way we gain cheap UFS snapshots, UFS clones, etc. by simply using ZVOLs.

# zfs create -V 10g tank/ufs
# newfs /dev/zvol/tank/ufs
# mount /dev/zvol/tank/ufs /ufs

# zfs snapshot tank/ufs@20070406
# mount -r /dev/zvol/tank/ufs@20070406 /ufs20070406

# zfs clone tank/ufs@20070406 tank/ufsok
# fsck_ffs -p /dev/zvol/tank/ufsok
# mount /dev/zvol/tank/ufsok /ufsok

Want to encrypt your swap and still use ZFS? Nothing more trivial:

# zfs create -V 4g tank/swap
# geli onetime -s 4096 /dev/zvol/tank/swap
# swapon /dev/zvol/tank/swap.eli

Trying to do something risky with your home? Snapshot it first!

# zfs snapshot tank/home/pjd@justincase

Turns out it was more stupid than risky? Roll back your snapshot!

# zfs rollback tank/home/pjd@justincase
# zfs destroy tank/home/pjd@justincase

OK, everything works, so we may set tank/usr as our real /usr:

# zfs set mountpoint=/usr tank/usr

Don't forget to read the zfs(8) and zpool(8) manual pages and Sun's ZFS administration guide:

http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

--

================================== from the web =======================================

ZFS and Containers: An Example

This section is a step-by-step guide that shows how to perform certain ZFS file system tasks inside of Solaris Containers; for example, taking snapshots and managing data compression. It does this by going through the following steps:

   * Creating a zpool
   * Creating a Zone
   * Allocating a ZFS File System to a Zone
   * Creating New File Systems
   * Applying Quota to the File Systems
   * Changing the Mountpoint of a File System
   * Setting the Compression Property
   * Taking a Snapshot

Each of these steps is described in detail below.

Creating a zpool

ZFS uses device names or partition names when dealing with pools and devices. For a device, this will be something like c1t0d0 (for a SCSI device) or c1d1 (for an IDE device). For a partition, it will be something like c1t0d0s0 (for a SCSI device) or c1d1s0 (for an IDE device). This example creates a pool that is mirrored using two disks.

  1. To create a zpool in the global zone, use the zpool create command. Typically, you use two devices to provide redundancy.
     Global# zpool create mypool mirror c2t5d0 c2t6d0
     Note that the zpool create command may fail if the devices are in use or contain some types of existing data (e.g. a UFS file system). If they are in use, you will need to unmount them or otherwise stop using them. If they contain existing data, you can use the -f (force) flag to override the safety check, but be sure that you are not destroying any data you want to retain.
  2. Examine the pool properties using the zpool list command.
     Global# zpool list
     NAME     SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
     mypool   199G  164K  199G   0%   online  --
     This shows you that there is one zpool, named mypool, with a capacity of 199GBytes.


Creating a Zone

To show ZFS working in an environment that is isolated from other applications on the system, you need to create a zone. To create a zone:

  1. Create a directory where the zone file system will reside, using the mkdir command. Be sure to choose a location where the filesystem has at least 80MBytes of available space.
     Global# mkdir /zones
     Note that in this example, for the sake of brevity, the root file system of the zone is a UFS file system.
  2. Configure the zone (myzone), using the zonecfg command, and specify the location of the zone's files (/zones/myzone). Use the following series of commands.
     Global# zonecfg -z myzone
     myzone: No such zone configured
          Use 'create' to begin configuring a new zone
          zonecfg:myzone> create
          zonecfg:myzone> set zonepath=/zones/myzone
          zonecfg:myzone> verify
          zonecfg:myzone> commit
          zonecfg:myzone> exit
     Again, for the purpose of streamlining, this example uses a very minimal zone. For more details on creating zones see the Solaris Containers How To Do Stuff guide at: sun.com/software/solaris/howtoguides/containersLowRes.jsp
  3. Install the zone by using the zoneadm command.
     Global# zoneadm -z myzone install
     Preparing to install zone <myzone>
          [Output from zoneadm, this may take a few mins]
  4. Boot the zone to complete the installation, using the zoneadm command.
     Global# zoneadm -z myzone boot
  5. Use the zlogin command to connect to the zone console.
     Global# zlogin -C myzone
          [Connected to zone 'myzone' console]
          [Initial zone boot output, service descriptions are loaded etc.]
     It may take a short while for the first boot to configure everything, load all the service descriptors, and so on. You will need to answer the system configuration questions. Some suggestions are:
     Terminal=(12)X Terminal Emulator (xterms)
     Not Networked
     No Kerberos
     Name service = None
     Time Zone = your-time-zone
     root passwd = (Your choice—remember it though!)
     The zone will reboot after you have provided the configuration information.
  6. Before you can proceed to the next stage, the configured zone needs to be shut down (configuration changes are only applied when the zone boots).
     Global# zlogin myzone init 5


Allocating a ZFS File System to a Zone

Now that you have a zpool (mypool) and a zone (myzone) you are ready to allocate a ZFS file system to the zone.

  1. To create a ZFS file system, use the zfs create command.
     Global# zfs create mypool/myzonefs
  2. To apply a quota to the file system, use the zfs set quota command.
     Global# zfs set quota=5G mypool/myzonefs
     The file system and all of its child file systems can be no larger than the designated quota. Note that both these steps must be performed in the global zone. Also notice that creating the file system in ZFS is much simpler than with a traditional file system/volume manager combination.
     To illustrate the isolation/security properties of containers with ZFS this example now creates a ZFS file system that will remain outside the container. There is no need to apply a quota to this outside file system.
  3. To create this other file system, again use the zfs create command.
     Global# zfs create mypool/myfs
  4. To show the properties of all the pool and the file systems, use the zfs list command.
     Global# zfs list
     NAME              USED   AVAIL  REFER  MOUNTPOINT
     mypool            396G   197G   99.5K  /mypool
     mypool/myfs       98.5K  197G   98.5K  /mypool/myfs
     mypool/myzonefs   98.5K  5G     98.5K  /mypool/myzonefs


     To make the file system (myzonefs) available in the zone (myzone), the zone configuration needs to be updated.
  5. To update the zone configuration, use the zonecfg command.
     Global# zonecfg -z myzone
     zonecfg:myzone> add dataset
     zonecfg:myzone:dataset> set name=mypool/myzonefs
     zonecfg:myzone:dataset> end
     zonecfg:myzone> commit
     zonecfg:myzone> exit
     The mypool/myzonefs file system is now added to the zone configuration. Note that these steps must be performed with the zone shut down, otherwise the zone configuration changes would not be visible until the next reboot. To check that the zone is shut down try logging into it using zlogin myzone. If the zone is shut down the login will fail; if the zone is running you will see a login prompt—login as root and shut the zone down with init 5. These steps are performed in the global zone.
  6. Now boot the zone.
     Global# zoneadm -z myzone boot
  7. Log in to the zone. (Leave a few seconds for the zone to boot.)
     Global# zlogin -C myzone
     [Connected to zone 'myzone' pts/3]
     [Usual Solaris login sequence]
  8. List the ZFS file systems in the zone.
     MyZone# zfs list
     NAME              USED  AVAIL  REFER  MOUNTPOINT
     mypool            0M    200B   --     /mypool
     mypool/myzonefs   8K    5G     8K     /mypool/myzonefs


     Note the 5GByte maximum available from the external quota and that the other file systems in the pool (mypool/myfs) are not visible. This demonstrates the isolation property that Containers provide.


Creating New File Systems

Administering ZFS file systems from the non-global zone is done just like it is in the global zone, although you are limited to operating within the file system that is allocated to the zone (mypool/myzonefs). New ZFS file systems are always created as a child of this file system because this is the only ZFS file system the non-global zone can see. The administrator in the non-global zone can create these file systems. There is no need to involve the administrator of the global zone, though that administrator could do so if it were necessary.

  1. To create a new file system, use the zfs create command.
     MyZone# zfs create mypool/myzonefs/tim
     MyZone# zfs list
     NAME                  USED   AVAIL  REFER  MOUNTPOINT
     mypool                594M   197G   99K    /mypool
     mypool/myzonefs       197K   5.00G  98.5K  /mypool/myzonefs
     mypool/myzonefs/tim   98.5K  5.00G  98.5K  /mypool/myzonefs/tim


     The non-global zone administrator can create as many child file systems as s/he wants, and each child file system can have its own child file systems, forming a hierarchy.
     To show that the non-global zone administrator is limited to the assigned file systems, this example tries to break security by creating a file system outside the container's "space".
  2. Try to create another file system outside of mypool/myzonefs, using the zfs create command.
     MyZone# zfs create mypool/myzonefs1
     cannot create 'mypool/myzonefs1': permission denied
     As you can see, ZFS and zones security denies permission for the non-global zone to access resources it has not been allocated and the operation fails.


Applying Quota to the File Systems

Typically, to prevent the user from consuming all of the space, a non-global zone administrator will want to apply a quota to the new file system. Of course, the child's quota can't be more than 5GByte, as that's the quota specified by the global zone administrator for all of the file systems below mypool/myzonefs.

  1. To set a quota on our new file system, use the zfs set quota command.
     MyZone# zfs set quota=1G mypool/myzonefs/tim
     MyZone# zfs list
     NAME                  USED   AVAIL  REFER  MOUNTPOINT
     mypool                508M   197G   99K    /mypool
     mypool/myzonefs       198K   5.00G  99K    /mypool/myzonefs
     mypool/myzonefs/tim   98.5K  1024M  98.5K  /mypool/myzonefs/tim


     The administrator of the non-global zone has set the quota of the child file system to be 1G. They have full authority to do this because they are operating on their delegated resources and do not need to involve the global zone administrator.
     The ZFS property inheritance mechanism applies across zone boundaries, so the non-global zone administrator can specify his/her own property values should s/he wish to do so. As with normal ZFS property inheritance, these override inherited values.


Changing the Mountpoint of a File System

Now that the file system is set up and has the correct quota assigned to it, it is ready for use. However, the place where the file system appears (the mountpoint) is partially dictated by what the global zone administrator initially chose as the pool name (in this example, mypool/myzonefs). But typically, a non-global zone administrator would want to change it.

  1. To change the mountpoint, use the zfs set mountpoint command.
     MyZone# zfs set mountpoint=/export/home/tim mypool/myzonefs/tim
     MyZone# zfs list
     NAME                  USED   AVAIL  REFER  MOUNTPOINT
     mypool                508M   197G   99K    /mypool
     mypool/myzonefs       198K   5.00G  99K    /mypool/myzonefs
     mypool/myzonefs/tim   98.5K  1024M  98.5K  /export/home/tim


     Note that the mountpoint can be changed for any file system independently. 


Setting the Compression Property

The next example demonstrates the compression property. If compression is enabled, ZFS will transparently compress all of the data before it is written to disk.

The benefits of compression are both saved disk space and possible write speed improvements.

  1. To see what the current compression setting is, use the zfs get command.
     MyZone# zfs get compression mypool mypool/myzonefs mypool/myzonefs/tim
     NAME                  PROPERTY     VALUE  SOURCE
     mypool                compression  off    default
     mypool/myzonefs       compression  off    default
     mypool/myzonefs/tim   compression  off    default


     Be aware that the compression property on the pool is inherited by the file system and its child file systems. So if the non-global zone administrator sets the compression property for the delegated file system, it will be set for everything below as well.
  2. To set the compression for the file system, use the zfs set command.
     MyZone# zfs set compression=on mypool/myzonefs
  3. Examine the compression property again in the non-global zone.
     MyZone# zfs get compression mypool mypool/myzonefs mypool/myzonefs/tim
     NAME                  PROPERTY     VALUE  SOURCE
     mypool                compression  off    default
     mypool/myzonefs       compression  on     local
     mypool/myzonefs/tim   compression  on     inherited from mypool/myzonefs


     Note the compression property has been inherited by mypool/myzonefs/tim as with normal ZFS administration.


Taking a Snapshot

One of the major advantages of ZFS is the ability to create an instant snapshot of any file system. By delegating a file system to a non-global zone, this feature becomes available as an option for the non-global zone administrator.

  1. To take a snapshot named "1st" of the file system, use the zfs snapshot command.
     MyZone# zfs snapshot mypool/myzonefs@1st
     MyZone# zfs list
     NAME                  USED   AVAIL  REFER    MOUNTPOINT
     mypool                512K   99K    default  /mypool
     mypool/myzonefs       198K   5.00G  99K      /mypool/myzonefs
     mypool/myzonefs@1st   0K     --     99K      --
     mypool/myzonefs/tim   98.5K  1024M  98.5K    /export/home/tim


     As with ZFS file systems in the global zone, this snapshot is now accessible from the root of the file system in .zfs/snapshot/1st.
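     For example (a hypothetical listing, assuming the mountpoint shown above), the snapshot contents can be browsed read-only:
     MyZone# ls /mypool/myzonefs/.zfs/snapshot/1st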



Summary

Once a zone has been created and a ZFS file system has been allocated to it, the administrator for that (non-global) zone can create file systems, take snapshots, create clones, and perform all the other functions of an administrator—within that zone. Yet the global zone, and any other zones, are fully isolated from whatever happens in that zone.

The integration of Solaris Containers and Solaris ZFS is just another way that the Solaris 10 OS is providing cost benefits to customers by allowing them to safely consolidate applications and more easily manage the data those applications use.