Updates to Xsan in macOS require a change in the Xsan connection URL

When OS X El Capitan was released, Xsan wouldn’t always automount: the Xsan mounting routines ran before certain SAN controllers had initialised, and OS X would not re-attempt the mount later. Users had to mount the Xsan volume manually in Terminal using:

sudo xsanctl mount <volumeName>

A script was released by Apple engineering as a workaround – more on that in the PreDelay section below.

In later macOS builds this script can’t (easily) be installed, and in Monterey basically not at all. The suggested workaround is to edit the connection address string on affected workstations, in the file:

/Library/Preferences/Xsan/fsnameservers

The connection string (whether that be a hostname or an IP address) needs to be suffixed with @_cluster0/_addom0 – the 0s need to be confirmed using cvadmin.

When you launch cvadmin (sudo cvadmin), it will report on the connected volume and show both the cluster number and the addom number.

Note these down, then issue the quit command to exit cvadmin.

In this example, the connection string is bccxserve01.bccmedia.private and the cluster & addom values are both 0.

So you would edit the fsnameservers file to contain:

bccxserve01.bccmedia.private@_cluster0/_addom0
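If you’d rather make the change over SSH or in Terminal than in a text editor, a minimal sketch, assuming a single nameserver entry and the values found above:

# Back up the existing file first
sudo cp /Library/Preferences/Xsan/fsnameservers /Library/Preferences/Xsan/fsnameservers.bak
# Rewrite the connection string with the cluster/addom suffix
echo 'bccxserve01.bccmedia.private@_cluster0/_addom0' | sudo tee /Library/Preferences/Xsan/fsnameservers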

 

Fix the deployment profile:

If you used Profile Manager to configure & deploy the profile to the workstations, you can instead edit the connection string there and deploy the updated profile after removing the old one.
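The old profile can also be removed from Terminal before deploying the corrected one; a quick sketch, where the profile identifier is hypothetical – find your own with sudo profiles list:

# Remove the outdated Xsan profile by its identifier (identifier shown is a placeholder)
sudo profiles remove -identifier com.example.xsan.settings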

 

PreDelay Xsan Mount on Macs running OS X El Capitan (10.11.x) connected via Promise SanLink2

It has been noted that Macs running El Capitan will attempt to auto mount Xsan volume(s) at startup only once. If the volume(s) fail to mount then, unlike previous versions of OS X, the system will not re-attempt to auto mount any Xsan volumes.
It has also been noted that the Promise SanLink2 FC units, although faster and more stable, are slower to initialise at startup, and their LUN scanning also takes longer than, say, internal PCIe cards or first-generation SanLink units.
With these two behaviours combined, users are forced to mount the Xsan volume manually via a Terminal command, or to create a shell script containing that command and place it in the user’s startup items folder. The Terminal command is:
sudo xsanctl mount volumeName
(volumeName is the actual Xsan volume name that appears when mounted; the command can also work without sudo)
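As a rough illustration of the startup-script approach, such a script might look like the following (MyVolume is a placeholder for your actual volume name; remember to mark the script executable with chmod +x):

#!/bin/sh
# Attempt to mount the Xsan volume at login; this can still fail if the
# SAN controller has not finished initialising yet
xsanctl mount MyVolume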
A set of scripts has been developed (and is attached to this article) which forces the Xsan service (xsand) to delay auto mounting until it appears to the system that LUN information has stopped changing.
To install the scripts, download the file to, say, your Downloads folder.
Open a Terminal window and enter in:
sudo tar xvzpf ~/Downloads/xsandelay.tgz -C /
You will see output very similar to the below; the errors can be ignored:
x ./
x ./._Library
x ./Library/: Can't set user=0/group=0 for Library
x ./usr/: Can't set user=0/group=0 for usr
Can't update time for usr
x ./usr/local/
x ./usr/local/libexec/
x ./usr/local/libexec/xsandelay.py
x ./Library/LaunchDaemons/
x ./Library/LaunchDaemons/com.apple.support.ht205706.xsandelay.plist
tar: copyfile unpack (./Library) failed: Operation not permitted
Then we need to activate the script with:
sudo launchctl load -w /Library/LaunchDaemons/com.apple.support.ht205706.xsandelay.plist
Once this has been done, reboot the system, login, and wait 30-60 seconds. As the SanLink2 performs LUN scanning (as indicated by the amber LED), the Xsan auto mount will be delayed until the LUN information has stopped changing for a small amount of time, then the Xsan volume(s) should auto mount to the Desktop / Finder.
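If the volume still doesn’t mount, it’s worth confirming that launchd actually registered the delay job; a quick check:

# Should list the xsandelay job if it loaded successfully
sudo launchctl list | grep xsandelay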
Download xsandelay.tgz

Xsan CVADMIN Man Page

CVADMIN

Section: Maintenance Commands (8) Updated: August 2015

NAME

cvadmin – Administer a StorNext File System

SYNOPSIS

cvadmin [-H FSMHostName] [-F FileSystemName] [-M] [-f filename] [-e command1 -e command2…] [-x]

DESCRIPTION

cvadmin is an interactive command used for general purpose administration of a StorNext File System including:
1. displaying file system and client status
2. activating a file system currently in stand-by mode
3. viewing and modifying stripe group attributes
4. enabling File System Manager (FSM) tracing
5. displaying disk and path information for the local system
6. forcing FSM failover
7. fetching FSM usage and performance statistics
8. temporarily enabling or disabling global file locking
9. generating a report of open files
10. listing currently held file locks
11. starting, restarting and stopping of daemon processes
12. resetting RPL information

OPTIONS

Invoke cvadmin to start the interactive session and list the running File System Managers (FSMs). (Note: StorNext system services must be started prior to running cvadmin. In particular, the local fsmpm(8) process must be active.)
Then (optionally) use the select command described below to pick an FSM to connect to. Once connected, the command will display basic information about the selected file system and prompt for further commands.
Note that a few commands such as paths, disks, start, and stop require information obtained from the local fsmpm(8) only, so there is no need to select an FSM prior to using them.

USAGE

-H FSMHostName
Connect to the FSM located on the machine FSMHostName. By default cvadmin will attempt to connect to an FSM located on the local machine.
-F FileSystemName
Automatically set the file system FileSystemName as the active file system in cvadmin.
-M
When listing file systems with the select command, display [managed] next to each file system with DataMigration enabled. This option is currently only intended for use by support personnel.
-f filename
Read commands from filename.
-e command
Execute command(s) and exit.
-x
Enable extended commands.

COMMANDS

The cvadmin commands can be used to display and modify the SNFS active configuration. When a modification is made, it exists only as long as the FSM is running. More permanent changes can be made in the configuration file. Refer to the snfs_config(5) man page for details. The following commands are supported.
activate file_system_name [hostname_or_IP_address]
Activate a file system file_system_name. This command may cause an FSM to activate. If the FSM is already active, no action is taken.
activate file_system_name number_of_votes
Quantum Internal only. Bypass the election system and attempt to activate the fsm on this node.
debug [[+|-] flag [ … ]]
View or set the File System Manager’s debugging flags. Entering the command with no flag will return the current settings, the location of the FSM log file and a legend describing what each setting does. By entering the command with a flag list, the FSM debugging flags will be set accordingly. Each flag can be either a name or a numeric value. Names will be mapped to their numeric value, and may be abbreviated as long as they remain unique. Numeric values are specified using a standard decimal or hexadecimal (0x) value of up to 32 bits. Using ‘+’ or ‘-’ enables (‘+’) or disables (‘-’) only the selected flags, leaving all other flags unchanged. NOTE – Setting debugging flags will severely impact the FSM’s performance! Do this only when directed by a Quantum specialist.
disks [refresh]
Display the StorNext disk volumes local to the system that cvadmin is attached to. Using the optional refresh argument will force the fsmpm to re-scan all volumes before responding. If the fsmpm’s view of the disks in any file system changes compared with the FSM’s view of that client’s disks as a result of the refresh, a disconnect and reconnect to the FSM will take place to resynchronise the file system state.
disks [refresh] fsm
Display the StorNext meta-data disk volumes in use by the fsm. If the optional refresh argument is used, additional paths to these volumes may be added by the fsm.
down groupname
Down the stripe group groupname. This will disable access to the stripe group.
fail [file_system_name|index_number]
Initiate an FSM Failover of file system file_system_name. This command may cause a stand-by FSM to activate. If an FSM is already active, the FSM will shut down. A stand-by FSM will then take over. If a stand-by FSM is not available the primary FSM will re-activate after failover processing is complete.
files
Report counts of files, directories, symlinks and other objects which are anchored by a user type inode. These include named streams, block and character device files, fifos or pipes and named sockets. If the file system is undergoing conversion to StorNext 5.0, conversion progress is displayed and counters reflect the count of converted objects.
fsmlist [file_system_name] [ on hostname_or_IP_address]
Display the state of FSM processes, running or not. Optionally specify a single file_system_name to display. Optionally specify the host name or IP address of the system on which to list the FSM processes.
filelocks
Query cluster-wide file/record lock enforcement. Currently cluster-wide file locks are automatically used on Unix. Windows file/record locks are optional. If enabled, byte-range file locks are coordinated through the FSM, allowing a lock set by one client to block overlapping locks by other clients. If disabled, byte-range locks are local to a client and do not prevent other clients from getting byte-range locks on a file; however, they do prevent overlapping lock attempts on the same client.
help (?)
The help or ? command will display a command usage summary.
latency-test [index_number|all] [seconds]
Run an I/O latency test between the FSM process and one client or all clients. The default test duration is 2 seconds.
metadata
Report metadata usage. Also provide an estimate on the value of bufferCacheSize that will allow all metadata to be cached.
metadump { status | rebuild | suspend | resume }
Manage the metadump functionality of the selected FSM. The status command prints the progress of the current metadump activity, if any. If capturing a new metadump or restoring an existing one, the percentage complete will be displayed. Otherwise, the current update backlog is displayed.
The rebuild command will force the FSM to discard the existing metadump and capture a new one. This is performed online.
The suspend and resume commands are used internally to facilitate backups of the metadump files. They should not be invoked manually except under direction from support.
multipath groupname {balance|cycle|rotate|static|sticky}
StorNext has the capability of utilizing multiple paths from a system to the SAN disks. This capability is referred to as “multi-pathing”, or sometimes “multi-HBA support” (HBA = Host Based Adaptor).
At “disk discovery” time, for each physical path (HBA), a scan of all of the SAN disks visible to that path is initiated, accumulating information such as the SNFS label, and where possible, the disk (or LUN) serial number.
At mount time, the visible set of StorNext labeled disks is matched against the requested disks for the file system to be mounted.
If the requested disk label appears more than once, then a “multi-path” table entry is built for each available path.
If the disk (or LUN) device is capable of returning a serial number, then that serial number is used to further verify that all of the paths to that StorNext labeled device share the same serial number.
If the disk (or LUN) device is not capable of returning a serial number then the device will be used, but StorNext will not be able to discern the difference between a multi-path accessible device, and two or more unique devices that have been assigned duplicate StorNext labels.
The presence of serial numbers can be validated by using the “cvlabel -ls” command. The “-s” option requests the displaying of the serial number along with the normal label information.
There are five modes of multi-path usage which can also be specified in the filesystem config file. In cases where there are multiple paths and an error has been detected, the algorithm falls back to the rotate method. The balance and cycle methods will provide the best aggregate throughput for a cluster of hosts sharing storage.
balance
The balance mode provides load balancing across all the available, active, paths to a device. At I/O submission time, the least used HBA/controller port combination is used as the preferred path. All StorNext File System I/O in progress at the time is taken into account.
cycle
The cycle mode rotates I/O to a LUN across all the available, active, paths to it. As each new I/O is submitted, the next path is selected.
rotate
The rotate mode is the default for configurations where the operating system presents multiple paths to a device. In this mode, as an I/O is initiated, an HBA controller pair to use for this I/O is selected based on a load balance method calculation.
If an I/O terminates in error, a “time penalty” is assessed against that path, and another “Active” path is used. If there are not any “Active” paths that are not already in the “error penalty” state, then a search for an available “Passive” path will occur, possibly triggering an Automatic Volume Transfer to occur in the Raid Controller.
static
The static mode is the default for all disks other than dual RAID controller configurations that are operating in Active/Active mode with AVT enabled.
As disks (or LUNs) are recognized at mount time, they are statically associated with an HBA in rotation, i.e. given 2 HBAs, and for disks/LUNs:
disk 0 -> HBA 0
disk 1 -> HBA 1
disk 2 -> HBA 0
disk 3 -> HBA 1

and so on...
sticky
In this mode, the path to use for an I/O is based on the identity of the target file. This mode will better utilize the controller cache, but will not take advantage of multiple paths for a single file.
The current mode employed by a stripe group can be viewed via the “cvadmin” command “show long”, and modified via the “cvadmin” command “multipath”. Permanent modifications may be made by incorporating a “MultiPathMethod” configuration statement in the configuration file for a stripe group.
In the case of an I/O error, that HBA is assessed an “error penalty”, and will not be used for a period of time, after which another attempt to use it will occur.
The first “hard” failure of an HBA often results in a fairly long time-out period (anywhere from 30 seconds to a couple of minutes).
With most HBA’s, once a “hard” failure (e.g. unplugged cable) has been recognized, the HBA immediately returns failure status without a time-out, minimizing the impact of attempting to re-use the HBA periodically after a failure. If the link is restored, most HBA’s will return to operational state on the next I/O request.
paths
Display the StorNext disk volumes visible to the local system. The display is grouped by <controller> identity, and will indicate the “Active” or “Passive” nature of the path if the Raid Controller has been recognized as configured in Active/Active mode with AVT enabled.
proxy [long]
Display Disk Proxy servers and optionally display the disks they serve for this file system.
proxy who hostname
The “who” option displays all proxy connections for the specified host.
qos
Display per-stripe group QOS statistics. Per-client QoS statistics are also displayed under each qos-configured stripe group.
quit
This command will disconnect cvadmin from the FSM and exit.
ras enq event “detail string”
Generate an SNFS RAS event. For internal use only.
ras enq event reporting_FRU violating_FRU “detail string”
Generate a generic RAS event. For internal use only.
repfl
Generate a report that displays the file locks currently held. Note: this command is only intended for debugging purposes by support personnel. In future releases, the format of the report may change or the command may be removed entirely. Running the repfl command will write out a report file and display the output filename.
repof
Generate a report that displays all files that are currently open on each StorNext client. Only file inode numbers and stat information are displayed, filenames are not displayed. Running the repof command will write out a report file and display the output filename. In future releases, the format of the report may change.
resetrpl [clear]
Repopulate Reverse Path Lookup (RPL) information. The optional clear argument causes existing RPL data to be cleared before starting repopulation. Note: resetrpl is only available when cvadmin is invoked with the -x option. Running resetrpl may significantly delay FSM activation. This command is not intended for general use. Only run resetrpl when recommended by Technical Support.
restartd daemon [once]
Restart the daemon process. For internal use only.
select [file_system_name|N]
Select an active FSM to view and modify. If no argument is specified, a numbered list of FSMs and running utilities will be displayed. If there is only one active file system in the list, it will automatically be selected. When a running utility is displayed by the select command, it will show the following information: first, the name of the file system; second, in brackets “[]”, the name of the utility that is running; third, a letter indicating the access type of the operation – (W) for read-write access, (R) for read-only access and (U) for unique access; finally, the location and process id of the running utility.
If file_system_name is specified, then cvadmin will connect to the current active FSM for that file system. If N (a number) is specified, cvadmin will connect to the Nth FSM in the list. However, only active FSMs may be selected in this form.
show [groupname] [long]
Display information about the stripe groups associated with the selected file system. If a stripe group name groupname is given only that stripe group’s information will be given. Omitting the groupname argument will display all stripe groups associated with the active file system. Using the long modifier will additionally display detailed information about the disk units associated with displayed stripe groups.
start file_system_name [on hostname_or_IP_address]
Start a File System Manager for the file system file_system_name. When the command is running on an MDC of an HA cluster, the local FSM is started, and then an attempt is made to start the FSM on the peer MDC as identified by the /usr/cvfs/config/ha_peer file. When the optional hostname_or_IP_address is specified, the FSM is started on that MDC only. The file system’s configuration file must be operational and placed in /usr/cvfs/config/<file_system_name>.cfgx before invoking this command. See snfs_config(5) for information on how to create a configuration file for an SNFS file system.
startd daemon [once]
Start the daemon process. For internal use only.
stat
Display the general status of the file system. The output will show the number of clients connected to the file system. This count includes any administrative programs, such as cvadmin. Also shown are some of the static file-system-wide values such as the block size, number of stripe groups, number of mirrored stripe groups and number of disk devices. The output also shows total blocks and free blocks for the entire file system.
stats client_IP_address [clear]
Display read/write statistics for the selected file system. This command connects to the host FSMPM, which then collects statistics from the file system client. The ten most active files by bytes read and written and by the number of read/write requests are displayed. If clear is specified, the stats are zeroed after printing.
stop file_system_name [on hostname_or_IP_address]
Stop the File System Manager for file_system_name. This will shut down the FSM for the specified file system on every MDC. When the optional hostname or IP address is specified, the FSM is stopped on that MDC only. Further operations to the file system will be blocked in clients until an FSM for the file system is activated.
stopd daemon
Stop the daemon process. For internal use only.
up groupname
Up the stripe group groupname. This will restore access to the stripe group.
who
Query client list for the active file system. The output will show the following information for each client:
SNFS I.D. – Client identifier
Type – Type of connection. The client types are:
FSM – File System Manager (FSM) process
ADM – Administrative (cvadmin) connection
CLI – File system client connection. May be followed by a CLI type character: S – Disk Proxy Server; C – Disk Proxy Client; H – Disk Proxy Hybrid Client, a client that has been configured as a proxy client but is operating as a SAN client
Location – The client’s hostname or IP address
Up Time – The time since the client connection was initiated
License Expires – The date that the current client license will expire

EXAMPLES

Invoke the cvadmin command for FSM host k4, file system named snfs1.
spaceghost% cvadmin -H k4 -F snfs1
StorNext File System Administrator

Enter command(s)
For command help, enter "help" or "?".

List FSS

File System Services (* indicates service is in control of FS):
1>*snfs1[0]         located on k4:32823 (pid 3988)

Select FSM "snfs1"

Created           :    Fri Jul 25 16:41:44 2003
Active Connections:    3
Fs Block Size     :    4K
Msg Buffer Size   :    4K
Disk Devices      :    1
Stripe Groups     :    1
Mirror Groups     :    0
Fs Blocks         :    8959424 (34.18 GB)
Fs Blocks Free    :    8952568 (34.15 GB) (99%)
Show all the stripe groups in the file system:
snadmin (snfs1) > show
Show stripe group(s) (File System "snfs1")

Stripe Group 0 [StripeGroup1] Status:Up,MetaData,Journal
  Total Blocks:8959424 (34.18 GB) Free:8952568 (34.15 GB) (99%)
  MultiPath Method:Rotate
    Primary Stripe 0 [StripeGroup1] Read:Enabled Write:Enabled
Display the long version of the StripeGroup1 stripe group:
snadmin (snfs1) > show StripeGroup1 long
Show stripe group "StripeGroup1" (File System "snfs1")

Stripe Group 0 [StripeGroup1] Status:Up,MetaData,Journal
  Total Blocks:8959424 (34.18 GB) Free:8952568 (34.15 GB) (99%)
  MultiPath Method:Rotate
  Stripe Depth:1 Stripe Breadth:16 blocks (64.00 KB)
  Affinity Set:
  Realtime limit IO/sec:0 (~0 mb/sec) Non-Realtime reserve IO/sec:0
    Committed RTIO/sec:0 Non-RTIO clients:0 Non-RTIO hint IO/sec:0
  Disk stripes:
    Primary Stripe 0 [StripeGroup1] Read:Enabled Write:Enabled
      Node 0 [disk002]
Down the stripe group named stripe1:
snadmin (snfs1) > down stripe1
Down Stripe Group "stripe1" (File System "snfs1")

Stripe Group 0 [stripe1] Status:Down,MetaData,Journal
  Total Blocks:2222592 (8682 Mb) Free:2221144 (8676 Mb) (99%)
  Mirrored Stripes:1 Read Method:Sticky
    Primary Stripe 0 [stripe1] Read:Enabled Write:Enabled

FILES

/usr/cvfs/config/*.cfgx

SEE ALSO

cvfs(8), snfs_config(5), fsmpm(8), fsm(8), mount_cvfs(8)

Upgrading Server Services post Mac OS X Upgrade hangs on Updating Wiki Service

If you’re willing to give up your Wiki service data, or if you’ve never used it in the first place, the solution is easy: just delete the /Library/Server/Wiki folder. For example, you can issue the Terminal command:
sudo rm -r /Library/Server/Wiki
and then reboot the computer. You may have to force-kill the Server.app process to be able to reboot.
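A sketch of that force-kill, assuming the process is named “Server” as in stock Server.app installs:

sudo killall -9 Server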
Run the Server App again to restart the configuration process. You’ll end up with all services intact and Server App will create a new default wiki configuration for you.

Use the terminal to enable or disable sleep functions in OS X

We all know how to use the System Preferences to enable and disable system sleep in OS X, but what about remotely via SSH or locally in the Terminal?
Also, what about preventing users from being able to put a system to sleep via the Apple menu in OS X?
Use Terminal to prevent system from going to sleep when idle:
sudo systemsetup -setcomputersleep Never
Use Terminal to set a predetermined idle time to put the system to sleep:
sudo systemsetup -setcomputersleep 60
Use Terminal to get the current sleep settings:
sudo systemsetup -getcomputersleep
Use Terminal to disable the sleep function via the Apple Menu:
sudo defaults write /Library/Preferences/SystemConfiguration/com.apple.PowerManagement SystemPowerSettings -dict SleepDisabled -bool YES
Use Terminal to re-enable the sleep function via the Apple Menu:
sudo defaults write /Library/Preferences/SystemConfiguration/com.apple.PowerManagement SystemPowerSettings -dict SleepDisabled -bool NO
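To confirm the current state of the key, you can read it back (the key only exists once it has been set at least once):

sudo defaults read /Library/Preferences/SystemConfiguration/com.apple.PowerManagement SystemPowerSettings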
These functions can be very handy for Xsan-connected Macs where you have unsupported storage subsystems that may be affected by Macs entering sleep mode.

Useful Xsan administrative commands for Mac OS X MDCs

  • cvadmin – Administer an Xsan File System (show volumes, stop volumes, start volumes, view storage pools, manage quotas)
  • cvaffinity – List or set affinities for storage pools
  • cvcp – Xsan copy utility (use at your own risk)
  • cvdb – Xsan debugger
  • cvdbset – Used to control the Xsan debugger
  • cvfail – Fail volumes over between metadata controllers
  • cvfsck – Check consistency and repair Xsan volumes
  • cvfsdb – FileSystem debugging tool
  • cvfsid – Display system identification information (useful when integrating StorNext clients)
  • cvgather – Compile debugging information for Xsan
  • cvlabel – Label LUNs for use with Xsan
  • cvmkdir – Directory creation tool for Xsan
  • cvmkfile – File creation tool for Xsan
  • cvmkfs – File system (volume) creation tool for Xsan
  • cvuntrespass – Recover files that trespass into restricted areas (like when you got caught jumpin’ the fence of that sorority house back in the day)
  • cvupdatefs – Make a change to an existing volume
  • cvversions – Obtain patch level and versioning information for the system, Xsan and when the patches were compiled.
  • fsm – Volume controller process
  • fsmpm – Volume manager that runs on each node
  • mount_acfs – Xsan mounter
  • sndiskmove – Migrate the contents of one LUN to another LUN
  • sndpscfg – Disk proxy server config tool
  • snfsdefrag – Defragment the volume
  • snmetadump – Save and/or process metadata
  • snmetatar – Backup metadata to tar
  • xsanctl – Xsan control tool; allows you to quickly and easily ping (check a client), mount/unmount volumes and check that volumes and disks haven’t changed, all client-side (see the examples after this list)
  • xsand – Xsan client daemon, reads the /Library/Filesystems/Xsan/config/automount.plist and mounts volumes as indicated in the property list
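As a quick illustration of the last two entries, a few common xsanctl invocations (MyVolume is a placeholder volume name; subcommand availability varies by OS X release):

sudo xsanctl ping             # check that the local Xsan daemon is responsive
sudo xsanctl mount MyVolume   # mount a volume by name
sudo xsanctl unmount MyVolume # unmount it again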

Notes and usage for the CVADMIN utility:

Of all the command line tools available to assist you in the management of an Xsan volume, cvadmin serves as the primary command line tool to interface with, troubleshoot, and ultimately administer your Xsan. It provides capabilities beyond what is available through the Apple-provided GUI tools, which can be extremely useful for resolving the more complex problems that can crop up. Given the reported issues with Xsan Admin occasionally misreporting information, cvadmin is one of the tools that can help to make your experience administering an Xsan much less frustrating.


The basics:
Located at /Library/Filesystems/Xsan/bin/cvadmin, this tool can be run either interactively or non-interactively with the ‘-e’ flag. For the purposes of this article, we’ll primarily be using the interactive mode of cvadmin. In its most basic form, cvadmin is started with no arguments, but it must be run as root. If you fire up cvadmin from a different user account, the tool will run, but you will not be able to access your filesystems. To get started we’ll launch the program with elevated privileges:

fig. 1
$ sudo /Library/Filesystems/Xsan/bin/cvadmin
Xsan Administrator
Enter command(s)
For command help, enter "help" or "?".

List FSS

File System Services (* indicates service is in control of FS):
1>*MyVolume[0] located on 192.168.56.5:51520 (pid 512)
2> MyVolume[1] located on 192.168.56.6:51520 (pid 509)

Select FSM "MyVolume"

Created : Tue Jan 13 15:33:57 2009
Active Connections: 1
Fs Block Size : 16K
Msg Buffer Size : 4K
Disk Devices : 2
Stripe Groups : 2
Fs Blocks : 61277616 (935.02 GB)
Fs Blocks Free : 61006893 (930.89 GB) (99%)


Xsanadmin (MyVolume) >

When cvadmin is first invoked it displays all of the valid File System Services (which in this context means volumes per metadata controller) and selects our only volume. In this particular instance, you see that there are two entries for “MyVolume”. This is completely normal as you should see one entry per volume per metadata controller. In this case, we have one volume and 2 metadata controllers, so we have 2 entries. The asterisk denotes the active FSS (or active metadata controller), in this case 192.168.56.5.

In order to perform any worthwhile tasks using this tool, we next need to ‘select’ a volume. In this particular instance there is only one volume, so cvadmin ‘selected’ the active volume and displayed its statistics. But in environments with multiple volumes, you will need to ‘select’ a volume before proceeding. The cvadmin prompt will always display the active volume (in this case ‘MyVolume’), so there is no confusion as to which is being administered:

Xsanadmin (MyVolume) >

If more than one volume existed, we might want to operate on a different volume; to do so, we can simply run the command “>select volumeName”, which selects the volume “volumeName” and outputs statistics for it. Alternatively, the select command can be run with no arguments to output a list of all available File System Services.

fig 2.
Xsanadmin (MyVolume) > select
List FSS

File System Services (* indicates service is in control of FS):
1>*MyVolume[0] located on 192.168.56.5:51520 (pid 512)
2> MyVolume[1] located on 192.168.56.6:51520 (pid 509)
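Incidentally, the same commands can be issued non-interactively using the ‘-e’ flag mentioned earlier, which is handy for scripting; a quick sketch using the documented -F and -e options (volume name is a placeholder):

sudo /Library/Filesystems/Xsan/bin/cvadmin -F MyVolume -e 'show long'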

Practical Administration:
OK, so we now know how to run the tool and select a particular volume, but what else can we do with it? Well, the answer is “quite a bit”; for the most part, my day-to-day Xsan administration happens only in cvadmin – the only reason I typically fire up Xsan Admin is to help end users who rely on it. A full list of commands is available in the cvadmin man page, or by typing “help” in an interactive cvadmin session. That being said, the commands listed below are my most frequently used:

>who

Query client list for the active file system. The output will show the following information for each client:
SNFS I.D. – Client identifier
Type – Type of connection. The client types are:
FSM – File System Manager (FSM) process
ADM – Administrative (cvadmin) connection
CLI – File system client connection. May be followed by a CLI type character: S – Disk Proxy Server; C – Disk Proxy Client; H – Disk Proxy Hybrid Client, a client that has been configured as a proxy client but is operating as a SAN client
Location – The client’s hostname or IP address
Up Time – The time since the client connection was initiated
License Expires – The date that the current client license will expire

>stats

Prints out volume statistics.

>stop/start MyVolume
>stop/start MyVolume on 192.168.5.5

The stop and start commands are equivalent to starting or stopping the volume in Xsan Admin. However, by specifying a hostname/IP, we can stop file system services on that particular MDC only, which can be handy for maintenance purposes.

>fail MyVolume

This will fail over the volume “MyVolume” and initiate an FSS vote amongst your metadata controllers. The metadata controller hosting this volume with the highest failover priority should win the election. If no failover candidate is available, the volume will fail back to the host that was originally hosting it.

>fsmlist

This will output a list of FSM processes on the machine that is selected, which is useful when determining which volumes it is capable of hosting as a metadata controller.

>repof

This will generate an open file report at /Library/Filesystems/Xsan/data/MyVolume/open_file_report.txt. The report includes a slew of information, but noticeably absent is the actual file name. Arg! It does offer an inode number for each file in question though, so you can use a command such as `find /Volumes/MyVolume -inum X` to determine the actual file from the published inode number. The >repof command can be very useful when attempting to determine why a client will not unmount a volume.
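For example, to resolve a hypothetical inode number 12345 from the report back to an actual path:

sudo find /Volumes/MyVolume -inum 12345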

Troubleshooting:
There are also a number of options to help with more complex troubleshooting.

>show
>show long

This will output information about the stripe groups/storage pools used by this volume. It is useful for cross-referencing index numbers output in system.log to human-readable storage pool names. It also provides various statistics and configuration details, such as stripe group role, corresponding LUNs, affinity tags, multipath method, and other useful bits of information.

>paths

This will output a list of LUNs visible to the node and the corresponding HBA port used to communicate with that LUN. This option can be helpful when you are getting those pesky “stripe group not found” errors.

>debug 0x01000000

This will output I/O latency numbers to /Library/Filesystems/Xsan/data/MyVolume/log/cvlog immediately (this is otherwise done every hour). The key figure to look at in this output is the “sysavg” number for “PIO HiPriWr SUMMARY”. If your metadata is hosted on an Xserve RAID volume, this number should be below 500ms. If you’re using Promise-based storage, the active/active controller setup introduces additional latency, and you will see numbers in the 805-1000ms range.
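To pull just those summary lines back out of the log, something like the following works (volume name is a placeholder):

grep "PIO HiPriWr SUMMARY" /Library/Filesystems/Xsan/data/MyVolume/log/cvlog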

>latency-test

Use this command to run latency tests between the FSM and clients. It can be used to isolate problematic clients on the SAN.

>activate MyVolume 192.168.56.5

Use this command to activate FSS services on the specified IP/hostname. Alternatively, you can leave off the IP and it will activate on the local server (if applicable). This command can be run on an MDC that is not showing the appropriate FSS services as available. If you see errors to the effect that an MDC is on ‘standby’, activating the volume on the respective server will often address the issue.

When to Use Another Tool:
The cvadmin tool is very useful when troubleshooting metadata controller behavior. But cvadmin isn’t used when you want to perform Xsan setup or client management operations. To label LUNs you would use cvlabel. To mount and unmount volumes you would likely use the new xsanctl tool or ‘mount -t acfs’. To perform defrag operations and volume maintenance you would use the snfsdefrag and cvfsck tools, respectively. While you can add serial numbers and create volumes from the command line, it is probably much easier to continue performing these operations through the Xsan Admin GUI tool.

All in all, cvadmin can be a very useful tool. It is fast and can help to ease the administrative burden of managing an Xsan once you get used to it. Unlike the GUI, the information that it displays can always be trusted to be an accurate representation of the state of your SAN. It can also provide more insight when you’re troubleshooting and help you to become a more lucid Xsan administrator.

Archive and restore Open Directory data

You archive and restore Open Directory data using the Server app or the command line. To archive or restore a copy of your Open Directory data using the command line, use the slapconfig command. You can archive a copy of the data while the Open Directory master is in service.

The following files are archived:

  • The LDAP directory database (includes password data) and configuration files
  • Kerberos configuration files
  • Keychain data needed by Open Directory

Archives are only used by Open Directory masters. If a replica develops a problem, you can remove it as a replica from the Open Directory master, set up the replica as if it were a new server (with a new host name), then set it up again as a replica of the same master.

Important: Carefully safeguard the archive media that contains a copy of the Open Directory password database, the Kerberos database, and the Kerberos keytab file. The archive contains sensitive information. Your security precautions for the archive media should be as stringent as for the Open Directory master server.

If you enable Time Machine on the server, directory and authentication data is automatically archived.

Archive Open Directory data using the Server app

  1. In the Open Directory pane, click Servers.
  2. Choose Archive Open Directory Master from the Action pop-up menu (looks like a gear).
  3. In the Archive File field, enter or choose the path to the folder where you want the Open Directory data archived.
  4. Enter a password for the archive, then click Next.
  5. Confirm your settings, then click Archive.

Archive Open Directory data using the command line

You can archive Open Directory data from the command line.

To archive Open Directory data, open the Terminal app (located in the Other folder in Launchpad), then enter the following command:

$ sudo slapconfig -backupdb /full/path/to/archive

For example, /full/path/to/archive could be /Volumes/Data/myODArchive.

Enter a password to encrypt the disk image. Encrypting the image protects the sensitive data in the Open Directory database.

The archive file will have the file extension “.sparseimage”.
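To sanity-check the resulting archive, hdiutil can inspect the sparse image (using the path from the example above; it will prompt for the password if the image is encrypted):

hdiutil imageinfo /Volumes/Data/myODArchive.sparseimage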

Restore Open Directory data using the Server app

  1. In the Open Directory pane, turn Open Directory on.
  2. Select “Restore Open Directory domain from an archive,” then click Next.
  3. In the Archive File field, enter or choose the path to the Open Directory archive file.
  4. Enter the password for the archive, then click Next.
  5. Click Restore.

Restore Open Directory data using the command line

You can restore Open Directory data from the command line.

To restore Open Directory data, open the Terminal app (located in the Other folder in Launchpad), then enter the following command:

sudo slapconfig -restoredb /full/path/to/archive.sparseimage

For example, /full/path/to/archive.sparseimage could be /Volumes/Data/myODArchive.sparseimage. If you entered a password to encrypt the data when you archived it, enter that password when prompted.