Xsan Fresh Install on macOS Sequoia

In this article I will cover the steps I used to build a fresh Xsan deployment on freshly installed servers and clients running macOS Sequoia – which, unlike earlier macOS versions, no longer has a native GUI Server app.

All configuration is now performed via the CLI in Terminal.

Prerequisites:

  • At least one Mac to act as the “server” – the roles on this will include OpenDirectory Master and Xsan MDC.
  • At least one Mac to act as the “client” – this is the workstation where users will use the Xsan volume – it could also be another Mac acting as a server that users don’t use as a desktop
  • Each Mac is to have at least one fibre channel HBA such as a Promise SANLink FC or ATTO ThunderLink (these are the Thunderbolt variants – ATTO also make PCIe HBAs for Mac too)
  • A fibre channel storage subsystem such as a Promise VTrak or Vess R3600 series
  • A fibre channel switch (if your subsystem will utilise more than one controller and/or will have more than 4 devices connected to it)
  • Compatible SFP+ adapters
  • Fibre leads – typically OM3 or OM4
  • Network switch to connect each Mac (not WiFi)
  • Administrator access to each Mac

 

Setting Up Storage:

Ideally, we need at least two storage LUNs presented by the storage subsystem. The first will usually just be two mirrored disks, to be used for the metadata & journal; then at least one more LUN, made up of a number of disks, will be used to store data.

In my lab, I have a Promise VTrak E630d that has two controllers and 16x 2TB SAS disks. Connected to this is a Promise J630d with dual expansion controllers and 16x 3TB SAS disks.

The storage is configured as such:

  • LUN 0 = 2x 2TB disks in RAID 1 mirror (total logical size: 2TB)
  • LUN 1 = 14x 2TB disks in RAID 10 array (total logical size: 14TB)
  • LUN 2 = 16x 3TB disks in RAID 10 array (total logical size: 24TB)

In my testing I found it simpler to make LUN0 the smaller LUN that will be used for both metadata and the journal – I will go into more detail about this further down…

 

LUN masking is set to allow access from any WWN to all three LUNs:

 

Setting Up Physical Connections:

Next step is to make all the physical connections which include:

  • SAS between the VTrak primary subsystem and a storage expansion unit – this will allow me to add 16 more disks to the 16 disks internal to the subsystem
  • Fibre Channel links between the VTrak subsystem and the FC switches
  • Fibre Channel links between the SANLink Thunderbolt FC HBAs and the FC switches
  • Thunderbolt connections between each SANLink and a Mac
  • Ethernet connections between all devices

In my lab, the topology is set out very similarly to the below diagram:

Technical Details of the Macs:

Just some notes on the Macs I have in use. As I am doing this in a lab environment, I was conscious of my own costs and wasn’t about to spend good money on a couple of newer Macs. Instead, I upcycled a pair of 2011 Mac Minis to act as the Xsan servers. I upgraded the internal components so they both have 16GB of RAM and a Samsung 870 EVO 500GB SSD – this is about as good as it will get from a hardware perspective for these. They are otherwise identical with an Intel i5 CPU.

Out of the box, these 2011 Mac Minis do not support macOS Sequoia, so I leveraged Open Core Legacy Patcher to get Sequoia running!

The client Mac is a 2020 MacBook Pro 13″ with two Thunderbolt ports, 8GB RAM and 250GB SSD – nothing is user upgradable on this model, but it does natively support macOS Sequoia.

The MacBook Pro also has a USB-C Ethernet adapter to make a physical connection to the switch, as WiFi is not recommended at all for Xsan traffic. I’ll set this to DHCP and have it connected to the SANLink3 FC16 so that it’s just one connection for all things Xsan related.

The two Mac Minis were freshly installed – no upgrade, new SSDs.

 

General Setup:

I configured static IP addresses on the two Mac Minis and set their hostnames – the details are listed below, with a CLI sketch after the list.

  • Hostname of the primary server: XSAN-MDC
  • Hostname of the secondary server: XSAN-BDC
  • IP Address of the primary server: 10.172.54.31/24
  • IP Address of the secondary server: 10.172.54.32/24
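For reference, static addressing like the above can also be set from Terminal with networksetup – a minimal sketch only, assuming the relevant network service is named “Ethernet” and that 10.172.54.1 is the router (confirm the service name first with networksetup -listallnetworkservices):

sudo networksetup -setmanual "Ethernet" 10.172.54.31 255.255.255.0 10.172.54.1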

The admin user accounts on both Mac Minis share the same username and password: localadmin | Password123

I installed the latest version of the SANLink2 FC drivers and the SANLink Utility on the two Mac Minis. The driver required access to be granted in the System Settings security area once installation completed (it prompted for this), followed by a reboot. The SANLink Utility is used to configure things like port speed & operation mode, and to update the firmware. All I needed to do here was ensure the SANLinks hadn’t arrived to me (from an eBay seller) with ports hard set to 4Gb or something silly.

Once the two Macs had the driver installed & rebooted, the link LEDs on the SANLinks lit up as they were already connected to the FC switch & storage. The LUNs showed up as uninitialised disks in Disk Utility and prompted for formatting. Ignore this, as they can’t be touched yet – the Xsan configuration steps will take care of them. The important thing here is that the two Mac Minis could see all three LUNs, the sizes were representative of what the configuration on the VTrak indicated, and Disk Utility could show the device path. For me, the device paths were:

  • /dev/rdisk2
  • /dev/rdisk3
  • /dev/rdisk4

 

Configuration:

Now we get to the fun part – configuration – it’s all pretty much Terminal commands from here on out…

But before we do, we need to properly configure our local DNS. Xsan is heavily reliant on properly configured DNS on the local network.

Ideally, our Xsan servers will sit in a domain.private zone, hosting their own OpenDirectory services. My DNS servers are local AD servers with a domain.local DNS zone. On my local AD domain controller, I created a new DNS zone called xsan.private.

I created the two static host records for the two Mac Minis – including reverse DNS (setting up reverse DNS is important here – do not overlook this step).

 

So on the primary server, launch Terminal.

We need to configure fully qualified hostnames for the two servers.

Enter the below command on the primary server:

sudo scutil --set HostName xsan-mdc.xsan.private

On the second server I entered:

sudo scutil --set HostName xsan-bdc.xsan.private

We can verify the host name changes on each server using:

sudo scutil --get HostName

Now back on the primary server we need to export out the LUN labels, change them to something meaningful, and import the new labels back.

sudo cvlabel -c >> /Users/localadmin/Desktop/cvlabels.txt

Open the text file on the desktop. You’ll see the entries for each LUN and they will have names like:

CvfsDisk_UNKNOWN /dev/rdisk2 3906231263 EFI 0 # host 2 lun 0 sectors 3906231263 sector_size 512 inquiry [Promise VTrak E630f 1018] serial 22D8000155D6B8D6
CvfsDisk_UNKNOWN /dev/rdisk3 27343729631 EFI 0 # host 2 lun 1 sectors 27343729631 sector_size 512 inquiry [Promise VTrak E630f 1018] serial 22AD0001552E445C
CvfsDisk_UNKNOWN /dev/rdisk4 46874978271 EFI 0 # host 2 lun 2 sectors 46874978271 sector_size 512 inquiry [Promise VTrak E630f 1018] serial 2260000155002184

I changed the labels at the start of each line to descriptions that made more sense like:

metadata1 /dev/rdisk2 3906231263 EFI 0 # host 2 lun 0 sectors 3906231263 sector_size 512 inquiry [Promise VTrak E630f 1018] serial 22D8000155D6B8D6
datapool1 /dev/rdisk3 27343729631 EFI 0 # host 2 lun 1 sectors 27343729631 sector_size 512 inquiry [Promise VTrak E630f 1018] serial 22AD0001552E445C
datapool2 /dev/rdisk4 46874978271 EFI 0 # host 2 lun 2 sectors 46874978271 sector_size 512 inquiry [Promise VTrak E630f 1018] serial 2260000155002184

Don’t forget to save the file. Then I had to import the new names back in using:

sudo cvlabel /Users/localadmin/Desktop/cvlabels.txt

This will prompt with a Y / N for each entry

Verify the LUN names are registered and that none show errors or a status such as unusable, using the commands:

sudo cvlabel -c

and

sudo cvlabel -la

While we’re testing stuff, let’s ensure DNS is now working properly.

From each of the Mac Minis, each [Windows] Domain controller, and our MacBook Pro, we’ll check we get proper name resolution against each of the Mac Minis using:

nslookup xsan-mdc.xsan.private

nslookup xsan-bdc.xsan.private
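Since reverse DNS matters to Xsan, it’s also worth confirming the PTR records resolve – nslookup against each IP address should return the matching hostname:

nslookup 10.172.54.31

nslookup 10.172.54.32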

 

Now it’s time to prep the OD Master & the SAN itself (as in the Storage Area Network, as opposed to the Xsan volume) on the primary server.

sudo xsanctl createSan LABXSAN createMaster --cert-auth-name xsanlabcert --cert-admin-email info@askflorey.com --user localadmin --account localadmin --pass 'Password123'

If successful, you’ll see output not too dissimilar from the below:

2025-06-03 22:41:33.996 xsanctl[50653:1029029] create OD master succeed
Unload failed: 5: Input/output error
Try running `launchctl bootout` as root for richer errors.
2025-06-03 22:41:37.244 xsanctl[50653:1029029] buildSanConfig started
2025-06-03 22:41:37.244 xsanctl[50653:1029029] buildSanConfig about to check LDAP
2025-06-03 22:41:37.247 xsanctl[50653:1029029] buildSanConfig: getXsanConfig said nothing
2025-06-03 22:41:37.922 xsanctl[50653:1029029] buildSanConfig step 4 with {
}
2025-06-03 22:41:37.922 xsanctl[50653:1029029] New config time
2025-06-03 22:41:37.922 xsanctl[50653:1029029] We are a new controller inside of buildGlobalConfig
2025-06-03 22:41:37.924 xsanctl[50653:1029029] Hosted set is {(
)}
2025-06-03 22:41:37.924 xsanctl[50653:1029029] needStart is {(
)}
2025-06-03 22:41:37.925 xsanctl[50653:1029029] buildSanConfig saving config {
    globals =     {
        certSetRevision = "DBF0471F-AA39-4B4E-8E2F-A7437E1DE836";
        controllers =         {
            "4E0961C1-D862-5F13-B410-EB23CEC01328" =             {
                IPAddress = "10.172.54.31";
                hostName = "xsan-mdc.xsan.private";
            };
        };
        fsnameservers =         (
                        {
                addr = "10.172.54.31";
                uuid = "4E0961C1-D862-5F13-B410-EB23CEC01328";
            }
        );
        notifications =         {
            FreeSpaceThreshold = 20;
        };
        revision = "00000000-0000-0000-0000-000000000000";
        sanAuthMethod = "auth_secret";
        sanConfigURLs =         (
            "ldaps://xsan-mdc.xsan.private:389"
        );
        sanConfigVersion = 100;
        sanName = LABXSAN;
        sanState = active;
        sanUUID = "B898F1B7-AC5D-41CF-97C9-43DCB605E338";
        sharedSecret = "********";
    };
    volumes =     {
    };
}
SAN successfully created

I’m not too certain why I received an input/output error at the start, but in the end this did succeed. Almost any time I have worked on OD via the CLI, I have seen similar errors at the start of the output.

Now on the secondary server, we need to add it as an OD replica and join it to the SAN using:

sudo xsanctl joinSan LABXSAN --controller-name xsan-mdc.xsan.private --controller-user localadmin --controller-pass 'Password123' createReplica --master xsan-mdc.xsan.private --account localadmin --pass 'Password123'

Again, we should see a successful output:

2025-06-03 22:45:32.675 xsanctl[50423:998078] create OD replica succeed
Unload failed: 5: Input/output error
Try running `launchctl bootout` as root for richer errors.
2025-06-03 22:45:36.239 xsanctl[50423:998078] buildSanConfig started
2025-06-03 22:45:36.239 xsanctl[50423:998078] buildSanConfig about to check LDAP
2025-06-03 22:45:36.241 xsanctl[50423:998078] buildSanConfig step 4 with {
    globals =     {
        certSetRevision = "DBF0471F-AA39-4B4E-8E2F-A7437E1DE836";
        controllers =         {
            "4E0961C1-D862-5F13-B410-EB23CEC01328" =             {
                IPAddress = "10.172.54.31";
                hostName = "xsan-mdc.xsan.private";
            };
        };
        fsnameservers =         (
                        {
                addr = "10.172.54.31";
                uuid = "4E0961C1-D862-5F13-B410-EB23CEC01328";
            }
        );
        notifications =         {
            FreeSpaceThreshold = 20;
        };
        revision = "2874CE08-6D64-4F77-9700-D9588E48B0E6";
        sanAuthMethod = "auth_secret";
        sanConfigURLs =         (
            "ldaps://xsan-mdc.xsan.private:389"
        );
        sanConfigVersion = 100;
        sanName = LABXSAN;
        sanState = active;
        sanUUID = "B898F1B7-AC5D-41CF-97C9-43DCB605E338";
        sharedSecret = "********";
    };
    volumes =     {
    };
}
2025-06-03 22:45:36.242 xsanctl[50423:998078] We are a new controller inside of buildGlobalConfig
2025-06-03 22:45:36.244 xsanctl[50423:998078] Hosted set is {(
)}
2025-06-03 22:45:36.244 xsanctl[50423:998078] needStart is {(
)}
2025-06-03 22:45:36.244 xsanctl[50423:998078] buildSanConfig saving config {
    globals =     {
        certSetRevision = "7FCA87B4-142E-4C5C-BCB7-4FE0F904A580";
        controllers =         {
            "4E0961C1-D862-5F13-B410-EB23CEC01328" =             {
                IPAddress = "10.172.54.31";
                hostName = "xsan-mdc.xsan.private";
            };
            "66C07B8A-6DB7-524C-9785-660EB2382DAE" =             {
                IPAddress = "10.172.54.32";
                hostName = "xsan-bdc.xsan.private";
            };
        };
        fsnameservers =         (
                        {
                addr = "10.172.54.31";
                uuid = "4E0961C1-D862-5F13-B410-EB23CEC01328";
            },
                        {
                addr = "10.172.54.32";
                uuid = "66C07B8A-6DB7-524C-9785-660EB2382DAE";
            }
        );
        revision = "2874CE08-6D64-4F77-9700-D9588E48B0E6";
        sanAuthMethod = "auth_secret";
        sanConfigURLs =         (
            "ldaps://xsan-mdc.xsan.private:389",
            "ldaps://xsan-bdc.xsan.private:389"
        );
        sanConfigVersion = 100;
        sanName = LABXSAN;
        sanState = active;
        sanUUID = "B898F1B7-AC5D-41CF-97C9-43DCB605E338";
        sharedSecret = "********";
    };
    volumes =     {
    };
}
SAN successfully joined

Now we can get into creating the Xsan volume.

Initially I attempted to follow the documented process to create the Xsan volume with a shared metadata & journal LUN plus the two data LUNs combined into one large volume (14TB plus 24TB, totalling 38TB), using the command below – but it kept coming out as either 4TB or 28TB, and for the life of me I’m not certain why this was the case:

sudo xsanctl addVolume XsanVolume --defaultFirstPool --addLUN metadata1 --storagePool DataPool --data --addLUN datapool1 --addLUN datapool2

Note: earlier I mentioned creating the first LUN as the RAID 1 mirror for the metadata & journal – this is so the --defaultFirstPool switch can be used to allow both the metadata & journal to share the same LUN. If I attempted to use, say, LUN 2 as the metadata & journal (assuming LUN 2 was the 2TB mirror), it would fail – also not certain why at this stage.

After stopping & dropping the Xsan volume (using the commands below) and recreating it a few times with the same command, still without getting the full 38TB, I opted to create the volume with just the 14TB LUN first, then add the 24TB LUN later.

To stop the volume – which takes it offline for all servers & clients for maintenance purposes without data destruction, use:

sudo xsanctl stopVolume <volumeName>, eg:

sudo xsanctl stopVolume XsanVolume

To destroy the volume, which removes both the volume configuration and any data you may have on there, use:

sudo xsanctl dropVolume <volumeName>, eg:

sudo xsanctl dropVolume XsanVolume

Now, let’s create the Xsan volume with just the 14TB data LUN first off:

sudo xsanctl addVolume XsanVolume --defaultFirstPool --addLUN metadata1 --storagePool DataPool --data --addLUN datapool1

Now it’s working at 14TB!

(I also tested using only datapool2 to get a 24TB Xsan volume successfully)

Note that each time we create the Xsan volume, after a short while (maybe 10 seconds or so) the disk appears mounted on the desktop of BOTH servers, with a fibre channel icon on it. Disk Utility, on the other hand, still only shows the LUNs and has no capability whatsoever to display the virtual Xsan volume.
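If you want to confirm the capacity from Terminal rather than from the Finder, df works fine here – a quick check, assuming the volume mounts at /Volumes/XsanVolume:

df -h /Volumes/XsanVolume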

 

Time to add an additional 24TB LUN to the storage pool:

Let’s stop (but not drop) the volume:

sudo xsanctl stopVolume XsanVolume

Now let’s add the final LUN:

sudo xsanctl editVolume XsanVolume --storagePool DataPool1 --addLUN datapool2

Once this has completed, the Xsan volume will be reactivated and mounted on the desktop of both servers again!

And we measure – 38TB! WOOHOO!

Note that adding additional LUNs to an Xsan volume is a one-way affair – removing a LUN would require destruction of the entire volume configuration and ALL data contained within.

 

Some preliminary disk speed testing:

From the 2011 Mac Mini, the below screen shot shows what speeds I achieved against the Xsan volume – definitely nothing amazing given just how many spinning disks there are in the pool:

Note that when I performed this test, I only had one of the two FC switches connected (due to reasons) AND the 2011 Mac Mini only has Thunderbolt 1, so it’s limited to a maximum line speed of 10Gbps. While that should be plenty, it’s close to the limit of my 8Gbps fibre channel links, and without multipathing (due to the single link) I don’t think I will see speeds greater than that.
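As a rough CLI cross-check of the GUI result (whichever tool the screenshot above is from), a simple sequential write with dd gives a ballpark figure – a sketch only, with a 10GB test file whose path is just an example; remove the file afterwards:

dd if=/dev/zero of=/Volumes/XsanVolume/speedtest.tmp bs=1m count=10240

rm /Volumes/XsanVolume/speedtest.tmp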

When I configure my MacBook Pro with its Thunderbolt 3 SANLink, we’ll see if I get any better performance from it – especially with both FC switches back online again!

 

Connecting Clients to our SAN:

Now it’s time to connect the client Macs.

Physically connect the client Macs to their Thunderbolt HBAs and to the same LAN the MDC & BDC are on, then install the required drivers for the HBA, followed by a reboot.

Once the Mac clients boot up, they too should see the LUNs in Disk Utility just like on the MDC & BDC:

 

In Terminal on the MDC (or BDC), run the following command:

sudo xsanctl exportClientProfile

This will export the client profile to the home directory of the account you’re logged into the server with (in my case, localadmin).

Now copy this to each of the Mac clients or use a provisioning tool, like MDM.
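If you’re copying it manually over SSH, scp does the job – a sketch only, assuming Remote Login is enabled on the client; substitute the actual exported profile filename from your home directory and the client’s real address (both shown here are just examples):

scp ~/LABXSAN.configprofile localadmin@10.172.54.50:~/Desktop/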

If you manually copied the file to the Mac client, double-click on it. This will simply load it into System Settings:

 

Head to System Settings and click on Profile Downloaded in the left side menu – this will take you to the Profiles management in the General section. Click the + button:

 

You’ll be shown a preview of the contents within the profile – this should show the name of the SAN (not the Xsan volume name), the paths to the server(s), and the Auth method will show auth_secret (which contains a key or passphrase hidden from view).

Click Install. Click Continue when prompted

 

You’ll be prompted for credentials – in my case, this is the localadmin account of the server (or another admin account configured during the Xsan configuration above):

 

You’ll be prompted again to Install, so click Install. You’ll then be prompted for the local admin account of the Mac client – authenticate this, and if everything is configured correctly and DNS is working, the Xsan volume will show up on the desktop:
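If the volume doesn’t appear on the desktop by itself, it can also be mounted manually from Terminal on the client:

sudo xsanctl mount XsanVolume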

 

Let’s test the disk speed:

 

We get some okay results here. I did expect this to run a bit better, but maybe some further tuning will help…

This is based on a Thunderbolt 3 connection between the Mac and the HBA. The HBA is capable of 16G, but has 8G fibre links and everything else is 8G. I have both FC switches connected in this test and the topology is back to looking just like the diagram near the start.

 

Soon I will be looking to obtain a pair of 16G FC switches, a load of 16G SFP+ modules, more 16G HBAs and a storage subsystem capable of 16G FC. I will do a new lab based on 16G FC when I have all the equipment ready, but will likely then repurpose the 16G FC storage subsystem & switches for a hypervisor SAN. I will try to run these 16G labs first with the same Mac Minis on Thunderbolt 1 and 8G fibre channel, but with the clients and the rest of the environment running at 16G; then I will see about upgrading the Xsan MDC (and maybe BDC) to 16G FC as well, to see what impact (if any) this has on the clients’ disk speed test results.

 

References:

https://developer.apple.com/support/downloads/Xsan-Management-Guide.pdf

 

WordPress | Installing plugin updates requesting FTP login details

Recently I migrated a large number of WordPress sites from cPanel to Cloud Panel. Once I got all the sites operational, I decided to go and install pending plugin updates on a number of them.

Upon updating, WordPress requested FTP details – something I hadn’t come across before as it should be able to install them without requiring FTP access.

I checked file & directory permissions – correct!

In the end, a few posts online led me to a couple of settings – three lines in total – to add to the wp-config.php file to address this issue:

Edit the wp-config.php and find a good place to add three lines of code – maybe where the rest of the define statements are located.

First line is to force WordPress to update directly instead of deciding on the fly how to access the file system:

define('FS_METHOD', 'direct');

Next two lines force permissions on files & directories when working on them:

define('FS_CHMOD_DIR',0755);
define('FS_CHMOD_FILE',0644);

Save the changes to the wp-config.php file, then refresh the updates page. Now you should be able to install the updates directly in the web interface without requiring FTP access.

ADPREP Error When Promoting New Domain Controller

When attempting to promote a new domain controller into an existing Active Directory environment, an error was encountered that hadn’t previously been seen.

Error: The DN is CN=Send-As,CN=Extended-Rights,CN=Configuration,DC=. 
The error logs were located: C:\Windows\debug\adprep\logs\

Looking in this folder, find the file ending in .87 and open it in Notepad.

Note the Attribute 0 appliesTo value

On a functional domain controller, launch ADSI Edit and connect to Configuration

Inside of CN=Extended-Rights, edit the “appliesTo” attribute for the below list of entries to remove the value data mentioned in the log file (a PowerShell alternative is sketched after the list).

List of items to edit:

CN=Send-As,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld 
CN=Receive-As,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld 
CN=Personal-Information,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld 
CN=Public-Information,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld 
CN=Allowed-To-Authenticate,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld 
CN=MS-TS-GatewayAccess,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld 
CN=Validated-SPN,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld
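If you’d rather script this than click through ADSI Edit, an elevated PowerShell session with the RSAT ActiveDirectory module can strip the offending value – a sketch only, where the value is the one noted from the .87 log file and the DN is adjusted to your domain; repeat for each entry in the list above:

Set-ADObject -Identity "CN=Send-As,CN=Extended-Rights,CN=Configuration,DC=domain,DC=tld" -Remove @{appliesTo="<value from the log file>"}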

Once these have been done, attempt to run ADPREP again.

Perform AD Health Check & Clean Up – Before Adding New DC

This guide is intended to demonstrate the key tasks to perform before adding a new Domain Controller to Active Directory, or, when performing extended maintenance on older AD environments where domain controllers have been added & removed in its lifetime.

Get a list of all Domain Controllers using an administrative level PowerShell session:

Get-ADDomainController -Filter * | ft Name,Hostname,OperatingSystem,IPv4Address

Confirm in Active Directory Users and Computers (ADUC) that this list corresponds with the Domain Controllers OU (typical in simple environments that they all live in the same OU)

Note there are three domain controllers, all Global Catalogs, located in the same site.

Next, launch ADSI Edit

Right-click ADSI Edit in the top of the left tree, click Connect to…

Under the drop down list labelled “Select a well known Naming Context:” select Configuration

Click OK

Expand out:

Configuration > CN=Configuration > CN=Sites > CN=(site name) > CN=Servers

Confirm that this list matches that of what you got out of PowerShell earlier

Check each of the CN=Sites for all other CN=Servers – you’re confirming all sites for the presence of their DC (if they have them) or for any remnants of previous domain controllers still present in ADSI Configuration…

Example: This AD environment has multiple sites, and the site named DRFS54 no longer has any domain controllers, so after demoting them, a clean up was performed to ensure there are no remnants of them…

Next, launch the DNS Manager

You’ll need to expand out EVERY folder for both forward and reverse lookup zones on the left – you’re going to comb through EVERY one of these – checking for previous, no longer existent domain controllers listed, and ensuring that each zone has only current domain controllers listed…

So in the various AD-specific zones & metadata, check that no old DCs remain – remove any old ones

For each zone, right click and select Properties, then click the Name Servers tab

Check & confirm that only current servers are listed – remove any old ones

Repeat the same for all Reverse Lookup Zones. If there are none, it would be good to create any required reverse lookup zones – but do this after the full health check has been completed & errors fixed. A reverse lookup zone will be required for any LAN & VPN subnet that might contain other PCs & servers communicating with AD, so where there are multiple sites and remote access, create a zone here for each site and for the VPN subnets.
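When you do come back to create the missing reverse zones, PowerShell on a DC can do it in one line per subnet – a sketch only, with an example subnet you would swap for your own:

Add-DnsServerPrimaryZone -NetworkId "192.168.10.0/24" -ReplicationScope Forest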

Next we’ll move onto DCDIAG

Using an elevated command prompt from one of the domain controllers, issue the command:

DCDIAG

Check the output for any errors

Repeat this on every domain controller

If the domain controller has a VSS-level backup job or has been rebooted in the last 24 hours, you may receive the following error:

Starting test: DFSREvent
There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.

This is normal and expected: during either of the above-mentioned scenarios, the DFSR service is stopped or paused, a VSS (volume snapshot) is taken, then the service is started again.

Check for other errors and fix up as best as you can – if you’ve had to perform a clean up, then you may need to wait up to 48 hours before some of the errors clear up and no longer show up in DCDIAG results. Google and ChatGPT are your friends here when it comes to errors.

Next, in an elevated command prompt, issue the command:

netdom query fsmo

This will output what AD believes is the primary role-holding domain controller. If the AD is relatively old and has had servers added & removed in its lifetime, then perform this check on every domain controller to ensure congruency.

On every domain controller, edit the NIC DNS settings to ensure that every DC has each domain controller’s IP address as a DNS server – ideally with the PDC/FSMO holder as the primary, then all other DCs, followed by its own IP address, and finally 127.0.0.1 as the last DNS lookup IP address.

Ensure the AD domain name is in the DNS suffix with registration enabled.
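This can also be done with PowerShell on each DC rather than through the NIC GUI – a sketch only, with example addresses (PDC first, then the other DCs, then itself, then loopback) and an assumed interface alias of "Ethernet":

Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.0.0.10,10.0.0.11,10.0.0.12,127.0.0.1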

After ensuring this is completed on every domain controller, wait up to 24 hours and perform a final DCDIAG on each domain controller. What you’re looking to achieve is a clean bill of health in DCDIAG as pictured in the above examples!

Now go back and add in the missing Reverse Lookup Zones in DNS manager – check this has replicated across to each DC.

Note: The example AD environment above is for illustration purposes only, may not be reflective of the environment you’re attempting to follow this guide in, and is a testing lab only.

Setup Hyper-V Replication Between Hosts Not AD Domain Joined

Scope:

In this post, we cover the steps required to allow Hyper-V VM replication to occur between Hyper-V hosts that are not domain joined, not in a cluster, not managed by anything else, and are completely standalone.

 

Preparations:

Hostname Changes:

As Kerberos is only available within an Active Directory domain, we will need to use HTTPS / SSL certificates to authenticate the Hyper-V hosts with one another. We will do this using self-signed SSL certificates that are signed by a test CA.

First thing to do is to ensure that each host has a fully qualified domain name (FQDN), even if it’s a local FQDN, and is resolvable using DNS over IPv4.

Open a command prompt or PowerShell session and change directory to: C:\Windows\system32

Enter in:

control sysdm.cpl

This will launch the Control Panel’s System CPL file – the legacy version before modern versions of Windows swapped this out for the System page in the Settings app.

Under the Computer Name tab, click the Change button, then in the name change window, click the More button.

In this screen, add a local or private domain name & suffix to the system.

This will require a reboot of the Hyper-V host for the name change to take effect.

Repeat this on the other Hyper-V host(s). Ideally, you want the domain.tld to match across all hosts.

Ensure the ethernet adapters have this domain in the DNS search suffix list and that the local DNS server is aware of these FQDNs & has A records for each hosts entry.
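A quick way to set the suffix and confirm resolution from PowerShell – a sketch only, assuming an interface alias of "Ethernet" and the domain.local suffix used in the examples below:

Set-DnsClient -InterfaceAlias "Ethernet" -ConnectionSpecificSuffix "domain.local" -UseSuffixWhenRegistering $true

Resolve-DnsName host2.domain.local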

Generate SSL Certificates:

The next step is to generate the SSL certificates for each host. Pick a host – just the one. Using PowerShell, issue the command:

New-SelfSignedCertificate -DnsName "host1.domain.local" -CertStoreLocation "cert:\LocalMachine\My" -TestRoot
  • The DnsName in quote marks should be reflective of the FQDN hostname of the server.
  • The CertStoreLocation tells the PS SSL generator where to place the generated certificate – we’re installing it in the Personal store on the Computer account.
  • The -TestRoot flag is really key to this, as we’re going to use the built in testing CA to co-sign the certs – which will need to be trusted also.

Repeat the same step on the same host to generate a certificate for each other Hyper-V host that you will be using in the replications, so:

New-SelfSignedCertificate -DnsName "host2.domain.local" -CertStoreLocation "cert:\LocalMachine\My" -TestRoot

Now launch mmc.exe

In the File menu select Add/Remove Snapin

Choose Certificates from the list and add

Choose Computer account when prompted

You will see the certs you created listed.

Double-clicking on these will show they’re not yet trusted. This is because we need to move the CertReq Test Root certificate from the folder:

Intermediate Certificate Authorities > Certificates

To:

Trusted Root Certificate Authorities > Certificates

Now that you’ve done this, export each SSL certificate out from the Personal > Certificates folder as PFX files.

  • Right-click on each one, All Tasks > Export…
  • Yes to exporting the private key…
  • Export with all extended properties
  • Set a password and save the file.

Repeat for each of the certificates you generated.
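The export can also be scripted rather than using the MMC wizard – a sketch only, selecting the certificate by its subject, with a hypothetical output path and export password:

$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=host1.domain.local" }

Export-PfxCertificate -Cert $cert -FilePath C:\Temp\host1.pfx -Password (ConvertTo-SecureString "ExportPass123" -AsPlainText -Force)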

Export the CertReq Test Root CA certificate as well – selecting the DER encoded binary X.509 format.

  • Copy these files to each of the other Hyper-V hosts
  • Import the CA into the same Trusted Root Certification Authorities > Certificates folder on each host
  • Import the SSL certificates into the Personal > Certificates folder on each server.

Remember, this is all done from the Computer account, not service or personal accounts.

Opening the self-signed certificates on each host should now show them as valid SSL certificates.

Registry Changes:

We need to tell Hyper-V not to do any revocation checks on the SSL certificates

In an elevated Command Prompt or PowerShell session, enter in:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication" /v DisableCertRevocationCheck /d 1 /t REG_DWORD /f

 

Enable HTTPS Hyper-V Replication:

Now we can enable Hyper-V replication on each of the Hyper-V hosts.

  • Open Hyper-V Manager, select the host, and click on Hyper-V Settings…
  • Select Replication Configuration
  • Tick to Enable this computer as a Replica server
  • Untick Use Kerberos
  • Tick Use certificate-based Authentication (HTTPS)
  • Click Select Certificate… and, if everything above was done correctly, the Specify certificate box will auto-populate with the details of the certificate of this server.
  • Click to allow replication from any authenticated server

Repeat these steps on the other Hyper-V hosts
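The same replica-server settings can be applied with PowerShell if you prefer – a sketch only, where the thumbprint is that of the host’s own certificate and the storage path is just an example:

Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Certificate -CertificateAuthenticationPort 443 -CertificateThumbprint "<thumbprint>" -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Hyper-V\Replica"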

 

Configure Hyper-V Replication:

Right-click on the VM you need to replicate

Specify the replica server with its FQDN (ensure that you can first ping the replica server with its FQDN eg: ping host1.domain.local)

Next, choose certificate based authentication

Select the certificate and it should prompt to auto fill this data

From here, the rest should be pretty straight forward – options such as choosing how often, how many versions to retain, etc.
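For reference, the wizard steps map onto a couple of PowerShell cmdlets – a sketch only, with a hypothetical VM name, using the replica host and certificate thumbprint from the steps above:

Enable-VMReplication -VMName "VM01" -ReplicaServerName "host2.domain.local" -ReplicaServerPort 443 -AuthenticationType Certificate -CertificateThumbprint "<thumbprint>"

Start-VMInitialReplication -VMName "VM01"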

 

Updates to Xsan in macOS require change in the Xsan connection URL

When Mac OS X El Capitan was released, Xsan wouldn’t always automount, as the Xsan mounting routines ran before certain SAN controllers had initialised, and OS X wouldn’t re-attempt the mount later. Users had to use Terminal to manually mount the Xsan volume using:

sudo xsanctl mount <volumeName>

A script was released by Apple’s engineering as a workaround – read more on that here!

In later macOS builds, this script can’t (easily) be installed, and in Monterey, basically not at all… The suggested workaround is to edit the connection address string on affected workstations, in the file:

/Library/Preferences/Xsan/fsnameservers

The connection string (whether that be a hostname or IP address) needs to be followed by @_cluster0/_addom0 – the 0s need to be confirmed using CVADMIN

When you launch cvadmin (sudo cvadmin) it will tell you about the connected volume and show both the cluster number and the addom number.

Note these down, and issue the quit command to exit CVADMIN.

In the above example, the connection string is bccxserve01.bccmedia.private and the cluster & addom values are both 0.

So you would edit the fsnameservers file to contain:

bccxserve01.bccmedia.private@_cluster0/_addom0

 

Fix the deployment profile:

If you used Profile Manager to configure & deploy the profile to the workstations, then you could edit the connection string in there instead, and deploy that to the workstations after removing the old profile.

 

Delete all TimeMachine local snapshots

Issue:

Internal boot volume is full or nearly full and the source of the data usage is the local snapshots

Cause:

When TimeMachine is configured to automatically back up, but hasn’t been able to back up to the TimeMachine destination disk.

Workaround:

Perform a backup while connected to the TimeMachine disk to allow macOS to perform a safe backup & perform maintenance on the local snapshots.

Once this is done, any local snapshots remaining can be deleted using the Terminal.

Launch Terminal and issue the command:

tmutil listlocalsnapshots /
  • tmutil is the TimeMachine command line utility
  • listlocalsnapshots does what it sounds like
  • the / is the volume where the local snapshots are stored

This will produce a list of all the locally stored snapshots that can be deleted.

You can issue the command:

sudo tmutil deletelocalsnapshots / <snapshot date / time + id string>

This command is preceded with sudo, so it will require elevation using your password.

The string after the forward slash is the numerical part between com.apple.TimeMachine. and .local

If you have a larger list and don’t want to delete them one at a time, good news, you can script this to purge all of the local snapshots using:

for x in $(tmutil listlocalsnapshots /); do sudo tmutil deletelocalsnapshots $(cut -d '.' -f 4 <<<"$x"); done

 

Exchange Server 2019 Unattended Setup Switches

Primary command line switches for unattended mode

The primary (top-level, scenario-defining) command line switches that are available in unattended Setup mode in Exchange 2016 or Exchange 2019 are described in the following table:

/IAcceptExchangeServerLicenseTerms
Note: Beginning with the September 2021 Cumulative Updates, this switch is no longer available in Exchange Server 2016 or Exchange Server 2019.
This switch is required in all unattended setup commands (whenever you run Setup.exe with any additional switches). If you don't use this switch, you'll get an error.

/IAcceptExchangeServerLicenseTerms_DiagnosticDataON
/IAcceptExchangeServerLicenseTerms_DiagnosticDataOFF
Note: These switches are available beginning with the September 2021 Cumulative Updates for Exchange Server 2016 and Exchange Server 2019.
One of these switches is required in all unattended setup commands (whenever you run Setup.exe with any additional switches). If you don't use one of these switches, you'll get an error.
To accept the license terms and send diagnostic data to Microsoft use the switch with suffix DiagnosticDataON.
To accept the license terms but not send diagnostic data to Microsoft use the switch with suffix DiagnosticDataOFF.
/Mode:
(or /m:)
Valid values are:
> Install: Installs Exchange on a new server using the Exchange server roles specified by the /Roles switch. This is the default value if the command doesn't use the /Mode switch.
> Uninstall: Uninstalls Exchange from a working server.
> Upgrade: Installs a Cumulative Update (CU) on an Exchange server.
> RecoverServer: Recovers an Exchange server using the existing Exchange server object in Active Directory after a catastrophic hardware or software failure on the server. For instructions, see Recover Exchange servers.
/Roles:
(or /Role: or /r:)
This switch is required in /Mode:Install commands. Valid values are:
> Mailbox (or mb): Installs the Mailbox server role and the Exchange management tools on the local server. This is the default value. You can't use this value with EdgeTransport.
> EdgeTransport (or et): Installs the Edge Transport server role and the Exchange management tools on the local server. You can't use this value with Mailbox.
> ManagementTools (or mt or t): Installs the Exchange management tools on clients or other Windows servers that aren't running Exchange.
/PrepareAD (or /p)
/PrepareSchema (or /ps)
/PrepareDomain: (or /pd:)
/PrepareAllDomains (or /pad)
Use these switches to extend the Active Directory schema for Exchange, prepare Active Directory for Exchange, and prepare some or all Active Directory domains for Exchange.
/NewProvisionedServer[:] (or /nprs[:])
/RemoveProvisionedServer: (or /rprs:)
The /NewProvisionedServer switch creates the Exchange server object in Active Directory. After that, a member of the Delegated Setup role group can install Exchange on the server.
The /RemoveProvisionedServer switch removes a provisioned Exchange server object from Active Directory before Exchange is installed on the server.
/AddUmLanguagePack:,...
/RemoveUmLanguagePack:,...
Note: These switches aren't available in Exchange 2019. They're only available in Exchange 2016.

Adds or removes Unified Messaging (UM) language packs from existing Exchange 2016 Mailbox servers. UM language packs enable callers and Outlook Voice Access users to interact with the UM system in those languages. You can't add or remove the en-US language pack.
You can install language packs on existing Mailbox servers by using the /AddUmLanguagePack switch or by running the UMLanguagePack..exe file directly. You can only remove installed language packs by using the /RemoveUmLanguagePack switch.

 

Optional command line switches for unattended mode

The optional (supporting) command line switches that are available in unattended Setup mode in Exchange 2016 or Exchange 2019 are described in the following table:

For each switch below, the valid values, default value, and the setup commands it is available with are listed:

  • /ActiveDirectorySplitPermissions: – Valid values: True or False. Default: False. Available with: /Mode:Install /Roles:Mailbox or /PrepareAD commands for the first Exchange server in the organization.
  • /AdamLdapPort: – Valid values: A valid TCP port number. Default: 50389. Available with: /Mode:Install /Roles:EdgeTransport commands.
  • /AdamSslPort: – Valid values: A valid TCP port number. Default: 50636. Available with: /Mode:Install /Roles:EdgeTransport commands.
  • /AnswerFile:"" (or af:"") – Valid values: The name and location of a text file (for example, "D:\Server data\answer.txt"). Default: n/a. Available with: /Mode:Install /Roles:Mailbox or /Mode:Install /Roles:EdgeTransport commands.
  • /CustomerFeedbackEnabled: – Valid values: True or False. Default: False. Available with: /Mode:Install and /PrepareAD commands.
  • /DbFilePath:"\.edb" – Valid values: A folder path and an .edb filename (for example, "D:\Exchange Database Files\DB01\db01.edb"). Default: %ExchangeInstallPath%Mailbox\\.edb, where the database name is Mailbox Database <10DigitNumber> (matching the default name of the database, or the value specified with the /MdbName switch, without the .edb file name extension) and %ExchangeInstallPath% is %ProgramFiles%\Microsoft\Exchange Server\V15\ or the location specified with the /TargetDir switch. Available with: /Mode:Install /Roles:Mailbox commands.
  • /DisableAMFiltering – Valid values: n/a. Default: n/a. Available with: /Mode:Install /Roles:Mailbox commands.
  • /DomainController: (or /dc:) – Valid values: The server name (for example, DC01) or FQDN (for example, dc01.contoso.com) of the domain controller. Default: A randomly-selected domain controller in the same Active Directory site as the target server where you're running Setup. Available with: All /Mode commands (except when you're installing an Edge Transport server) or /PrepareAD, /PrepareSchema, /PrepareDomain and /PrepareAllDomains commands.
  • /DoNotStartTransport – Valid values: n/a. Default: n/a. Available with: /Mode:Install /Roles:Mailbox, /Mode:Install /Roles:EdgeTransport, and /Mode:RecoverServer commands.
  • /EnableErrorReporting – Valid values: n/a. Default: Disabled. Available with: /Mode:Install, /Mode:Upgrade, and /Mode:RecoverServer commands.
  • /InstallWindowsComponents – Valid values: n/a. Default: n/a. Available with: /Mode:Install /Roles:Mailbox commands.
  • /LogFolderPath:"" – Valid values: A folder path (for example, "E:\Exchange Database Logs"). Default: %ExchangeInstallPath%Mailbox\, under the database name – Mailbox Database <10DigitNumber>, or the value specified with the /MdbName switch (without the .edb file name extension) – where %ExchangeInstallPath% is %ProgramFiles%\Microsoft\Exchange Server\V15\ or the location specified with the /TargetDir switch. Available with: /Mode:Install /Roles:Mailbox commands.
  • /MdbName:"" – Valid values: A database filename without the .edb extension (for example, "db01"). Default: Mailbox Database <10DigitNumber> (for example, Mailbox Database 0139595516). Available with: /Mode:Install /Roles:Mailbox commands.
  • /OrganizationName:"" (or /on:"") – Valid values: A text string (for example, "Contoso Corporation"). Default: Blank in command line setup; First Organization in the Exchange Setup wizard. Available with: /Roles:Mailbox or /PrepareAD commands for the first Exchange server in the organization.
  • /SourceDir:"" (or /s:"") – Valid values: A folder path (for example, "Z:\Exchange"). Default: The ServerRoles\UnifiedMessaging folder on the Exchange installation media. Available with: /AddUmLanguagePack commands in Exchange 2016 (not available in Exchange 2019).
  • /TargetDir:"" (or /t:"") – Valid values: A folder path (for example, "D:\Program Files\Microsoft\Exchange"). Default: %ProgramFiles%\Microsoft\Exchange Server\V15\. Available with: /Mode:Install and /Mode:RecoverServer commands.
  • /TenantOrganizationConfig:"" – Valid values: A folder path (for example, "C:\Data"). Default: n/a. Available with: /Mode:Install or /PrepareAD commands.
  • /UpdatesDir:"" (or /u:"") – Valid values: A folder path (for example, "D:\Downloads\Exchange Updates"). Default: The Updates folder at the root of the Exchange installation media. Available with: /Mode:Install, /Mode:Upgrade, /Mode:RecoverServer, and /AddUmLanguagePack commands.
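Putting a few of the above together, a typical unattended install of a Mailbox server on post-September-2021 CU media might look like the following – purely illustrative, run elevated from the root of the installation media, with the organisation name and target directory taken from the example values in the tables above:

Setup.exe /IAcceptExchangeServerLicenseTerms_DiagnosticDataOFF /Mode:Install /Roles:Mailbox /OrganizationName:"Contoso Corporation" /TargetDir:"D:\Program Files\Microsoft\Exchange"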

 

Information source: https://learn.microsoft.com/en-us/exchange/plan-and-deploy/deploy-new-installations/unattended-installs?view=exchserver-2019

Exchange On-Premise – Handy Commands

Some handy commands for Microsoft Exchange Server on-premise – a growing list…

List all mailboxes across all databases:

Get-Mailbox

This doesn’t include system & arbitration mailboxes.

 

List all arbitration mailboxes across all databases:

Get-Mailbox -Arbitration

This doesn’t include user or resource mailboxes.

 

List all mailbox databases:

Get-MailboxDatabase

 

Move a single mailbox to another mailbox database:

New-MoveRequest -TargetDatabase <databaseName> -Identity <mailboxID>

Notes:

  • The mailbox identity can be in quote marks
  • The mailbox identity can be one of the following:
    • GUID
    • Distinguished name (DN)
    • Domain\Account
    • User principal name (UPN)
    • LegacyExchangeDN
    • SMTP address
    • Alias
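For example, using the SMTP address format from the list above and a hypothetical target database name:

New-MoveRequest -TargetDatabase "MBXDB02" -Identity "john.doe@example.com"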

 

Get the current status of all mailbox move requests:

Get-MoveRequest -ResultSize Unlimited | Get-MoveRequestStatistics

 

Get the current status of a single mailbox move request:

Get-MoveRequest -Identity "john.doe@example.com" | Get-MoveRequestStatistics

 

Remove completed mailbox move requests:

Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest -Confirm:$false

 

Resume all suspended mailbox move requests:

Get-MoveRequest -MoveStatus Suspended | Resume-MoveRequest

 

 

DCDIAG Result – Warning: Attribute userAccountControl of DC is: 0x82020

Issue:

When performing a DCDIAG, you get a warning with output:

Warning: Attribute userAccountControl of DC is: 0x82020 = ( UF_PASSWD_NOTREQD | UF_SERVER_TRUST_ACCOUNT | UF_TRUSTED_FOR_DELEGATION )
Typical setting for a DC is 0x82000 = ( UF_SERVER_TRUST_ACCOUNT | UF_TRUSTED_FOR_DELEGATION )
This may be affecting replication?

Cause:

The computer account was pre-created in Active Directory Users and Computers (ADUC) before being added to the domain and promoted as a domain controller.

Fix 1:

  • Launch ADSI Edit and connect to the Default Naming Context
  • Drill down into the Domain Controllers OU
  • Double-click the domain controller and in the Attribute Editor tab, scroll down to userAccountControl
  • You will likely see the value set to something like:
0x82020 = (PASSWD_NOTREQD...
  • Highlight the value and click Edit
  • Change the value in the edit box from 532512 to 532480
  • Click OK and OK again to save the changes

Now running DCDIAG should show this error to have been cleared.

Fix 2:

You can use an elevated PowerShell session to automate the process of the above steps in Fix 1 to all domain controllers in the Domain Controllers OU.

Change the -searchbase criteria in the below and execute:

get-adobject -filter "objectcategory -eq 'computer'" -searchbase "ou=Domain Controllers,dc=domain,dc=tld" -searchscope subtree -properties distinguishedname,useraccountcontrol | select distinguishedname, name, useraccountcontrol | where {$_.useraccountcontrol -ne 532480} | %{set-adobject -identity $_.distinguishedname -replace @{useraccountcontrol=532480} -whatif}

Note the -whatif switch at the end: as written, the command only reports what it would change, which at least lists the servers with incorrect values – handy where there are numerous domain controllers present. Remove -whatif to actually apply the fix.

Extra Information:

Some typical values:

  • Standard user: 0x200 (512)
  • Domain controller: 0x82000 (532480)
  • Workstation / member server (non-DC): 0x1000 (4096)