New Xsan Installs?
Haven't seen many new Xsan installs lately. There are so many options out there now; just curious what other people are doing these days.
Xsan volume disconnects
So, we migrated from Snow Leopard on a 2009 Xserve to a Mac mini running Mavericks Server and Xsan 3.1. We're using a SANLink2 for our Fibre Channel connection and also for the metadata network.
All went smoothly, except that we discovered that moving largish files from the single volume (about 30 TB) causes that volume to unmount.
The logs look like everything is fine until the following sequence:
[20140910 15:07:47] 0x7fff75b87310 (debug) find_fsm fsm Pool ipaddr 192.168.0.1 port 49177 TestLink failed: getsockopt(SO_ERROR) returned error 61 [errno 61]: Connection refused
[20140910 15:07:49] 0x7fff75b87310 (debug) find_fsm fsm Pool ipaddr 192.168.0.1 port 49177 TestLink failed: getsockopt(SO_ERROR) returned error 61 [errno 61]: Connection refused
[20140910 15:07:50] 0x7fff75b87310 (debug) find_fsm fsm Pool ipaddr 192.168.0.1 port 49177 TestLink failed: getsockopt(SO_ERROR) returned error 61 [errno 61]: Connection refused
[20140910 15:07:50] 0x7fff75b87310 (debug) PortMapper: FSS 'Pool' disconnected.
[20140910 15:07:50] 0x7fff75b87310 (debug) PortMapper: kicking diskscan_thread 4446003200.
[20140910 15:07:50] 0x7fff75b87310 (debug) FSS: State Change 'Pool' REGISTERED: (no substate) -> DYING: (no substate) , next event in 60s (/SourceCache/XsanFS/XsanFS-508.4/snfs/fsmpm/fsmpm.c#5597)
[20140910 15:07:50] 0x10900a000 INFO Starting Disk rescan
[20140910 15:07:50] 0x10900a000 (debug) Disk rescan delay completed
[20140910 15:07:50] 0x10900a000 INFO Disk rescan found 0 disks
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: WARNING! Unable to find raw device /dev/rdisk4 from current disk scan in newly created list.
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: label transition for disk /dev/rdisk4
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: WARNING! Unable to find raw device /dev/rdisk7 from current disk scan in newly created list.
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: label transition for disk /dev/rdisk7
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: WARNING! Unable to find raw device /dev/rdisk8 from current disk scan in newly created list.
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: label transition for disk /dev/rdisk8
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: WARNING! Unable to find raw device /dev/rdisk5 from current disk scan in newly created list.
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: label transition for disk /dev/rdisk5
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: WARNING! Unable to find raw device /dev/rdisk6 from current disk scan in newly created list.
[20140910 15:07:50] 0x10900a000 NOTICE compare_disks: label transition for disk /dev/rdisk6
[20140910 15:07:51] 0x10940d000 (debug) FSS: State Change 'Pool' DYING: (no substate) -> RELAUNCH: (no substate) , next event in 1s (/SourceCache/XsanFS/XsanFS-508.4/snfs/fsmpm/fsmpm.c#6555)
[20140910 15:07:51] 0x7fff75b87310 (debug) NSS: Active FSS 'Pool[0]' at 10.0.6.15:49177 (pid 459) - dropped.
[20140910 15:07:51] 0x10940d000 (debug) FSS: State Change 'Pool' RELAUNCH: (no substate) -> LAUNCHED: (no substate) , next event in 60s (/SourceCache/XsanFS/XsanFS-508.4/snfs/fsmpm/fsmpm.c#2452)
[20140910 15:07:51] 0x10940d000 (debug) PortMapper: FSM 'Pool' queued for restart on host core.cyclecontact.com (pri=0).
[20140910 15:07:51] 0x10940d000 NOTICE PortMapper: Starting FSS service 'Pool[0]' on core.cyclecontact.com.
[20140910 15:07:51] 0x10940d000 NOTICE PortMapper: Started FSS service 'Pool' pid 2483.
[20140910 15:07:53] 0x10940d000 INFO Portmapper: FSS 'Pool' will not be restarted until its disk(s) are available
[20140910 15:07:53] 0x10940d000 (debug) FSS: State Change 'Pool' LAUNCHED: (no substate) -> BLOCKED: (required disk(s) missing) , next event in 60s (/SourceCache/XsanFS/XsanFS-508.4/snfs/fsmpm/fsmpm.c#6516)
[20140910 15:08:21] 0x7fff75b87310 (debug) Entering slow heartbeat mode.
At the moment, there is only one computer connected to the SAN, the Mac Mini MDC. The metadata port is connected to a switch and is pingable after this event.
I've contacted Apple support, but it seems they barely know what an Xsan is these days.
One thing I've noticed that seems odd: the Xsan Admin Overview pane no longer displays the metadata network info like it used to; perhaps this is a symptom. Additionally, I cannot add any clients to this SAN from Xsan Admin on the MDC. It will add the client, but the volume mount fails.
Any help at all would be appreciated, as I am totally stumped.
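In case it's useful to anyone debugging the same thing: when the volume drops like this, it's worth checking whether the LUNs are still visible to the cvfs layer at all or whether the whole device set disappeared (the "Disk rescan found 0 disks" line above suggests the latter). A rough sketch, using the volume name Pool from the logs:
# List the LUNs and Xsan labels the node can currently see
sudo cvlabel -l
# List the FSMs the local fsmpm knows about
sudo cvadmin -e "select"
# Stripe group and disk status for the volume
sudo cvadmin -F Pool -e "show"
If cvlabel comes back empty at the moment of the drop, that would point at the FC path (SANLink2) rather than the FSM itself.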
XSan - Corrupted inode 0xXXXXXXXXXXXXX (Bad Marker(s)).
Help please!
- Error*: buildinodes: Corrupted inode 0x22000000635765 (Bad Marker(s)).
cvadmin can't see the SAN volume, but I can start it with sudo cvadmin start SAN.
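In case it helps others who hit this error: corrupted-inode / Bad Marker reports are normally investigated with cvfsck while the volume is stopped. A minimal sketch, assuming the volume really is named SAN as above; run the read-only pass first and only repair after reviewing what it reports (and ideally with a metadata backup in hand):
# Stop the FSM so the volume is quiesced
sudo cvadmin -e "stop SAN"
# Read-only check: report problems, change nothing
sudo cvfsck -nv SAN
# Only after reviewing the report: attempt repairs
sudo cvfsck -wv SAN
# Bring the volume back up
sudo cvadmin -e "start SAN"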
20TB Unaccounted for on Xsan
We are running a 216TB Xsan volume. I recently calculated the size of all directories on the Xsan volume, which came out to 175TB. Finder reports that 20TB are available on the Xsan volume. Where are the remaining 21TB? I've emptied the trashes on all clients (we have mac and windows clients [I've even deleted the RECYCLE.BIN directory windows creates]), but there still seems to be 20TB of space unaccounted for.
I know our SAN is pretty heavily fragmented. Could this account for the missing space? 20TB seems like a lot.
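Two things worth ruling out before blaming fragmentation: units and metadata overhead. Finder reports decimal terabytes, while du reports binary units, so if the 175 figure came from du it is really about 192 TB in Finder's units, which would account for most of the gap on its own; on top of that, the volume's metadata, journal, and allocation overhead never show up in a per-directory total. A quick comparison, assuming the volume mounts at /Volumes/SanVol (placeholder name):
# What the file system itself reports as used/free
df -H /Volumes/SanVol
# What the files add up to (slow on 200+ TB)
sudo du -sh /Volumes/SanVol
# Per-stripe-group allocation as the FSM sees it
sudo cvadmin -F SanVol -e "show long"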
SSDs for metadata
I talked with someone who added some Intel S3700s to a Promise x30. They used them for metadata on a volume with an extremely diverse set of data (big and small), but primarily so they could get better performance for the millions of tiny files they were serving. The person I spoke with said they actually worked with Promise to make sure drive support was included in the latest firmware. Our ActiveRAIDs are out of warranty now, but I'd still like to get a couple more years out of them.
Has anyone tried something similar with Active Storage hardware?
Pete
Xsan 10.9.4 does not add clients 10.8.5
Good evening, brave Xsan deployers.
Last year I built a 120 TB Xsan with three volumes, two 10.9.1 MDC servers, and six 10.8.5 clients, working flawlessly (except for the random failover issue).
Two months ago, I upgraded both MDC servers to 10.9.4 to get rid of the famous bug (finally!), and added a 2013 Mac Pro running 10.9.4. No problems so far.
Today I reinstalled one of the 10.8.5 clients because of some issues with the Adobe suite, after properly removing it from the Xsan. But now, after putting everything back the way it was (name, IP, DNS, and enabling the Xsan client), the computer does not appear in the Xsan Admin list when I try to add it, no matter what I do.
DNS responds perfectly on both the public and metadata networks.
The LUNs show up fine in Disk Utility.
I've already deleted and recreated the Xsan directory in /Library/Preferences on the client. The uuid file is generated, but still nothing.
The other five 10.8.5 clients are working, but I don't know what will happen if I remove them from the Xsan. I can't yet test whether the problem repeats on another 10.8.5 Mac, because the system is in production.
Eventually I will try upgrading the client to 10.9.4 and see whether it can then be added to the Xsan.
Is this a bug any of you have experienced?
Maybe an upgrade to 10.9.5 will solve it?
A curious fact:
When I open Xsan Admin to add the computer, it asks me to install Xsan 2 on the computer I want to add, and to prepare the serial number... what?
Thanks a lot for any help
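In case it helps narrow things down, a few low-level checks on the stubborn client (a sketch; the address below is a placeholder). The client-side Xsan configuration lives in /Library/Preferences/Xsan, and cvadmin will show whether the client can actually reach the MDCs it is configured to talk to:
# Which MDC metadata addresses the client is configured to use
cat /Library/Preferences/Xsan/fsnameservers
# Can the client reach them over the metadata network?
ping -c 3 192.168.100.10
# Does the client-side fsmpm see any volumes at all?
sudo cvadmin -e "select"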
MDC locking up. XSAN 3.1 OSX 10.9.4, MDC Mac Mini
Xsan configuration went well. The system appears to work well for about a week, then MDC1 will usually freeze up. It appears to fail over to MDC2, but Xsan Admin and cvadmin still list MDC1 as hosting. This is usually discovered by clients losing access to the volume. It requires a hard shutdown of MDC1, sometimes MDC2, and a reboot of the storage to get back. The log files are usually lost in the hard boot. Once restarted, everything works OK again for about a week, then it freezes again.
Any thoughts?
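One thing that can help separate "the FSM really didn't fail over" from "Xsan Admin is showing stale information" is to ask cvadmin directly on each MDC while the problem is happening, and to exercise a controlled failover during a maintenance window. A rough sketch (the volume name is a placeholder):
# On each MDC: list the FSMs this node can see and where they are running
sudo cvadmin -e "select"
# During a maintenance window, force a controlled failover to the standby MDC
sudo cvadmin -e "fail SanVol"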
Rorke Data Storage Galaxy HDX 4
Hi, guys!
Does anyone know whether a Rorke Data Storage Galaxy HDX4 works on Mavericks and the latest version of Xsan? I've searched for info about this storage system, but the most recent info I found was from 2011... LOL
Thanks!
Xsan and OSX 10.9.5
Just upgraded some lab MDCs to 10.9.5. No problems to report; please chime in below with your experiences and recommendations.
Xsan 4
OS X 10.10 Yosemite includes Xsan 4.
There are some changes to how to set up Xsan and compatibility between versions of OS X and Xsan.
If you upgrade your Mac to 10.10, it becomes incompatible with Xsan 3. Officially, you can NOT have Xsan 3 (10.9) clients on a 10.10 Xsan, and, vice versa, 10.10 (Xsan 4) clients will not work on an Xsan 3-based SAN.
I've done some basic testing with Xsan 4, and it does away with the Xsan Admin app; all setup is done in Server.app. It also depends on Open Directory (and DNS, of course). If there is no OD set up, it will create one (same with DNS). Therefore, join your Xsan controller to your OD or risk creating a new OD master.
To configure the clients you export a config profile and install it on the clients, or enrol the Xsan controller in MDM (Profile Manager, for example) and push out the config to the clients.
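If you go the manual route, installing an exported profile from the command line looks something like this (a sketch; the filename is a placeholder):
# Install the Xsan configuration profile exported from Server.app on the MDC
sudo profiles -I -F /path/to/XsanClient.mobileconfig
# Confirm it is listed as installed
sudo profiles -P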
I have not tested Xsan 4 with StorNext but I expect there is compatibility, as usual.
More info when I test some more and upgrade clients out in the real world.
Notes on Xsan 4
As others have noted, Xsan 4's administration model is notably different from the model in all previous versions of Xsan. Here are some notes I have on the changes.
We have coined a new term, "Activation." "Upgrade" and "Migration" are two ways of taking the base operating system and configuration from a previous OS to Yosemite. "Promotion" is the act of updating Server.app to Server 4.0. We coined a corresponding term, "Activation," to describe taking an Xsan 3 or 2 configuration and moving it into Xsan 4. SAN Activation happens after Upgrade or Migration and also after Server Promotion. Activation happens first on the previous Xsan Primary MDC, and then happens on the other MDCs.
Xsan 4 no longer directly manages clients. We ran into too many issues where SAN operations would fail because one client was offline. To address this, Xsan 4 uses LDAP to store the SAN configuration. Now, instead of having Xsan Admin update (push) the configuration on all machines, we have Server.app store a changed configuration in LDAP and we inform all the clients that they need to re-parse (pull) the configuration.
We now run Open Directory on all the MDCs, and they act as an OD cluster to replicate this information. Clients do not need to bind to these servers. If you previously had Xsan managing Users & Groups, Xsan will use that OD for its storage. If you have an external source for your Users and Groups, say AD, just use that for Users & Groups and don't worry about this OD cluster.
There are two direct consequences of this change.
First, Xsan 4 does not support Xsan 3 or earlier systems in the SAN. We do not lock them out of the FSMs, and you can perform a zero-downtime upgrade. The rub is that the older clients do not understand the messages we now send out when the configuration changes, nor do they understand the new message instructing clients to unmount a volume (as it is about to be stopped). In this respect they are akin to Linux or Windows clients in the SAN. If you want to stop a volume, these clients will not automatically unmount it. If you destroy a volume, these clients will not correctly forget it.
Second, we use Transport Layer Security (TLS) in LDAP when querying the configuration. As such, we need certificates to anchor the TLS trust. Certificates need DNS host names. So Xsan 4 requires DNS to be configured on the metadata network. We expect many sites already had it, but we now require it.
Unfortunately the error message you get today is unclear about this issue. So be forewarned.
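One way to sanity-check the DNS requirement before activating (a sketch; the hostname and address below are placeholders) is to make sure forward and reverse lookups agree for every MDC's metadata interface:
# Forward lookup of the MDC's metadata hostname
host mdc1.san.example.com
# Reverse lookup of the metadata IP should return the same name
host 192.168.100.10
# On a machine with Server.app installed, this checks that hostname and DNS agree
sudo changeip -checkhostname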
A third issue is that all of the MDCs need to be in the same OD cloud. If you had a SAN before where Xsan was NOT managing Users & Groups but you had OD running on some of the MDCs, you need to ensure the Xsan 3 primary controller is in that OD cluster as the primary (ODM) before activating the SAN. Otherwise Xsan will see that OD is not running on the former primary, create it, and then you won't be able to activate MDCs that are in a different OD cloud; we require that all SAN MDCs have the same OD master.
Xsan with NetStor NA381TB
Good Evening.
I would like to understand whether it is possible to build a low-cost Xsan system using a hardware chassis called the NetStor NA381TB.
You can see a presentation of the unit here:
https://www.youtube.com/watch?v=JuqfVN_bliI
As you can see, with a Mac mini and a Thunderbolt connection, I can have up to 96 TB of RAID storage using an Areca card.
My question is the following:
Until now I've always used Fibre Channel switches (QLogic or Brocade).
If I install one or two ATTO FC cards (dual or quad port), can I build an Xsan system with 4 or 8 clients by connecting the optical cables directly from the chassis's Fibre Channel cards to the clients?
Thank you very much for your time
Mac clients flooding Quantum MDCs with requests on startup.
We have an assortment of Mac clients that connect to Quantum MDCs (M440/5) for the volumes. When these Macs (running Mavericks) start up, they flood the MDCs with requests for their LUNs. Something in the startup of the OS isn't showing the fiber LUNs at that time, so the client asks the MDCs again, and again, and so on. When the LUNs finally do show up on the fiber ports, Xsan connects to the volumes just fine.
Is there a way to delay Xsan startup until the fiber ports have been initialized properly?
We are using ATTO Celerity FC8 fiber cards, zoned via Brocade DCX switches.
Thanks
Xsan 4 will not mount on client
I recently upgraded my MDC and my client computer to Yosemite. I went through the migration process as best I could and created a configuration profile on my MDC for my client computer. I installed the profile successfully, but the volume will not mount on the client computer. On the client computer, in Profiles, the Xsan configuration profile has "Unsigned" in red underneath it; is that what is causing the problem?
Also a few notes: there are two other client computers that haven't been upgraded and are running Mavericks, and the volume does mount on those computers. The volume is also mounted on the MDC, and if I go to Disk Utility on the client computer I do see the volume, just not mounted.
Any help would be great! Thank you.
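A couple of things that can help narrow this down on a Yosemite client (a sketch; the volume name is a placeholder). The red "Unsigned" label just means the profile isn't signed, which by itself doesn't normally prevent it from working, so it's worth confirming the profile is really applied and then trying the mount by hand to see the actual error:
# List installed configuration profiles
sudo profiles -P
# Try mounting the Xsan volume manually
sudo xsanctl mount SanVol
# Check what the client-side fsmpm can see
sudo cvadmin -e "select"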
XSAN and 4K video editing
I currently have an Apple Xsan 3 deployment using a Mavericks Mac mini MDC with an ATTO ThunderLink to FC, a Promise 610f RAID with a JBOD, and a QLogic 5602 4Gb FC switch.
I was originally looking to simply upgrade my storage, as our editors are working in ProRes 422 in Final Cut Pro X without any issues. As our Promise RAIDs were going out of warranty, I felt comfortable replacing the RAID and building a new volume on new RAID hardware. But then... I was told that in the middle of 2015 we will need to be able to edit 4K. I did some Googling and see there is a wide variety of 4K formats. I am currently trying to nail down the format they will be using. I am also trying to nail down how many streams I will need to feed to our edit stations.
Has anyone worked with 4K on an Apple Xsan? I am a bit confused on the requirements... from what I have read 4Gb FC is not going to cut it... 8Gb FC can handle some, but may have issues with some formats... 16Gb FC looks capable but the RAIDs I have been looking at use 8Gb FC.
Any input would be appreciated. I am in the Washington, Baltimore area.
Thanks,
Ray
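As a rough back-of-the-envelope comparison (approximate figures, not from the original post): a 4Gb FC link delivers on the order of 400 MB/s of payload, 8Gb about 800 MB/s, and 16Gb about 1600 MB/s. A ProRes 422 HQ stream at UHD 30p runs somewhere around 90 MB/s, and roughly double that at 60p, so a single 8Gb link can in principle carry several such streams per client, while uncompressed or lightly compressed 4K raw formats can need several hundred MB/s each and change the math entirely. Nailing down the codec and stream count first really is the deciding factor.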
Active Storage and UPS intelligence
Hi,
Has anyone set up a system where Active Storage listens to an APC UPS? I want the Active Storage to shut down after a while when it gets too hot in the server room (we had some air conditioning issues in the past).
I can't seem to get it working with the serial cable (no communication whatsoever), so I tried it with a shell script through PowerChute.
What I did was set up the Macs running on the APC with PowerChute; I can configure them to shut down at a certain time or run a shell script. The last server to go down will run a shell script that sends "activeadmin --device his_ipaddress --password his_password shutdown".
Now, normally activeadmin will ask if I am sure and I have to type in "yes", so I use expect. Those of you who use shell scripts will know this.
Unfortunately, this doesn't work. I think it's because activeadmin can't really handle this kind of automation. But hey, this is not a scripting forum...
My question is whether any of you have figured out a way to control the Active Storage?
thanks, Lucas
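For what it's worth, the usual way to drive an interactive confirmation like that is to let expect own the whole session; whether activeadmin cooperates is another matter, but a sketch looks like this (the IP, password, and the "sure" prompt pattern are all placeholders/guesses, since I don't know the exact text activeadmin prints):
#!/bin/sh
# Wrap the interactive activeadmin shutdown so PowerChute can run it unattended.
/usr/bin/expect <<'EOF'
set timeout 30
# spawn gives activeadmin a pseudo-TTY, which plain shell redirection does not
spawn activeadmin --device 192.168.1.50 --password secret shutdown
expect {
    -re "sure|yes" { send "yes\r" }
    timeout        { exit 1 }
}
expect eof
EOF
If it still hangs, running expect with -d shows exactly what prompt (if any) activeadmin is printing, which usually settles whether the tool can be scripted at all.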
OS X Lion: Supported digital camera RAW formats (Apple KB)

Digital camera RAW formats retain more image information than JPEGs and can produce better results when used with imaging applications such as Aperture and iPhoto.
Read more: http://support.apple.com/kb/HT4757
iMac: How to remove or install memory (Apple KB)

Learn how to remove or install memory in your iMac computer.
Read more: http://support.apple.com/kb/HT201191