Stuck at emergency mode after reboot... related to RAID?
I have a Dell PowerEdge T110 running SUSE Linux Enterprise Server 11 SP1, kernel 2.6.32.12-0.7; I don't believe the kernel was ever upgraded. The OS resides on a RAID array (2 disks in RAID 1) holding the /dev/system volume group, which I guess maps onto /dev/sda1,2,3,5,6; there are another 2 non-OS disks in a second RAID 1 array, /dev/sdb. All 4 disks sit behind the same Dell SAS 6/iR controller card.
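For reference, this is roughly how I checked the layout (a sketch from memory; lsblk may not exist on a SLES 11 box, but /proc/partitions and the LVM tools should be there):

```shell
# Partition list straight from the kernel (works on any Linux)
cat /proc/partitions

# LVM view: physical volumes, volume groups ("system" here),
# and logical volumes such as root_lv and home_lv
pvs
vgs
lvs
```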
I've rebooted my server in the past without problems...
but this time, after going into Dell's SAS 6/iR LSI config utility (Ctrl-C in the early boot stages) just to check what was in the RAID array there -- I don't believe I made any changes -- upon reboot I cannot get past the
"Started /boot" message, just after some fscks have been run. See attached photo (sorry, I can't figure out how to get the text off the server: I can't mount a USB stick because I can't find what device it shows up as).
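In case someone can spot what I'm doing wrong with the stick, this is roughly what I tried (a sketch; /dev/sdc1 is a guess on my part -- use whatever device dmesg actually reports):

```shell
# Watch the kernel log right after plugging the stick in;
# the new device name (e.g. sdc) shows up in the last lines
dmesg | tail -20

# Mount the stick's first partition somewhere temporary
# (/dev/sdc1 is an assumption -- substitute what dmesg showed)
mkdir -p /mnt/usb
mount /dev/sdc1 /mnt/usb

# Copy the journal off, then unmount cleanly
systemd-journalctl --all > /mnt/usb/journal.txt
umount /mnt/usb
```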
Pressing Ctrl-D, or running systemctl [pretty much anything], results in the error message: "Failed to get D-Bus connection: No connection to service manager".
In the output of systemd-journalctl --all
I see "Job dev-system-home_lv.device/start timed out",
followed by multiple lines ending with "failed with result 'dependency'".
Indeed, the /home directory is empty (not mounted).
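Given the timed-out home_lv device job, my working theory is that the logical volume never got activated. I'm planning to try something like this from the emergency shell (a sketch only, untested on my box; "system" is my volume group name):

```shell
# Activate all logical volumes in the "system" volume group
vgchange -ay system

# Verify the device node for home_lv now exists
ls -l /dev/system/home_lv

# Check the filesystem read-only first, then mount per fstab
fsck -n /dev/system/home_lv
mount /home
```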
/etc/mtab has the same lines repeated multiple times, which seems odd.
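To pin down exactly which entries are duplicated, and to compare against the kernel's own mount table, I used:

```shell
# Print every line that occurs more than once in /etc/mtab,
# prefixed with how many times it appears
sort /etc/mtab | uniq -cd

# The kernel's authoritative mount list, for comparison
cat /proc/mounts
```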
I ran fsck on /dev/mapper/system*; all came back clean (I couldn't run it on root_lv since it's mounted).
I ran Dell's hardware diagnostics (SMART tests etc.) on the hard drives in the RAID arrays and on the controller; everything passed. On the other hand, the LSI SAS RAID utility says "Yes" under Predicted Failure for one of the non-OS disks, so I removed it from its array just in case -- still the same behavior at bootup.
See attached photo of fstab. I never changed these initial entries.
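One workaround I'm considering, if /home really is the only blocker: adding nofail to its fstab entry so a missing home_lv no longer stops boot. The line below is hypothetical -- the device path, the ext3 type, and whether this mount version supports nofail are all assumptions on my part, since my real fstab is only in the photo:

```
# hypothetical /etc/fstab line for /home with nofail added;
# adjust the device path and fs type to match the real entry
/dev/system/home_lv  /home  ext3  defaults,nofail  0  2
```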
Any ideas? Not sure what to do next. I'm waiting on a call from Alttech, who took over Novell's tech support.