Question: how to remove old paths from mapped device



elizada
03-Feb-2013, 14:18
Hi,

I have SLES 11 SP2 installed as Boot from SAN.
I have tried, without success, to find documentation on the web about how to remove dead paths (not all of them, just the ones I choose) from a device.
My current status is:
mpatha (2001738002750002a) dm-0 IBM,2810XIV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:7:1 sdi 8:128 failed faulty running
|- 0:0:10:1 sdf 8:80 failed faulty running
|- 1:0:8:1 sdj 8:144 failed faulty running
|- 0:0:5:1 sda 8:0 failed faulty running
|- 0:0:8:1 sdd 8:48 failed faulty running
|- 0:0:9:1 sde 8:64 failed faulty running
|- 1:0:9:1 sdk 8:160 failed faulty running
|- 1:0:10:1 sdl 8:176 failed faulty running
|- 0:0:6:1 sdb 8:16 active ready running
|- 0:0:7:1 sdc 8:32 active ready running
|- 1:0:4:1 sdg 8:96 active ready running
`- 1:0:5:1 sdh 8:112 active ready running

I would like to remove all the faulty paths and keep the active paths (without causing a timeout/shutdown on the host).
Thanks in advance

Magic31
03-Feb-2013, 16:48
Hi,

You can try running "multipath -r" to force the device mappings to be reloaded (do watch out if you have changed multipath.conf alias names for devices, as those changes will also be applied).

It should also be OK to restart (not stop/start) the multipathd service, as the service interruption is minimal. That normally fixes invalid path issues without hampering running services (again, watch out if multipath.conf has been modified to use different device names).

Running these commands during off-hours is recommended.
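If reloading the whole map is not enough, individual failed paths can also be removed one at a time with multipathd's interactive interface plus the SCSI sysfs delete hook. A sketch, assuming the failed device names from the listing above (sdi, sdf, ...) — it only builds and prints the commands so you can review them before running anything as root:

```shell
# Dry-run sketch: build the per-path removal commands for the failed paths
# shown in 'multipath -ll' above. Review the output, re-check each device
# name, then run the commands by hand as root.
failed_paths="sdi sdf sdj sda sdd sde sdk sdl"
cmds=""
for dev in $failed_paths; do
    # multipathd drops the path from its map; the sysfs write then deletes
    # the underlying SCSI device so the kernel forgets it until a rescan.
    cmds="$cmds
multipathd -k\"del path $dev\"
echo 1 > /sys/block/$dev/device/delete"
done
printf '%s\n' "$cmds"
```

Afterwards, `multipath -ll mpatha` should list only the active paths; a later SAN rescan will rediscover any path that comes back.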

Cheers,
Willem

elizada
06-Feb-2013, 10:35
Thank you for your answer, but it didn't work.
Do you have any other suggestions?

linux-bfs:~ # multipath -f mpatha
Feb 06 11:32:31 | mpatha_part2: map in use
linux-bfs:~ # multipath -F mpatha
Feb 06 11:34:09 | mpatha_part2: map in use
Feb 06 11:34:09 | failed to remove multipath map mpatha

Magic31
07-Feb-2013, 00:00
To be clear, which commands did you run before trying the multipath flush options?

My first thought is that the multipath daemon still has, or thinks it has, disk requests held up in its queues.

Could you describe a bit more how you came to be in this state of 8 failed paths and 4 active ones? How is the storage connected, and what has changed?
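To see what is actually holding the map (the "map in use" error means something still has mpatha_part2 open — on a boot-from-SAN host that is typically the root filesystem itself), a sketch of read-only diagnostics, using the map name from this thread; it only prints the commands for review:

```shell
# Read-only diagnostics for a "map in use" flush failure:
#  - dmsetup info: the "Open count" field shows whether the map is held open
#    (a mounted filesystem on mpatha_part2 keeps it open, so 'multipath -f'
#    will always refuse while it is mounted)
#  - fuser -vm: lists processes using the device
#  - multipathd "show maps status": the daemon's own view of the maps
checks='dmsetup info mpatha_part2
fuser -vm /dev/mapper/mpatha_part2
multipathd -k"show maps status"'
# Print the list; run the commands as root on the affected host.
printf '%s\n' "$checks"
```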

-Willem

merrymax264
26-Sep-2014, 06:35
Did you find the solution? I am also getting the active/faulty issue with multipath.

Bharath_Gannu
12-Feb-2015, 13:19
Restarting the multipathd service might help you.

I used the following set of commands and it worked for me.

[root@ajax ~]# multipath -ll mpathey
mpathey (3638a95f225810000c0000000000002e8) dm-7 IBM,2145
size=3.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
`- 6:0:0:4 sdi 8:128 active ready running
[root@ajax ~]# /etc/init.d/multipathd status
multipathd (pid 5368) is running...
[root@ajax ~]# /etc/init.d/multipathd stop
Stopping multipathd daemon: [ OK ]
[root@ajax ~]# /etc/init.d/multipathd start
Starting multipathd daemon: [ OK ]
[root@ajax ~]# multipath -ll mpathey
mpathey (3638a95f225810000c0000000000002e8) dm-7 IBM,2145
size=3.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
`- 6:0:0:4 sdi 8:128 active ready running
[root@ajax ~]# multipath -f mpathey
[root@ajax ~]# multipath -ll mpathey
[root@ajax ~]#

Bharath Gannu