View Full Version : Port 745 required to be open on an nfs server?



x0500hl
16-Aug-2012, 14:22
I recently enabled SuSEfirewall2 on several servers (SLES 10 SP4 on System z) that host DB2 9.5 and closed almost all of the ports. One of the Prod DB2 servers needs to export its DB2 /logs and /backup directories (5 file systems) to one of the Test DB2 servers. This is to save the time it takes to ftp the data to the test server if/when a DB2 recovery is needed (we recover the data to the test server, then the user extracts the needed data and imports it back into the production DB2).

When the firewall rules were enabled on Prod DB2, we found that issuing a 'mount' or 'df' command on the Test DB2 server would lock up the PuTTY session. I determined that ports 111 and 2049 needed to be opened on the Prod DB2 server to allow the test server to access the file systems. After the rules were modified and SuSEfirewall2 was restarted, the 'mount' and 'df' commands would still lock up the PuTTY session. Browsing the firewall log at /var/log/firewall showed that the test server was trying to access port 745. Opening port 745 resolved the problem.

I need to determine, for my PCI documentation, why port 745 needs to be opened. I Googled 'port 745' and all of the pages I looked at said that this is an unassigned port. I tried to track it down on Test DB2 with 'netstat -a' but nothing shows up.

What is also disconcerting is that I have several SLES 11 SP2 servers (also System z) running with SuSEfirewall2 enabled that are nfs servers. Port 745 is not open on these servers and all of the nfs clients are able to access the exported file systems.

Does anyone know what service on the client server is using port 745?

Does anyone know why port 745 doesn't need to be opened on a SLES 11 system but does on a SLES 10 system?


Harley

richlyall
17-Aug-2012, 05:20
While I don't have a firewall on my DB2 servers or my NFS clients (they are all SLES 11 on VMware), 'netstat -a' showed nothing in use on those ports for me either.

Try either rpcinfo or a tcpdump capture to see what is actually using port 745 (you might have to run it while you do the mount), e.g.:
tcpdump -vv -i eth1 'port 745'
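Another way to see which RPC service is sitting behind port 745 is to query the portmapper on the NFS server directly; a sketch (the hostname is a placeholder, substitute your Prod DB2 server):

```shell
# Ask the portmapper (port 111) on the NFS server which ONC RPC
# programs are currently registered, and on which ports.
# 'proddb2' is a placeholder hostname.
rpcinfo -p proddb2

# Typically 'nfs' shows up on 2049, while mountd, status (rpc.statd)
# and nlockmgr appear on dynamically assigned ports - one of those
# is likely your port 745.
```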


cheers
rich

jmozdzen
17-Aug-2012, 12:58
Hi Harley,

don't know where yesterday's reply went, so here it goes again:

NFS is an ONCRPC-based protocol - when starting, the server process gets a (rather random) port from the OS and registers it with the local portmapper/rpcbind. Clients contact the portmapper via port 111 to ask for the port actually used by the server, and then contact that server port directly.

Port 111 is the static portmapper port, and 2049 is the well-known port of nfsd itself. But the auxiliary daemons (mountd, statd, lockd) get dynamic ports, and those port numbers will most likely change with every restart of the NFS services - port 745 was almost certainly one of them.

Due to its dynamic nature, ONCRPC is not very firewall-friendly. Activating port-blocking rules on an NFS server is not a good idea at all.

With regards,
Jens

x0500hl
21-Aug-2012, 14:14
Solved. Thank you to Rich and Jens for the responses.

I found that you CAN specify the ports to be used by the services required by nfsserver. Jens is correct that the process dynamically assigns ports, much like passive FTP. Novell document http://www.novell.com/support/kb/doc.php?id=7000524 explains how to do this for SLES 9, SLES 10, and SLES 11.

I followed the steps for SLES 10 and was able to assign fixed ports and lock down who can access NFS using SuSEfirewall2. The iptables firewall now blocks unauthorized access to the file systems shared via NFS on my DB2 server.
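For anyone who finds this later, the gist of the SLES 10 procedure is to pin the floating NFS helper daemons to fixed ports and then open only those ports in the firewall. A rough sketch (the exact variable names and the port values below are examples/assumptions - follow the Novell KB article for the authoritative steps on your release):

```shell
# /etc/sysconfig/nfs -- pin mountd and statd to fixed ports
# (port numbers here are arbitrary examples):
MOUNTD_PORT="892"
STATD_OPTIONS="-p 662"

# Pin the in-kernel lock manager (lockd) via sysctl:
sysctl -w fs.nfs.nlm_tcpport=4045
sysctl -w fs.nfs.nlm_udpport=4045

# Then allow only 111 (portmapper), 2049 (nfsd) and the pinned ports
# in SuSEfirewall2, e.g. in /etc/sysconfig/SuSEfirewall2:
# FW_SERVICES_EXT_TCP="111 2049 892 662 4045"
# FW_SERVICES_EXT_UDP="111 2049 892 662 4045"
```

After changing these, restart the NFS services and SuSEfirewall2, then verify with 'rpcinfo -p' that the daemons actually registered on the pinned ports.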


Harley