SLES-Other: SUSE 10 SP2 doesn't allow xymon to run on port 23



soopal
21-Apr-2016, 00:04
hi all:
I recently inherited a server running SUSE 10 SP2 after the old sysadmin left the department.
AFAIK there is no firewall or AppArmor running; just an Oracle DB and some other stuff.
The boss asked me to run the xymon client on the server, since we have had a xymon server for some time.
I logged in to the server, uncompressed the tarball, compiled the client (local mode) as a regular user, and then installed it with sudo -s.
So far so good; but then we found that the client wasn't running fine, since something doesn't allow even a localhost connection to port 1984.
After a long talk with the older admins, we found that the system was locked down in an undocumented way (at least to me), and that it sits on a network that filters most TCP/UDP ports (from the xymon server's network the only things open are port 22/ssh and ICMP).

So via yast2 I disabled the xinetd telnet daemon so our xymon client could run on port 23. We are using a reverse SSH tunnel to let the xymon client send its data to the xymon server (https://en.wikibooks.org/wiki/System_Monitoring_with_Xymon/Other_Docs/HOWTO#Monitor_Hobbit_clients_in_a_DMZ_using_reverse_SSH_tunnels).
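For the record, the tunnel is built roughly like this (a sketch only; the user and hostname are placeholders, and the exact ports are whatever you pick):

```shell
# Run from the xymon server side, since only 22/tcp is open towards
# this box. -R tells the remote sshd to listen on 127.0.0.1:23 and
# relay each connection back through the tunnel to the xymon daemon
# listening on port 1984 on the server side. -N opens no remote shell.
ssh -N -R 23:localhost:1984 someuser@monitored-host.example.com
```

Note that the remote sshd will only bind port 23 when you log in as root, since it is a privileged port.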

I modified the xymon client script to use port 23, but as soon as I run the start command for the xymon client I see this error:

2016-04-20 16:46:50.231234 Whoops ! Failed to send message (Connection failed)
2016-04-20 16:46:50.231295 -> Could not connect to Xymon daemon@127.0.0.1:23 (Connection refused)
2016-04-20 16:46:50.231301 -> Recipient '127.0.0.1', timeout 15
2016-04-20 16:46:50.231306 -> 1st line: 'client report.linux linux'


When the xinetd telnet daemon was running, telnet localhost returned the usual connection banner...

So I humbly ask for some help to fix the problem... this is my first SUSE box, and since there are no SELinux or firewall processes running, I'm lost. Here is the process list:

UID PID PPID C STIME TTY TIME CMD
root 1 0 0 2011 ? 02:30:52 init [3]
root 2 1 0 2011 ? 00:00:47 [migration/0]
root 3 1 0 2011 ? 00:00:08 [ksoftirqd/0]
root 4 1 0 2011 ? 00:01:38 [migration/1]
root 5 1 0 2011 ? 00:00:19 [ksoftirqd/1]
root 6 1 0 2011 ? 00:07:20 [migration/2]
root 7 1 0 2011 ? 00:00:00 [ksoftirqd/2]
root 8 1 0 2011 ? 00:02:43 [migration/3]
root 9 1 0 2011 ? 00:00:00 [ksoftirqd/3]
root 10 1 0 2011 ? 00:00:45 [migration/4]
root 11 1 0 2011 ? 00:00:05 [ksoftirqd/4]
root 12 1 0 2011 ? 00:01:14 [migration/5]
root 13 1 0 2011 ? 00:00:00 [ksoftirqd/5]
root 14 1 0 2011 ? 00:03:02 [migration/6]
root 15 1 0 2011 ? 00:00:01 [ksoftirqd/6]
root 16 1 0 2011 ? 00:02:05 [migration/7]
root 17 1 0 2011 ? 00:00:00 [ksoftirqd/7]
root 18 1 0 2011 ? 00:00:00 [events/0]
root 19 1 0 2011 ? 00:00:00 [events/1]
root 20 1 0 2011 ? 00:00:00 [events/2]
root 21 1 0 2011 ? 00:00:00 [events/3]
root 22 1 0 2011 ? 00:00:00 [events/4]
root 23 1 0 2011 ? 00:00:00 [events/5]
root 24 1 0 2011 ? 00:00:00 [events/6]
root 25 1 0 2011 ? 00:00:00 [events/7]
root 26 1 0 2011 ? 00:52:21 [khelper]
root 27 1 0 2011 ? 00:00:00 [kthread]
root 37 27 0 2011 ? 00:00:00 [kblockd/0]
root 38 27 0 2011 ? 00:00:00 [kblockd/1]
root 39 27 0 2011 ? 00:00:00 [kblockd/2]
root 40 27 0 2011 ? 00:00:00 [kblockd/3]
root 41 27 0 2011 ? 00:00:00 [kblockd/4]
root 42 27 0 2011 ? 00:00:00 [kblockd/5]
root 43 27 0 2011 ? 00:00:00 [kblockd/6]
root 44 27 0 2011 ? 00:00:00 [kblockd/7]
root 45 27 0 2011 ? 00:00:00 [kacpid]
root 46 27 0 2011 ? 00:00:00 [kacpi_notify]
root 254 1 0 2011 ? 00:00:27 [kswapd0]
root 255 27 0 2011 ? 00:00:00 [aio/0]
root 256 27 0 2011 ? 00:00:00 [aio/1]
root 257 27 0 2011 ? 00:00:00 [aio/2]
root 258 27 0 2011 ? 00:00:00 [aio/3]
root 259 27 0 2011 ? 00:00:00 [aio/4]
root 260 27 0 2011 ? 00:00:00 [aio/5]
root 261 27 0 2011 ? 00:00:00 [aio/6]
root 262 27 0 2011 ? 00:00:00 [aio/7]
root 482 27 0 2011 ? 00:00:00 [cqueue/0]
root 483 27 0 2011 ? 00:00:00 [cqueue/1]
root 484 27 0 2011 ? 00:00:00 [cqueue/2]
root 485 27 0 2011 ? 00:00:00 [cqueue/3]
root 486 27 0 2011 ? 00:00:00 [cqueue/4]
root 487 27 0 2011 ? 00:00:00 [cqueue/5]
root 488 27 0 2011 ? 00:00:00 [cqueue/6]
root 489 27 0 2011 ? 00:00:00 [cqueue/7]
root 490 27 0 2011 ? 00:00:00 [kseriod]
root 525 27 0 2011 ? 00:00:00 [kpsmoused]
root 981 27 0 2011 ? 00:00:00 [scsi_eh_0]
root 1129 27 0 2011 ? 00:00:00 [fc_sc_wq]
root 1131 27 0 2011 ? 00:00:00 [lpfc_worker_0]
root 1132 27 0 2011 ? 00:00:00 [scsi_eh_1]
root 1133 27 0 2011 ? 00:00:00 [scsi_wq_1]
root 1134 27 0 2011 ? 00:00:00 [fc_wq_1]
root 1135 27 0 2011 ? 00:00:00 [fc_dl_1]
postfix 1205 5180 0 2012 ? 00:00:18 qmgr -l -t fifo -u
root 1615 27 0 2011 ? 00:00:00 [lpfc_worker_1]
root 1616 27 0 2011 ? 00:00:00 [scsi_eh_2]
root 1660 27 0 2011 ? 00:00:00 [scsi_wq_2]
root 1661 27 0 2011 ? 00:00:00 [fc_wq_2]
root 1662 27 0 2011 ? 00:00:00 [fc_dl_2]
root 2223 27 0 2011 ? 00:00:00 [khubd]
root 2311 27 0 2011 ? 00:00:00 [mpp_wq]
root 2313 1 0 2011 ? 00:01:16 [mpp_dcr]
root 2314 1 0 2011 ? 00:01:04 [mppFailback]
root 2315 1 0 2011 ? 00:00:00 [mppPathValidate]
root 2316 1 0 2011 ? 00:00:00 [mppWorker]
root 2317 27 0 2011 ? 00:00:00 [scsi_eh_3]
root 2396 1 0 2011 ? 00:59:45 [kjournald]
root 2457 1 0 2011 ? 00:00:00 /sbin/udevd --daemon
root 3397 1 0 2011 ? 00:00:00 [shpchpd_event]
root 3838 1 0 2011 ? 00:28:50 [kjournald]
root 3840 1 0 2011 ? 00:00:00 [kjournald]
root 3905 27 0 2011 ? 00:00:00 [kauditd]
root 4167 1 0 2011 ? 00:00:00 /sbin/resmgrd
root 4182 1 0 2011 ? 00:00:04 /sbin/syslog-ng
root 4184 1 0 2011 ? 00:00:00 /sbin/acpid
100 4192 1 0 2011 ? 00:02:14 /usr/bin/dbus-daemon --system
root 4195 1 0 2011 ? 00:00:00 /sbin/klogd -c 1 -x -x
root 4252 1 0 2011 ? 00:00:18 /usr/sbin/hald --daemon=yes --retain-privileges
root 4703 4252 0 2011 ? 00:00:00 hald-addon-acpi
oracle 4708 1 0 2015 ? 01:25:43 /home/oracle/app/product/11g/db/bin/tnslsnr LISTENER -inherit
root 4719 27 0 2011 ? 00:00:00 [bond]
nobody 5050 1 0 2011 ? 00:00:00 /sbin/portmap
root 5062 1 0 2011 ? 00:00:00 /sbin/auditd -n
daemon 5086 1 0 2011 ? 00:00:00 /usr/sbin/slpd
root 5125 1 0 2011 ? 00:00:00 /usr/sbin/irqbalance
root 5180 1 0 2011 ? 00:00:05 /usr/lib/postfix/master
root 5322 1 0 2011 ? 00:00:01 /usr/sbin/cron
root 5324 1 0 2011 ? 05:11:03 /usr/sbin/nscd
root 5467 1 0 2011 ? 00:00:00 /usr/sbin/libvirtd -d
root 5475 1 0 2011 ? 00:00:19 /usr/sbin/powersaved -d -f /var/run/acpid.socket -v 3
root 5476 27 0 2011 ? 00:03:19 [ondemand]
root 5502 1 0 2011 tty2 00:00:00 /sbin/mingetty tty2
root 5504 1 0 2011 tty3 00:00:00 /sbin/mingetty tty3
root 5505 1 0 2011 tty4 00:00:00 /sbin/mingetty tty4
root 5506 1 0 2011 tty5 00:00:00 /sbin/mingetty tty5
root 5507 1 0 2011 tty6 00:00:00 /sbin/mingetty tty6
root 5508 1 0 2011 ttyS0 00:00:00 /sbin/agetty -L 9600 ttyS0 vt102
root 5575 1 0 2014 ? 00:00:17 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pid
root 5593 1 0 2011 ? 00:00:00 dbus-daemon --fork --print-pid 8 --print-address 6 --session
oracle 9423 1 0 2011 ? 00:00:00 oracleora11g (LOCAL=NO)
oracle 9434 1 0 2011 ? 00:00:16 oracleora11g (LOCAL=NO)
oracle 9661 1 0 2011 ? 00:00:00 oracleora11g (LOCAL=NO)
oracle 9672 1 0 2011 ? 00:00:00 oracleora11g (LOCAL=NO)
oracle 10714 1 0 2011 ? 00:00:00 oracleora11g (LOCAL=NO)
oracle 11241 1 0 2011 ? 00:00:23 oracleora11g (LOCAL=NO)
root 16246 1 0 15:51 ? 00:00:00 /trabajo/xymon/./bin/xymonlaunch --config=/trabajo/xymon/./etc/clientlaunch.cfg --log=/trabajo/xymon/./logs/clientlaunch.log --pidfile=/trabajo/xymon/./logs/clientlaunch.report.pid
root 16861 1 0 2011 ? 00:00:20 [kjournald]
oracle 17039 1 0 2011 ? 00:00:58 ora_pmon_ora11g
oracle 17043 1 0 2011 ? 00:00:00 ora_vktm_ora11g
oracle 17049 1 0 2011 ? 00:00:00 ora_diag_ora11g
oracle 17053 1 0 2011 ? 00:02:05 ora_dbrm_ora11g
oracle 17057 1 0 2011 ? 00:10:48 ora_psp0_ora11g
oracle 17061 1 0 2011 ? 00:00:03 ora_dia0_ora11g
oracle 17065 1 0 2011 ? 00:00:07 ora_mman_ora11g
oracle 17069 1 0 2011 ? 01:45:57 ora_dbw0_ora11g
oracle 17073 1 0 2011 ? 02:50:54 ora_lgwr_ora11g
oracle 17077 1 0 2011 ? 01:49:58 ora_ckpt_ora11g
oracle 17081 1 0 2011 ? 06:29:19 ora_smon_ora11g
oracle 17085 1 0 2011 ? 00:00:00 ora_reco_ora11g
oracle 17089 1 0 2011 ? 19:58:33 ora_mmon_ora11g
oracle 17093 1 0 2011 ? 00:00:03 ora_mmnl_ora11g
oracle 17097 1 0 2011 ? 00:00:00 ora_d000_ora11g
oracle 17101 1 0 2011 ? 00:00:00 ora_s000_ora11g
oracle 17373 1 0 2011 ? 00:00:03 ora_fbda_ora11g
oracle 17381 1 0 2011 ? 00:00:00 ora_qmnc_ora11g
oracle 17409 1 0 2011 ? 00:00:00 ora_q000_ora11g
oracle 17501 1 0 2011 ? 00:31:10 ora_cjq0_ora11g
oracle 17537 1 0 2011 ? 00:00:45 ora_smco_ora11g
oracle 17651 1 0 2011 ? 00:00:00 oracleora11g (LOCAL=NO)
oracle 17679 1 0 2011 ? 00:00:08 oracleora11g (LOCAL=NO)
oracle 17845 1 0 2015 ? 00:00:12 ora_q005_ora11g
root 18206 1 0 16:12 ? 00:00:01 /usr/lib/YaST2/bin/y2base menu ncurses --nothreads
root 18215 18206 0 16:12 ? 00:00:00 sh -c /sbin/yast mail >&2
root 18216 18215 0 16:12 ? 00:00:00 /bin/bash /sbin/yast mail
root 18260 18216 52 16:12 ? 00:23:37 /usr/lib/YaST2/bin/y2base mail ncurses --nothreads
root 18267 18260 0 16:12 ? 00:00:00 /bin/bash /usr/lib/YaST2/servers_non_y2/ag_initscripts
root 18517 18477 0 16:14 pts/1 00:00:00 /bin/bash
root 18803 1 0 16:16 ? 00:00:01 /usr/lib/YaST2/bin/y2base menu ncurses --nothreads
root 18900 18803 0 16:17 ? 00:00:00 sh -c /sbin/yast SD_Report >&2
root 18901 18900 0 16:17 ? 00:00:00 /bin/bash /sbin/yast SD_Report
root 18945 18901 0 16:17 ? 00:00:00 /usr/lib/YaST2/bin/y2base SD_Report ncurses --nothreads
root 18955 18945 0 16:17 ? 00:00:00 [ag_subdomain] <defunct>
root 18956 18945 0 16:17 ? 00:00:00 /usr/bin/perl /usr/lib/YaST2/servers_non_y2/ag_reports_sched
root 18980 1 0 16:17 ? 00:00:02 /usr/bin/perl -w /usr/sbin/aa-eventd -p /var/run/aa-eventd.pid
root 19110 1 0 16:18 ? 00:00:01 /usr/lib/YaST2/bin/y2base menu ncurses --nothreads
root 19182 27 0 2015 ? 00:00:01 [pdflush]
root 19193 27 0 2015 ? 00:00:00 [pdflush]
root 19706 1 0 10:00 ? 00:00:10 /usr/bin/atop -w /var/log/atop/atop_20160420 30
root 19724 19110 0 16:20 ? 00:00:00 sh -c /sbin/yast SD_EditProfile >&2
root 19725 19724 0 16:20 ? 00:00:00 /bin/bash /sbin/yast SD_EditProfile
root 19769 19725 0 16:20 ? 00:00:00 /usr/lib/YaST2/bin/y2base SD_EditProfile ncurses --nothreads
root 19779 19769 0 16:20 ? 00:00:00 /usr/bin/perl /usr/lib/YaST2/servers_non_y2/ag_subdomain
root 19780 19769 0 16:20 ? 00:00:00 /usr/bin/perl /usr/lib/YaST2/servers_non_y2/ag_subdomain_profiles
ntp 20211 1 0 2014 ? 00:00:00 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -u ntp -i /var/lib/ntp
root 22424 18260 0 16:33 ? 00:00:00 /usr/bin/perl -w /usr/lib/YaST2/servers_non_y2/ag_fetchmailrc
root 22425 18260 0 16:33 ? 00:00:00 /usr/bin/perl -w /usr/lib/YaST2/servers_non_y2/ag_smtp_auth
postfix 22557 5180 0 16:35 ? 00:00:00 pickup -l -t fifo -u
oracle 23729 1 0 16:48 ? 00:00:00 ora_w000_ora11g
root 24381 1 0 16:56 ? 00:00:00 sh -c vmstat 300 2 1>/trabajo/xymon/./tmp/xymon_vmstat.report.24309 2>&1; mv /trabajo/xymon/./tmp/xymon_vmstat.report.24309 /trabajo/xymon/./tmp/xymon_vmstat.report
root 24383 24381 0 16:56 ? 00:00:00 vmstat 300 2
root 24419 18517 0 16:57 pts/1 00:00:00 ps -ef
root 27677 1 0 2015 tty1 00:00:00 /sbin/mingetty --noclear tty1
root 28511 1 0 2012 ? 00:00:00 /usr/sbin/xinetd


Any clue or advice is welcome!

Regards...

PS: yes, SNMP is blocked!

ab
21-Apr-2016, 02:26
SLES 10 SP2; wow. I presume you do not want any comments about really old
software, but if so insert them here.

I do not fully understand what you are trying to do network-wise. It
sounds like you want to run some commands on the target box, and you want
those commands to then connect back to the source/server system via the
reverse SSH tunnel; is that correct? If so, you're basically opening up
127.0.0.1:23 with the SSH connection to that machine, and then any
connection to that socket will connect back to the source side of your SSH
connection, however you configured it.

If that is all correct, I'd just try using a higher port. Ports < 1024
cannot be bound by non-root users for listening, so it's probably not
listening at all when you SSH into the box, unless you're SSHing as 'root'
which generally you should not do.
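As a sketch of that suggestion (placeholder user and host; any free port above 1024 on the remote side avoids the root requirement):

```shell
# Same reverse tunnel, but on an unprivileged remote port: the
# remote sshd can bind 127.0.0.1:1984 without root privileges.
ssh -N -R 1984:localhost:1984 someuser@monitored-host.example.com
```

Then point the xymon client at 127.0.0.1:1984, which is its default port anyway.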

If the assumptions above are incorrect about how the networking should
work, please provide a lot of details on that side of things. Also,
perhaps show networking output from each system so we can better
understand that, along with the SSH command setting up the tunnel so we
can have confirmation about how that works. In summary:

1. Show the SSH client command; describe any non-obvious IPs or ports
referenced therein.
2. Describe, in detail, how data should flow for xymon, the SSH
connection, etc.
3. Show what is listening via the 'ss' command:



/usr/sbin/ss -planeto | grep -e ':23' -e ':22' -e 'other stuff'



--
Good luck.

If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below...

soopal
04-May-2016, 18:55
SLES 10 SP2; wow. I presume you do not want any comments about really old
software, but if so insert them here.
^^^ yeah, I know it's old; for now I'm not allowed to make any upgrades.


I do not fully understand what you are trying to do network-wise. It
sounds like you want to run some commands on the target box, and you want
those commands to then connect back to the source/server system via the
reverse SSH tunnel; is that correct? If so, you're basically opening up
127.0.0.1:23 with the SSH connection to that machine, and then any
connection to that socket will connect back to the source side of your SSH
connection, however you configured it.

^^^^^^ basically this is the issue

If that is all correct, I'd just try using a higher port. Ports < 1024
cannot be bound by non-root users for listening, so it's probably not
listening at all when you SSH into the box, unless you're SSHing as 'root'
which generally you should not do.

^^^^^ I know, but first of all I cannot open or use any other port, since something is blocking that.
My first idea was to describe the problem in general terms,
and then make it clear that something is blocking every attempt to open any port. As far as I have been able to test, that "something" isn't iptables, ipchains, or TCP wrappers.

So I'm asking for help to find out what can block a port, even 127.0.0.1:23.
Via sudo -s I can shut down the telnet daemon (via yast2) to free port 23. But the xymon client refuses to work, since the client's connection to 127.0.0.1 is blocked by "something".

Before shutting down the telnet daemon, I was able to telnet localhost 23. After shutting it down and letting the xymon client run on port 23 instead of 1984, I can't telnet localhost 23. I was thinking of AppArmor, but it appears to be off.
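(One way to double-check that, assuming the standard SLES 10 AppArmor tools are installed:)

```shell
# List loaded AppArmor profiles and whether each is in enforce or
# complain mode; an empty profile list means AppArmor is effectively off.
sudo /usr/sbin/aa-status        # some builds name it apparmor_status
# The SUSE init script reports module status too:
sudo /sbin/rcapparmor status
```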


Of course I'm posting to the community, since my background is mostly from other *nix systems and other Linux distros. Very few of our customers use SUSE, hence my need for help.

Thanks; I hope this clarifies some aspects of the issue and the help needed.

TIA.

ab
04-May-2016, 20:55
Provide the output from the command sent previously. For convenience,
here it is, with some more:



#Listening sockets information, which is good information
#regardless of how a firewall is or is not configured.
sudo /usr/sbin/ss -planeto | grep :23
sudo /usr/sbin/ss -planeto | grep -i xymon #in case above has nothing

#Firewall stuff, which could block access to even localhost, but
#probably does not:
sudo /usr/sbin/iptables -nvL
sudo /usr/sbin/iptables-save



If xymon has any logs, I'd probably post them too, since if it errors
trying to bind TCP 23 then that's notable. This should not happen after
you stop telnetd, since it is running as 'root', but who knows. Maybe
make sure xinetd is also stopped entirely, just in case.


soopal
09-May-2016, 17:00
Dear AB:
here is the output of the commands that you suggested:
ice@report:/trabajo/xymon> sudo /usr/sbin/ss -planeto | grep :23
ice@report:/trabajo/xymon> sudo /usr/sbin/ss -planeto | grep :23
ice@report:/trabajo/xymon> sudo /usr/sbin/ss -planeto | grep -i xymon
ice@report:/trabajo/xymon> sudo /usr/sbin/iptables -nvL
Chain INPUT (policy ACCEPT 24M packets, 2579M bytes)
pkts bytes target prot opt in out source destination
89 5839 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1084

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 22M packets, 2409M bytes)
pkts bytes target prot opt in out source destination
ice@report:/trabajo/xymon> sudo /usr/sbin/iptables-save
# Generated by iptables-save v1.3.5 on Mon May 9 08:48:50 2016
*filter
:INPUT ACCEPT [23971407:2579127083]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [22207883:2409195972]
-A INPUT -p tcp -m tcp --dport 1084 -j ACCEPT
COMMIT
# Completed on Mon May 9 08:48:50 2016

So if I read this right, there is an iptables firewall running!?! I will not lie to you: I ran iptables -L and nothing came up... (I must have missed something along the way!!)
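Reading that output back: all three chain policies are ACCEPT and the only rule is itself an ACCEPT, so this filter table should not be blocking anything by itself. The other netfilter tables and TCP wrappers are still worth a look (standard paths assumed):

```shell
# REDIRECT/DNAT or DROP rules can live outside the filter table:
sudo /usr/sbin/iptables -t nat -nvL
sudo /usr/sbin/iptables -t mangle -nvL
# TCP wrappers: xinetd and some standalone daemons honour these files.
cat /etc/hosts.deny /etc/hosts.allow
```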