OpenWrt Forum Archive

Topic: [SOLVED] dropbear troubleshooting

The content of this topic has been archived on 13 Apr 2018. There are no obvious gaps in this topic, but there may still be some posts missing at the end.

Dear Forum members,

I have hit a problem with dropbear on my openwrt router.
I have set up two dropbear instances like this:

  • one instance listening on all interfaces, with password authentication, root password login, and root login all disabled

  • one instance listening on an internal interface, with password authentication and root password login enabled

I have added a firewall rule to allow access to port 22 of the device from the outside world.
I have set up a set of public keys to allow pubkey authentication.
I also have set up a new user, who can log in, and then sudo to root when necessary.
I have created the user's home and ~/.ssh directories with mode 0700 and ~/.ssh/authorized_keys with mode 0600, and copied the contents of /etc/dropbear/authorized_keys into it.
I also have set up dyndns properly.
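For reference, a minimal sketch of what this setup might look like in UCI (the internal interface name 'lan' and the internal port '2222' are assumptions; option names should be checked against your OpenWrt release):

```
# /etc/config/dropbear (sketch)

# public-facing instance: pubkey only, no root login
config dropbear
	option Port '22'
	option PasswordAuth 'off'
	option RootPasswordAuth 'off'
	option RootLogin 'off'

# internal instance: password auth and root password login allowed
config dropbear
	option Interface 'lan'
	option Port '2222'
	option PasswordAuth 'on'
	option RootPasswordAuth 'on'

# /etc/config/firewall (sketch of the WAN rule)
config rule
	option name 'Allow-SSH-WAN'
	option src 'wan'
	option proto 'tcp'
	option dest_port '22'
	option target 'ACCEPT'
```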

The problem is the following:

When trying to ssh into the router over its dyndns name (on the standard port), the connection is being reset right away.
If I ssh in to the internal IP of the router on the default ssh port, then the connection is established properly.

I tried to debug the problem and did a tcpdump on the external interface of the router, but I only saw the SYN packet arrive, upon which an RST was sent as a reply.
Meanwhile I can see a North American IP (something like 192.99.54.54) trying to brute-force its way into the router, yet it did manage to establish a proper connection to dropbear (only to be refused access), since I could see its attempts in the system log.
My attempts to connect however did not leave any trace in the system logs.
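For reference, the capture was along these lines (a sketch; the WAN device name 'eth1' here is a placeholder for whatever your external interface is called):

```shell
# Watch port 22 on the WAN side, showing only segments with SYN or RST set,
# so the handshake attempt and the reset reply stand out:
tcpdump -ni eth1 'tcp port 22 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'
```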

My questions are:

  • how could I most efficiently debug this problem?

  • did I make a mistake or forget a step?

  • if so, what was it?

Thank you!
János

(Last edited by wowbaggerHU on 9 Jul 2017, 20:19)

Any form of suggestion would be welcome!

wowbaggerHU wrote:

I have set up two dropbear instances like this:
-- one instance listening on all interfaces, with disabled password authentication, and root password login and root login disabled
-- one instance listening on an internal interface, with password authentication enabled, and root password login enabled

I see a potential race condition here, unless you have solved it already.

The UCI config of Dropbear does not forward the actual UCI network name to the Dropbear executable, because Dropbear does not understand such syntax. Instead, it uses the available procd functions to fetch the IP address that is assigned to the interface (or interfaces) bound to the UCI network, and formulates the '-p' command-line parameter of Dropbear appropriately, combining the 'Port' argument from Dropbear's UCI config with the discovered IP address.

The first instance calls Dropbear with just '-p <UCI Port>' to listen on all interfaces at the single port, while the latter jumps through the extra hoop of determining what the local IPs of the interfaces bound to the UCI network are, and passing them as repeated options like '-p <IP #1>:<UCI Port> -p <IP #2>:<UCI Port>' and so on.
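That assembly can be sketched roughly like this (the function and variable names are illustrative, not the actual init-script code; Dropbear accepts multiple '-p [address:]port' options):

```shell
#!/bin/sh
# Hypothetical sketch: build Dropbear's '-p' options from the port in the
# UCI config plus the IPs discovered for the bound interfaces.
build_listen_args() {
    port="$1"; shift
    args=""
    for ip in "$@"; do
        # append one '-p <ip>:<port>' per discovered address
        args="${args:+$args }-p $ip:$port"
    done
    printf '%s\n' "$args"
}

# An internal-only instance bound to two LAN addresses:
build_listen_args 2222 192.168.1.1 10.0.0.1
# -> -p 192.168.1.1:2222 -p 10.0.0.1:2222
```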

The race condition is over who gets to control the 'internal interface'. If the 'internal-only' instance starts up first, the 'global' instance will most likely fail, and vice versa.

In my opinion, the best way to resolve the potential race condition -- and to add a layer of protection for your public-facing Dropbear -- is to move the public-facing port number to something other than the default 22. Try this first, and see if you can use either the WAN-side IP plus the custom port, and/or the DDNS-managed hostname and the custom port to connect to your router.

Also, depending on your DDNS service provider, you might even create what is called a 'web hop' so that if your router's DDNS name is "therouter.mydomain.biz", then an address such as 'ssh.mydomain.biz' redirects to 'therouter.mydomain.biz:<custom port>', so using the non-standard port becomes a breeze.

You can use 'ps' and 'grep' to verify the actual parameters that are used to start the Dropbear instances, and of course verify that they are both actually running.
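For example (the '[d]' in the pattern keeps the grep process itself out of the results; the canned output piped in below is only illustrative, standing in for what 'ps w' prints on the router):

```shell
# On the router you would simply run:
#   ps w | grep '[d]ropbear'
# Deterministic demonstration with a canned 'ps w' listing:
printf '%s\n' \
  ' 1021 root  /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 22' \
  ' 1030 root  /usr/sbin/dropbear -F -P /var/run/dropbear.2.pid -p 192.168.1.1:2222' \
  ' 1101 root  grep [d]ropbear' \
  | grep '[d]ropbear'
# Only the two dropbear instances match; the grep line is excluded.
```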

wowbaggerHU wrote:

My attempts to connect however did not leave any trace in the system logs.

Once you've resolved the race condition, try using the UCI config to start up the two instances, then check their command-line arguments with ps and grep. Shut down the instances, and replicate the start-up of both using two SSH prompts, or GNU screen. Append the '-E' switch so that each instance logs to stderr instead of syslog.
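A sketch of what that looks like on the router (the addresses and ports are from this thread's setup and will differ on your device; run each foreground instance in its own SSH session or screen window):

```shell
# stop the procd-managed instances first
/etc/init.d/dropbear stop

# public-facing instance: all interfaces, password logins (-s) and
# root logins (-w) disabled, foreground (-F), log to stderr (-E)
dropbear -F -E -s -w -p 22

# internal instance: password auth allowed, bound to the LAN address
dropbear -F -E -p 192.168.1.1:2222
```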

Now you should be able to observe the log entries in real time when you attempt connections. As far as I know, there's no way to actually increase the verbosity of logging. If the source code doesn't log, then it doesn't log. :)

Antek wrote:

The first instance calls Dropbear with just '-p <UCI Port>' to listen on all interfaces at the single port, while the latter jumps through the extra hoop of determining what the local IPs of the interfaces bound to the UCI network are, and passing them as repeated options like '-p <IP #1>:<UCI Port> -p <IP #2>:<UCI Port>' and so on.

The race condition is over who gets to control the 'internal interface'. If the 'internal-only' instance starts up first, the 'global' instance will most likely fail, and vice versa.

Thank you Antek for your reply.
My strategy is and has been to put the public-facing "hardened" dropbear instance on port 22, listening on 0.0.0.0, so that an IP change cannot interfere with it in any way.
The second "soft" instance I put on a different port, on an internal statically addressed interface, so that it too can keep running without any chance of an IP change.
This way, I should be able to avoid the race condition you described.

As for the rest of your suggestions for debugging, thank you for putting them down!
I will try to debug the problem the way you described.
My only problem is that I don't have remote access to the router ATM, so I will have to get physical access to it first.

wowbaggerHU wrote:

This way, I should be able to avoid the race condition you described.

Yep, running them on different ports circumvents the issue completely.

It seems that the issue was more like a PEBKAC.
As far as I can see, the ddns settings (the password for afraid.org) were incorrect, and thus the published IP was most likely wrong.
Today, I grabbed my router, took it home where I could spend some more time on it, debugged the ddns issue, and ssh started to work suddenly.
Wow, just wow.
