ERROR Failed to get a response

@derick
Trying to replace the app folder and set up an old VM which was working, and getting the old error: “ERROR Failed to get a response from https://medic:****@41.xxx.xxx.xxx/medic/. Maybe you entered the wrong URL, wrong port or the instance is not started. Please check and try again.”

I have confirmed the containers are up and running, and am trying to figure out the source of the error.
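For reference, the URL in that error can be tested directly with curl (a sketch only; PASSWORD is a placeholder, and -k just skips verification of the self-signed certificate):

    # A healthy CHT instance should answer on /api/info with a small JSON payload
    curl -k https://medic:PASSWORD@41.xxx.xxx.xxx/api/info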

@oyierphil - moving this to its own topic so we can more directly answer it and help others find it in the future if they have the same problem.

You already shared the IP address in a prior post. While I can ping it, both the http and https ports appear to be closed. Has the IP changed, maybe?

➜  ~ ping 41.204.187.172
PING 41.204.187.172 (41.204.187.172) 56(84) bytes of data.
64 bytes from 41.204.187.172: icmp_seq=1 ttl=47 time=381 ms
64 bytes from 41.204.187.172: icmp_seq=2 ttl=47 time=402 ms
^C
--- 41.204.187.172 ping statistics ---
3 packets transmitted, 2 received, 33.3333% packet loss, time 2001ms
rtt min/avg/max/mdev = 380.697/391.479/402.262/10.782 ms
➜  ~ nmap -p 80,443 41.204.187.172
Starting Nmap 7.80 ( https://nmap.org ) at 2022-11-11 13:52 PST
Nmap scan report for 41.204.187.172
Host is up (0.36s latency).

PORT    STATE  SERVICE
80/tcp  closed http
443/tcp closed https

Nmap done: 1 IP address (1 host up) scanned in 1.65 seconds
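If the ports really are closed from outside while the containers are up, it is also worth checking on the VM itself whether anything is listening on 80/443 (a sketch using standard Linux tooling):

    # On the VM: list listening TCP sockets and the processes holding them
    sudo ss -tlnp | grep -E ':(80|443)\s'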

@mrjones
The IP and subdomains didn’t change. I’m preparing this VM for production for our current project, community testing.

I had noted interference between the VMs while doing my test configurations with the configurable hierarchy (you would do config in one VM and see the results in another), so I had configured a firewall with UFW. I later disabled it when cloning over SSH became a long process :blush:
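For anyone checking the same thing, UFW state and rules can be confirmed like this (a sketch; only relevant if UFW is active):

    # Show whether UFW is active and which rules exist
    sudo ufw status verbose
    # If it is active, the ports the CHT serves on need to be allowed explicitly
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp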

I ran some tests and noted that the nginx service and couch2pg are not running.
I also noted the following error: curl: (6) Could not resolve host: https, and finally that port 443 is not open.
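That particular curl error usually means the URL was malformed (for example a missing colon or a stray space after the scheme), so curl treated the literal word https as the hostname. A sketch of what triggers it versus the intended call, using the IP from earlier in this thread as an example (-k only because of the self-signed certificate):

    # Missing colon: curl parses "https" as the host and fails with error (6)
    curl https//41.204.187.172
    # Intended call
    curl -k https://41.204.187.172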

@mrjones
Getting the same response from another VM, our test VM that is currently receiving data. Interestingly, I see app behavior from another VM, which is a puzzle to me. I need to remove the app files and load the correct ones, but I’m stuck.

ERROR Failed to get a response from https://medic:****@41.204.187.164/medic. Maybe you entered the wrong URL, wrong port or the instance is not started? Please check and try again.

The app doesn’t load the contact form for registration as it did previously; it only allows registration of places. base_settings and forms are intact.

Running nmap 41.204.187.164 reports 8080/tcp closed http-proxy.
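To rule out the containers simply not publishing their ports, the port mappings can be checked on the VM (a sketch assuming a Docker-based medic-os install, as discussed elsewhere in this thread; container names will differ per setup):

    # Show running containers and which host ports they publish
    docker ps --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'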

I have removed all UFW rules and disabled UFW, but I’m still stuck. Since I can’t get a response from the VMs, I can’t do any config work until I sort out the source of the error. Anyone with an idea?

@oyierphil - thanks so much for the call earlier today to debug this issue. The error you were seeing, ERROR Failed to get a response, was from when you were running CHT Conf.

After you and I looked around, I was able to find this error in your nginx logs, located at /srv/storage/medic-core/nginx/logs/startup.log in the medic-os container:

==> /srv/storage/medic-core/nginx/logs/startup.log <==
nginx: [emerg] invalid number of arguments in "server_name" directive in /srv/software/medic-core/v2.1.1/x64/etc/nginx/nginx.conf:29
nginx: [emerg] invalid number of arguments in "server_name" directive in /srv/software/medic-core/v2.1.1/x64/etc/nginx/nginx.conf:29
nginx: [emerg] invalid number of arguments in "server_name" directive in /srv/software/medic-core/v2.1.1/x64/etc/nginx/nginx.conf:29
nginx: [emerg] invalid number of arguments in "server_name" directive in /srv/software/medic-core/v2.1.1/x64/etc/nginx/nginx.conf:29
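For anyone who wants to follow the same trail, that log can be read from the host without attaching a shell (a sketch; medic-os here is assumed to be the container name shown by docker ps, so adjust it if yours differs):

    # Tail the nginx startup log inside the medic-os container
    docker exec medic-os tail -n 20 /srv/storage/medic-core/nginx/logs/startup.log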

Looking at the nginx config file /srv/software/medic-core/v2.1.1/x64/etc/nginx/nginx.conf, line 29 looked like this, with server_name having no argument:

    server {
        listen         80;
        server_name  ;
        error_log /srv/storage/medic-core/nginx/logs/error.log;
        location / {
            return 301 https://$host$request_uri;
        }
    }

It was missing the _, which is used as a catch-all. The fix was to add it back so it looks like this:

    server {
        listen         80;
        server_name    _;
        error_log /srv/storage/medic-core/nginx/logs/error.log;
        location / {
            return 301 https://$host$request_uri;
        }
    }

I then restarted nginx inside the container with /boot/svc-restart medic-core nginx.
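Before restarting (or afterwards, to confirm the fix), the config can also be syntax-checked inside the container (a sketch; this assumes the medic-core nginx binary is on the PATH in the container, otherwise use its full path under /srv/software/medic-core/v2.1.1/x64/):

    # Validate the nginx configuration without (re)starting it
    docker exec medic-os nginx -t -c /srv/software/medic-core/v2.1.1/x64/etc/nginx/nginx.conf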

A post was split to a new topic: Can’t access top level hierarchy place