Community Health Toolkit

Prototyping a fully offline CHT Server environment

Overview

In this post we look at setting up an instance of the CHT server that is initially online so it can be configured, but ultimately is deployed fully offline. We’ll be using the information from the newly published Offline CHT Server guide in the CHT docs as well as the existing guide on how to deploy a self-hosted CHT server.

Our environment has the following setup:

  • A router with the IP 192.168.8.1
  • A WiFi access point (AP) (using the one in the router)
  • A CHT server running on Ubuntu 20.04 with a static IP of 192.168.8.2
  • A Pi-hole server with DNS and DHCP services
  • An Android device running Android 10
  • An unbranded install of medic-android 0.8.0

Unless otherwise specified, all commands are run on the Ubuntu server as the root user, from the /root directory. You will end up with a folder that looks like this*:

/root
   /cht-docker-compose-local-host.yml
   /etc-dnsmasq.d
   /etc-pihole
   /etc-lighttpd
   /medic-srv
   /pi-hole-docker-compose.yml
   /tls-certs

* There will be other folders like Documents and Pictures as this is a Desktop install. These can be ignored.

As a reminder, by default, containers do not store your files across reboots:

Docker containers are stateless by design. In order to persist your data when a container restarts you need to specify the volumes that the container can use to store data.

Reverts

If at any point you get stuck and want to start from scratch, you can run these four commands. They will delete all containers, volumes, data directories and the systemd services:

docker-compose -f pi-hole-docker-compose.yml down -v
docker-compose -f cht-docker-compose-local-host.yml down -v
rm -rf pi-hole-docker-compose.yml cht-docker-compose-local-host.yml etc-pihole etc-dnsmasq.d tls-certs etc-lighttpd /etc/systemd/system/*-docker.service
systemctl daemon-reload

Development

This forum post is for development only. It is not meant for a production environment. Please see the note at the top of the Offline Hosting document for more information.

LAN & Server

  1. Given our bare metal server is using upstream DNS and not the Pi-hole, add two entries in the /etc/hosts file to ease testing:

    192.168.8.2     dns.my.local-ip.co
    192.168.8.2     cht.my.local-ip.co
    
  2. Set up a router that has DHCP turned off

  3. Set up an AP

  4. Install Ubuntu on bare metal. I chose Ubuntu desktop (vs server), so the server itself had a GUI I could use to configure settings.

  5. Assign a static IP (I used 192.168.8.2) via the desktop GUI.

  6. Install docker and docker compose on the server

  7. Ensure port 53 is free on the server so that the Pi-hole DNS server can bind to it in the next section
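On Ubuntu desktop installs, systemd-resolved usually occupies port 53 with its stub listener. A sketch of one way to free it (this assumes the default systemd-resolved configuration; adjust to your distribution):

```shell
# Check what, if anything, is currently bound to port 53
sudo ss -tulnp | grep ':53 '

# Disable systemd-resolved's stub listener so Pi-hole can bind port 53
sudo sed -i 's/#DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved
```

After this change the server itself no longer has a local resolver on 127.0.0.53, which is another reason the /etc/hosts entries above are handy.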

Prepare TLS certificates

In this example we’re using the free certificates offered on local-ip.co. We’ll store them in ./tls-certs and share this between all the containers. For your deployment it is assumed you will provide your own certificates, but local-ip’s are free to test with.

  1. Create a local directory and cd into it:

    mkdir tls-certs
    cd tls-certs
    
  2. Copy down the certificates and private key with curl:

    curl -s -o server.pem http://local-ip.co/cert/server.pem
    curl -s -o chain.pem http://local-ip.co/cert/chain.pem
    curl -s -o server.key http://local-ip.co/cert/server.key
    
  3. Prepare them for use by concatenating them as needed:

    cat server.key server.pem > lighttpd.key.and.pem.pem
    cat server.pem chain.pem > server.chained.pem
    
  4. The final result should look like this:

    ./tls-certs/
      chain.pem
      lighttpd.key.and.pem.pem
      server.chained.pem
      server.key
      server.pem
    
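Before wiring these files into the containers, it can be worth sanity-checking them. A quick sketch with openssl (an extra step of mine, not part of the official guide):

```shell
cd tls-certs

# Confirm the certificate covers *.my.local-ip.co and note its expiry date
openssl x509 -in server.pem -noout -subject -dates

# The concatenated lighttpd file should contain both the key and the certificate,
# so it should hold two BEGIN markers
grep -c 'BEGIN' lighttpd.key.and.pem.pem
```

Note the local-ip.co certificates are rotated regularly, so check the expiry date if your offline deployment will run for a while.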

DNS & DHCP

To set up the Pi-hole with a certificate that persists, we’ll use a file based on the example docker compose file for Pi-hole and create mount points for both the TLS certificates and the lighttpd configuration, in addition to the two default mounts.

  1. Create a new directory ./etc-lighttpd

  2. Edit a new file ./etc-lighttpd/external.conf and add this content:

    $HTTP["host"] == "dns.my.local-ip.co" {
      # Ensure the Pi-hole Block Page knows that this is not a blocked domain
      setenv.add-environment = ("fqdn" => "true")
      
      # Enable the SSL engine with a LE cert, only for this specific host
      $SERVER["socket"] == ":8443" {
          ssl.engine = "enable"
          ssl.pemfile = "/etc/tls-certs/lighttpd.key.and.pem.pem"
          ssl.ca-file =  "/etc/tls-certs/chain.pem"
          ssl.honor-cipher-order = "enable"
          ssl.cipher-list = "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
          ssl.use-sslv2 = "disable"
          ssl.use-sslv3 = "disable"
      }
      
      # Redirect HTTP to HTTPS
      $HTTP["scheme"] == "http" {
          $HTTP["host"] =~ ".*" {
            url.redirect = (".*" => "https://dns.my.local-ip.co:8443")
          }
      }
    }
    
  3. In the root directory, create a file called pi-hole-docker-compose.yml with this contents:

    version: "3"
    
    services:
      pihole:
        container_name: pihole
        image: pihole/pihole:latest
        network_mode: "host"
        restart: unless-stopped
        environment:
          TZ: 'America/Chicago'
          # WEBPASSWORD: 'YOUR PASSWORD HERE'
          WEB_PORT: 8081
        volumes:
          - './etc-pihole/:/etc/pihole/'
          - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
          - './tls-certs/:/etc/tls-certs/'
          - './etc-lighttpd/external.conf:/etc/lighttpd/external.conf'
        cap_add:
          - NET_ADMIN
    
  4. Uncomment and set WEBPASSWORD to a strong password

  5. Start the Pi-hole container: docker-compose -f pi-hole-docker-compose.yml up --detach. Pi-hole should be accessible at https://dns.my.local-ip.co:8443/admin with a valid certificate.

  6. Log in to Pi-hole using the WEBPASSWORD you set above.

  7. On the left most menu, go to “Settings” → “DHCP” and turn on DHCP, ensuring “range” and “router” are set to be correct for your LAN. It was 192.168.8.201 - 192.168.8.251 and 192.168.8.1 in my case. Click “Save” on the bottom right.

  8. On the left most menu, go to “Settings” → “DNS” → “Interface listening behavior” and set it to “Listen on all interfaces, permit all origins”. Click “Save” on the bottom right.

  9. On the left most menu, go to “Local DNS” → “DNS Records” and add two entries for your CHT instance and Pi-hole instance. These need to match the CN in your certificate.

    cht.my.local-ip.co 192.168.8.2
    dns.my.local-ip.co 192.168.8.2
    
  10. On the left most menu, go to “Disable” and choose “Indefinitely” so there is no DNS filtering
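With the local DNS records in place, you can verify resolution from the server before going any further (a quick check of mine; 192.168.8.2 is the Pi-hole's IP in this walkthrough):

```shell
# Both names should resolve to the local server,
# not to the public local-ip.co DNS entry
dig +short cht.my.local-ip.co @192.168.8.2
dig +short dns.my.local-ip.co @192.168.8.2
```

If either command returns anything other than 192.168.8.2, re-check the "Local DNS" entries from the step above.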

CHT Server

Following the CHT self-hosted guide and the official docker-compose file, this section provisions two docker containers and then configures them to preserve your data across reboots:

  1. Create a file called cht-docker-compose-local-host.yml with this contents:

    version: '3.7'
    
    services:
      medic-os:
        container_name: medic-os
        image: medicmobile/medic-os:cht-3.9.0-rc.2
        working_dir: /srv
        restart: unless-stopped
        network_mode: host
        volumes:
          - medic-data:/srv
          - './tls-certs/:/etc/tls-certs/'    
        depends_on:
          - haproxy
        environment:
          - DOCKER_NETWORK_NAME=localhost
          - DOCKER_COUCHDB_ADMIN_PASSWORD=$DOCKER_COUCHDB_ADMIN_PASSWORD
    
      haproxy:
        container_name: haproxy
        image: medicmobile/haproxy:rc-1.17
        restart: unless-stopped
        network_mode: host
        volumes:
          - medic-data:/srv    
          - './tls-certs/:/etc/tls-certs/'    
        environment:
          - COUCHDB_HOST=localhost
          - HA_PASSWORD=$DOCKER_COUCHDB_ADMIN_PASSWORD
    
    volumes:
      medic-data:
        name: medic-data
    
  2. Export the DOCKER_COUCHDB_ADMIN_PASSWORD per the instructions

  3. Start the CHT docker instance: docker-compose -f cht-docker-compose-local-host.yml up --detach.
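Steps 2 and 3 together look roughly like this. The openssl call is just an example of generating a random password; use whatever process your deployment prescribes:

```shell
# Example only: generate and export a random CouchDB admin password.
# docker-compose will substitute this into the compose file.
export DOCKER_COUCHDB_ADMIN_PASSWORD="$(openssl rand -hex 16)"

# Bring up the CHT containers in the background
docker-compose -f cht-docker-compose-local-host.yml up --detach
```

Keep a copy of the password somewhere safe; you will need it to log in and to re-create the containers later.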

nginx

CHT uses nginx as its front end web server. These steps follow the CHT Self-hosting guide to install our valid certificate and fix an nginx configuration based on our host networking mode:

  1. Enter the medic-os container with docker exec -it medic-os /bin/bash

  2. Edit the /srv/settings/medic-core/nginx/nginx.conf file and change the two certificate lines to look like this:

    ssl_certificate             /etc/tls-certs/server.chained.pem;
    ssl_certificate_key         /etc/tls-certs/server.key;
    
  3. While still in nginx.conf, change this server line from:

    upstream couchdb {
      server haproxy:5984;
    }
    

    to:

    upstream couchdb {
      server localhost:5984;
    }
    
  4. Restart the web server with /boot/svc-restart medic-core nginx

  5. Check that you have a valid certificate at https://cht.my.local-ip.co
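Besides a browser, you can confirm the certificate nginx presents from the command line. A sketch, assuming openssl is installed on whichever LAN host you test from:

```shell
# Inspect the certificate served on 443; the subject should be *.my.local-ip.co
echo | openssl s_client -connect cht.my.local-ip.co:443 \
    -servername cht.my.local-ip.co 2>/dev/null \
  | openssl x509 -noout -subject -dates
```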

Upgrade

CHT’s docker image has the version of CHT that was current at the time it was made (3.9.0). To upgrade to the latest version:

  1. Log into your CHT instance with user medic and the DOCKER_COUCHDB_ADMIN_PASSWORD password you set above

  2. In the upper right hamburger menu, choose “App Management”

  3. On the left choose “Upgrades”

  4. Find the latest stable version under “Releases”, 3.10.3 at the time of this forum post, and click “Install”. This process may take a few minutes.

  5. When it is done upgrading, click “Reload application”

Install CHT Android application

Now that your CHT server instance is up, you can prepare your Android device by installing the Medic Mobile app. Note that we won’t configure it, as we want to do that entirely offline:

  1. Install the latest version, 0.8.0 as of this writing.

Boot Persistence

We’ll use systemd to have the containers persist across reboots, per the docker docs. Start by bringing down all the containers:

docker-compose -f cht-docker-compose-local-host.yml down
docker-compose -f pi-hole-docker-compose.yml down

Now proceed to create the two services below.

Pi-hole

  1. Create a file /etc/systemd/system/pihole-docker.service with the contents:

    [Unit]
    Description=Start Pi-hole in Docker
    Requires=docker.service
    After=docker.service
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    WorkingDirectory=/root/
    ExecStart=/usr/local/bin/docker-compose -f pi-hole-docker-compose.yml up --detach
    ExecStop=/usr/local/bin/docker-compose -f pi-hole-docker-compose.yml down
    TimeoutStartSec=0
    
    [Install]
    WantedBy=multi-user.target
    
  2. Reload systemd with systemctl daemon-reload and enable and start the service with systemctl enable --now pihole-docker

  3. Ensure the service started by checking the status: systemctl status pihole-docker

CHT

  1. Create a file /etc/systemd/system/cht-docker.service with the contents:

    [Unit]
    Description=Start CHT in Docker
    Requires=docker.service
    After=docker.service
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    WorkingDirectory=/root/
    ExecStart=/usr/local/bin/docker-compose -f cht-docker-compose-local-host.yml up --detach
    ExecStop=/usr/local/bin/docker-compose -f cht-docker-compose-local-host.yml down
    TimeoutStartSec=0
    
    [Install]
    WantedBy=multi-user.target
    
  2. Reload systemd with systemctl daemon-reload and enable and start the service with systemctl enable --now cht-docker

  3. Ensure the service started by checking the status: systemctl status cht-docker

Test

Reboot the bare metal computer and ensure all services restart successfully.
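After the reboot, a couple of commands verify everything came back up (assuming the service names used in the units above):

```shell
# Both oneshot services should be active ("exited" is normal for Type=oneshot)
systemctl status pihole-docker cht-docker --no-pager

# All three containers should be up
docker ps --format '{{.Names}}: {{.Status}}'
```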

Connecting devices

Now that your CHT instance is available via DNS, DHCP and TLS, any device on the network can connect to it without Internet. Disconnect your LAN from the Internet so all devices are fully offline.

Android

  1. Connect to the AP on your LAN

  2. After installing and launching the APK, choose “Custom” for which CHT instance to use

  3. Enter https://cht.my.local-ip.co

Note: Some configurations of Android may not like being connected to an AP with no Internet access.

Desktop Browsers

  1. Connect to the AP on your LAN or via Ethernet

  2. In a browser, go to https://cht.my.local-ip.co.

Note: The CHT does not support the Safari browser on macOS

Summary

If you run docker ps on your server, you should see 3 containers running:

CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS                   PORTS     NAMES
cbc032b8f5c1   pihole/pihole:latest                  "/s6-init"               3 minutes ago   Up 3 minutes (healthy)             pihole
4c600ddcc4c6   medicmobile/medic-os:cht-3.9.0-rc.2   "/bin/bash -l /boot/…"   6 hours ago     Up 6 hours                         medic-os
a69d06736cbb   medicmobile/haproxy:rc-1.17           "/entrypoint.sh -f /…"   6 hours ago     Up 6 hours                         haproxy

Your server should have a static IP of 192.168.8.2 with two DNS entries pointing to it of dns.my.local-ip.co and cht.my.local-ip.co.

The following ports are accessible externally on the server:

  • 8081 - http port for Pi-hole, redirects to https on 8443
  • 8443 - https port for Pi-hole web admin GUI
  • 80 - http port for CHT, redirects to https on 443
  • 443 - https port for CHT application GUI
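Since everything runs with host networking, you can confirm these listeners directly on the server (a quick check using ss, which ships with Ubuntu; DNS on 53 and DHCP on 67 should appear as well):

```shell
# List the listening sockets we expect from Pi-hole, lighttpd, nginx and dnsmasq
sudo ss -tulnp | grep -E ':(53|67|80|443|8081|8443)\b'
```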

Additionally, the original ports for the CHT containers are exposed. This presents a security issue and they should likely be protected by a firewall like ufw, but they are handy for debugging the CHT.
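A minimal ufw sketch, assuming you want only the ports listed above plus DNS and DHCP reachable from the LAN (these exact rules are my assumption, not part of the official guide; review them against your LAN before enabling):

```shell
# Allow the Pi-hole and CHT web ports listed above
sudo ufw allow 80,443,8081,8443/tcp

# Allow DNS (TCP and UDP) and DHCP
sudo ufw allow 53
sudo ufw allow 67/udp

sudo ufw enable
```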

Best of all, services like DNS and DHCP are running locally, so the server can be run entirely offline.

Please feel free to ask any questions on this write-up!


Thanks for the helpful guide. We have set up the offline CHT under a controlled deployment in a few facilities in preparation for a mass rollout in the coming weeks. We however encountered a few scenarios whereby the CHT isn’t accessible as a mobile app. Please see the issue explained here and let me know: CHT APP not accessible via the mobile phone option and remotely using the local dns · Issue #482 · medic/cht-docs · GitHub


@andrineM - thanks for responding to my post! And welcome to the forum - I see this is your first post here :tada:

I know that you’ve opened a CHT issue at the beginning of the year. I wanted to pull in details from that ticket as well as from your docs ticket, where you said:

We have a scenario where the setup and configuration of both pihole and CHT worked ok and we could access the app via the browser on the host server (Where the application is installed). However, any remote access via https://cht.my.local-ip.co did not work including the mobile phone app option.

To this @derick responded:

Do you have a DNS server on your network that resolves cht.my.local-ip.co to an actual host within your network?

@andrineM could you experiment with this and evaluate if running a self-hosted version would be of value to you?

@derick - the local-ip.co service is identical to the nip.io service! It looks like our two great minds were thinking of the same solution :wink: The added benefit of local-ip is that they not only allow you to use local IPs, but also give you the full cert and private key for *.my.local-ip.co for download. DNS and a cert for every host \o/

@andrineM - Continuing @derick’s line of questioning, please do check the following which assumes your instance of Pi-hole is running on the static IP of 192.168.8.2. With these tests we want to confirm:

  • DNS server works
  • DHCP server gives leases with your DNS server

Please confirm:

  • You have no custom entries in any /etc/hosts files. You want all domains to be resolved by the DNS server; having entries here will make only that host work and will not enable any phones to work.
  • From the ubuntu server, you can run dig cht.my.local-ip.co @192.168.8.2 and also dig dns.my.local-ip.co @192.168.8.1 - this will test that Pi-hole is set up to resolve your domains correctly
  • Connect a different desktop or laptop client to the LAN and ensure that it gets 192.168.8.2 as its DNS from the DHCP server
  • Confirm that the android device on the LAN is getting the same DNS from the DHCP server as the ubuntu server and the other desktop client.

Report back your results and we’ll continue on until we solve the problem!


Thanks @mrjones for the response. Kindly allow me to give a walk-through of the setup and configuration we did to help identify any gap.
Disabled DHCP on our host machine and gave it a static IP
Edited the /etc/hosts file to include the two domain names.
Installed the docker versions of Pi-hole and CHT.
Enabled DHCP on Pi-hole and set the IP range to that of our host network.

To answer the questions that you have raised.
From the ubuntu server, you can run dig cht.my.local-ip.co @192.168.8.2 and also dig dns.my.local-ip.co @192.168.8.1
Local (from the machine where both CHT and Pi-hole are running) - we can ping both
Remotely (from another machine within the network) - we are unable to resolve the DNS; it is only accessible via the host IP address.
Connect a different desktop or laptop client to the LAN and ensure that it gets 192.168.8.2 as its DNS from the DHCP server
This isn’t happening.
Confirm that the android device on the LAN is getting the same DNS from the DHCP server as the ubuntu server and the other desktop client.
This isn’t happening.

@andrineM - Great, thanks for doing the tests and for the updates about the status of your project.

First off, please accept my apologies - I typed the wrong IP in my test call! I used .1 at the end, when it should have been .2 for the dig command!

The problem with your setup is certainly that the desktop and phones on the LAN are not able to use the Pi-hole as the DNS server. This means that either DHCP isn’t working correctly or there is another network issue preventing the Pi-hole DNS from working. In order to succeed, both DNS and DHCP need to work.

From the Ubuntu server, please try running the dig command, not the ping command:

dig cht.my.local-ip.co @192.168.8.2 +short

Please note that the DNS server for local-ip.co on the Internet has an entry for cht already, so we need the Pi-hole to override this. This is what it should look like with the correct response from our Pi-hole:

$ dig cht.my.local-ip.co @192.168.8.2 +short
192.168.8.2

This is what it looks like when the Pi-hole is misconfigured and it proxies the request up to the public DNS server for local-ip.co:

$ dig cht.my.local-ip.co @192.168.8.2 +short
65.63.0.0

@mrjones Please see below the results from running the dig command. The static IP assigned is 192.168.0.138:

Please let me know of any additional information that may assist in getting the app accessible on mobile phone.


@andrineM - great, thanks for testing that! If you have another computer on the LAN that can run this same command, it would be great to verify with dig cht.my.local-ip.co @192.168.0.138 +short. Otherwise, it looks like your DNS server is working correctly.

Let’s next test that the DHCP server is set up correctly. First make sure that nmap is installed on your Ubuntu server with sudo apt install nmap. Then run the script it has to check DHCP status: sudo nmap --script broadcast-dhcp-discover.

Here’s what it looks like when I run it:

sudo nmap --script broadcast-dhcp-discover 
Starting Nmap 7.80 ( https://nmap.org ) at 2021-05-11 09:57 PDT
Pre-scan script results:
| broadcast-dhcp-discover: 
|   Response 1 of 1: 
|     IP Offered: 192.168.206.241
|     DHCP Message Type: DHCPOFFER
|     Server Identifier: 192.168.206.190
|     IP Address Lease Time: 1h00m00s
|     Renewal Time Value: 30m00s
|     Rebinding Time Value: 52m30s
|     Subnet Mask: 255.255.255.0
|     Broadcast Address: 192.168.206.255
|     Router: 192.168.206.190
|     Domain Name Server: 192.168.206.190
|_    Vendor Specific Information: ANDROID_METERED

Please paste back all the results!


@mrjones Below is the verification of the dig command from a remote host in the network.

sudo nmap --script broadcast-dhcp-discover Returns:

root@test:~# nmap --script broadcast-dhcp-discover

Starting Nmap 7.01 ( https://nmap.org ) at 2021-05-12 17:49 EAT
Pre-scan script results:
| broadcast-dhcp-discover:
| Response 1 of 1:
| IP Offered: 192.168.0.149
| DHCP Message Type: DHCPOFFER
| Server Identifier: 192.168.0.138
| IP Address Lease Time: 2m00s
| Renewal Time Value: 1m00s
| Rebinding Time Value: 1m45s
| Subnet Mask: 255.255.255.0
| Broadcast Address: 192.168.0.255
| Domain Name Server: 192.168.0.138
| Domain Name: lan
|_ Router: 192.168.0.1
WARNING: No targets were specified, so 0 hosts scanned.
Nmap done: 0 IP addresses (0 hosts up) scanned in 3.69 seconds

@andrineM - great! The output of the second dig command you ran as well as the nmap command both look correct. The DHCP server is sending out the Pi-hole as the DNS, which is what we want.

Let’s confirm the CHT is running correctly. Using the DESKTOP-VS7DR70 host you ran the second dig command on, what do you get when you run curl -I https://cht.my.local-ip.co ?

For me, I see the 200 response from my docker based CHT server. Hopefully you see the same!

$ curl -I https://cht.my.local-ip.co
HTTP/1.1 200 OK
Server: nginx/1.13.6
Date: Wed, 12 May 2021 20:12:50 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 1458
Connection: keep-alive
Vary: Accept-Encoding
Content-Security-Policy: default-src 'none'; font-src 'self'; manifest-src 'self'; connect-src 'self' https://staging.dev.medicmobile.org/_couch/builds/ maps.googleapis.com; child-src 'self'; form-action 'self'; img-src 'self' data: blob: *.openstreetmap.org; media-src 'self' blob:; script-src 'self' 'sha256-6i0jYw/zxQO6q9fIxqI++wftTrPWB3yxt4tQqy6By6k=' 'unsafe-eval'; style-src 'self' 'unsafe-inline'
X-DNS-Prefetch-Control: off
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=15552000; includeSubDomains
X-Download-Options: noopen
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Wed, 12 May 2021 20:12:30 GMT
ETag: W/"5b2-17962359814"
Vary: Accept-Encoding
X-Frame-Options: SAMEORIGIN

While I’m here, I can document some possible failure modes of the server:

DNS failed:

curl: (28) Failed to connect to cht.my.local-ip.co port 443: Connection timed out

DNS worked, but no service is listening on port 443:

curl: (7) Failed to connect to cht.my.local-ip.co port 443: Connection refused

DNS worked, service is listening on 443, but an invalid SSL certificate is installed:

curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

@mrjones The response I am getting is as below:

Running the same curl command from the host machine where the CHT is installed, I get a 200 OK response

@andrineM - ah ha! While this is bad in that the DESKTOP-VS7DR70 host has an error, this is hopefully the same reason that the mobile devices can’t connect. Now we have an easier way to test and debug the problem.

Can you please run this command on both the server and the DESKTOP-VS7DR70 host?

curl -vI https://cht.my.local-ip.co

By adding -v to the command we can confirm that curl is trying to talk to the same IP on both hosts. Further, we can compare the results of the two calls to see if there are any important differences.

Below I’ve mocked up a test where my CHT instance is at 192.168.68.17. You can see the first call is from when I intentionally stopped the docker containers; it gives the same error as you’re seeing, but with more information:

clear;curl -vI https://cht.my.local-ip.co
*   Trying 192.168.68.17:443...
* TCP_NODELAY set     
* connect to 192.168.68.17 port 443 failed: Connection refused
* Failed to connect to cht.my.local-ip.co port 443: Connection refused 
* Closing connection 0                                                
curl: (7) Failed to connect to cht.my.local-ip.co port 443: Connection refused

Then, when I restart the containers, here’s what a successful call looks like:

curl -vI https://cht.my.local-ip.co   
*   Trying 192.168.68.17:443...                                       
* TCP_NODELAY set                                                     
* Connected to cht.my.local-ip.co (192.168.68.17) port 443 (#0)
* ALPN, offering h2                                                   
* ALPN, offering http/1.1                                             
* successfully set certificate verify locations:           
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs                                              
* TLSv1.3 (OUT), TLS handshake, Client hello (1):    
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):                                                                                 
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA
* ALPN, server accepted to use http/1.1
* Server certificate:    
*  subject: CN=*.my.local-ip.co
*  start date: Apr 27 09:42:25 2021 GMT
*  expire date: Jul 26 09:42:25 2021 GMT
*  subjectAltName: host "cht.my.local-ip.co" matched cert's "*.my.local-ip.co"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
> HEAD / HTTP/1.1   
> Host: cht.my.local-ip.co
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< Server: nginx/1.13.6
Server: nginx/1.13.6
< Date: Thu, 13 May 2021 17:01:50 GMT
Date: Thu, 13 May 2021 17:01:50 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 169
Content-Length: 169
< Connection: keep-alive
Connection: keep-alive
< Vary: Accept-Encoding
Vary: Accept-Encoding

< 
* Connection #0 to host cht.my.local-ip.co left intact

@mrjones Here is the response I get when I run the curl from the remote DESKTOP-VS7DR70 host:

*   Trying 64:ff9b::413f:0…
*   Trying 65.63.0.0…
* connect to 64:ff9b::413f:0 port 443 failed: Connection refused
* connect to 65.63.0.0 port 443 failed: Connection refused
* Failed to connect to cht.my.local-ip.co port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to cht.my.local-ip.co port 443: Connection refused

When I run the same command from the host where the CHT is running, I am able to get the response as below:

@andrineM - Very interesting! On DESKTOP-VS7DR70 the dig call is telling you that it resolves cht.my.local-ip.co to the correct IP of 192.168.0.138. But when we use curl on the same machine, it’s actually resolving to 65.63.0.0. This 65 IP is what happens when the DNS request goes out to the Internet, skipping Pi-hole altogether.

Can we go back a step to the nmap call again? Can you run nmap --script broadcast-dhcp-discover on the DESKTOP-VS7DR70 computer? I suspect it is not actually getting the Pi-hole as its DNS server, which is why the curl command fails but the dig command succeeds.

@mrjones Please see the response from re-running the command nmap --script broadcast-dhcp-discover:
I have used a remote machine within the network with IP 192.168.0.137.
I have included the results from our previous commands as well.

@andrineM - Ah yes, this makes sense - thanks. Here’s what I understand of your LAN configuration, please let me know if anything is incorrect. You have a LAN with at least 5 devices:

  • Device 1 at 192.168.0.138: Ubuntu desktop with running docker-compose with Pi-hole and CHT
  • Device 2: DESKTOP-VS7DR70 Ubuntu Desktop
  • Device 3: Mobile Android device
  • Device 4 at 192.168.0.137: Andrine’s linux computer
  • Device 5 at 192.168.0.1: A router with a DHCP and DNS server

We discovered device 5 just now, when you ran the last nmap --script broadcast-dhcp-discover command - this is a big help to know about. A network cannot have two devices running a DHCP server. Step 2 in the LAN & Server section calls this out:

Set up a router that has DHCP turned off

So, to fix it, just turn off DHCP on 192.168.0.1 - hopefully an easy fix! While we’re at it, if you have an AP that is a standalone device, ensure it is not running a DHCP server either.

After you have a LAN with just one DHCP server (the Pi-hole server), re-run the dig, curl and nmap commands on your three hosts (Andrine, DESKTOP-VS7DR70 and server) and they should all match up! You may need to reboot the Andrine and DESKTOP-VS7DR70 computers for the DHCP change to take effect.

Let me know what you find when making the above changes.

@mrjones Thanks so much for this. The one challenge that we may experience is that the majority of our facilities have a replica of the setup you have described above.
Another thing I have noticed is that the routers have Dynamic DHCP as default, with no option to disable it (turn on/off at router level).
Please have a look at the router settings we have.

@andrineM - yes, that does sound challenging if you can’t turn off DHCP. Maybe you have an option to unplug the router from the LAN? For my setup, the router/Internet was just a way to get apt and docker-compose updates while I was doing setup - it is not needed for day-to-day operation.

From this screenshot it looks like you’re using a TP-Link M7200, which is a 4G hotspot type device. That may mean all devices on the LAN are wireless, and so you do not have, for example, a standalone Ethernet switch and standalone AP. If so, you’ll have to figure out a solution other than using the TP-Link M7200.

Best of luck and please do report back if you have more questions!

@mrjones Thanks so much for the support this far. Kindly allow me to give my feedback upon disabling DHCP on the router. I was successfully able to curl the CHT endpoint and could access the application remotely in both the desktop browser and the mobile phone browser. I however wasn’t able to use the custom URL option in the app, as I got the error: unable to contact server.

Below please see the result when using the phone browser and when using the app’s mobile phone option. Let me know what could be the cause of this.

@andrineM - This one took some thinking to solve - and I hope I have indeed solved it. Let’s find out!

I think you have a space in your URL. I was able to reproduce your exact problem, as well as solve it, as seen in these side-by-side screenshots:

On the left you see what looks to be a valid URL, but it has a space in it. On the right is the unbranded app working with the “same” URL, but with no spaces. In my testing, I wasn’t able to reproduce the problem in the mobile browser, as it always trimmed the spaces before submitting. I’ve filed a ticket to get this fixed in a future version of Medic’s unbranded app - thanks for pointing out the issue!

Please try the URL again in the unbranded app, being sure there are no spaces before or after the URL, and let us know how it works! As well, please be sure you’re testing with the latest version of the unbranded app, v0.8.0.

@mrjones I have confirmed I am able to access it from NEON_RAY (Android 10) using the custom URL. Still unsuccessful on the Samsung A71 and Samsung A30, both on Android 10 and ensuring no spaces before and after.