Setting up the CHT development build

hello @diana
I would like some clarification on the credentials used to access the development build of your CHT application talked about here, in the browser.

I have https://192-168-110-70.my.local-ip.co/ giving me the login UI

@cliff - the initial credentials should be set based on the environment variables passed into Docker. If you copied and pasted from that page (-e COUCHDB_USER=myadminuser -e COUCHDB_PASSWORD=myadminpass), then they would be myadminuser and myadminpass.
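
For reference, here is a sketch of how those variables typically end up on the docker run command line. The image name and port mappings below are placeholders rather than the exact values from the docs; only the -e flags are the point:

$ # <cht-docker-image>, the ports, and the container name are placeholders for your setup
$ docker run -d -p 80:80 -p 443:443 \
    -e COUCHDB_USER=myadminuser \
    -e COUCHDB_PASSWORD=myadminpass \
    --name medic-os <cht-docker-image>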

Let us know if that worked!

@mrjones I tried those credentials and it's responding with Unexpected error while logging in. Please try again.

If the credentials were incorrect, the message you would see would be Incorrect user name or password. Please try again. (this is true for recent versions; which version have you deployed?).
Can you please look up the medic-api logs (details about how to get api logs) and find the error there? The problem might be obvious once you see the stack trace, but if it's not, please feel free to share it here.
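
If you are on the Docker (medic-os) setup, that usually means tailing the API log inside the container. The container name and log path below are assumptions about a typical medic-os layout, so adjust them to your deployment:

$ # container name and log path are assumptions; the find command locates the real log if the path differs
$ docker exec medic-os tail -f /srv/storage/medic-api/logs/medic-api.log
$ docker exec medic-os find /srv -name '*.log' 2>/dev/null | grep -i api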

thanks @diana
I have "version": "3.14.0" for medic in the package.json file

I am running the CHT API via node rather than the Docker setup. When I look at the node API logs while logging in, I see:

2022-01-07 14:01:59 DEBUG: Checking for a configured outgoing message service 
2022-01-07 14:02:59 DEBUG: Checking for a configured outgoing message service 
2022-01-07 14:03:59 DEBUG: Checking for a configured outgoing message service 
2022-01-07 14:04:59 DEBUG: Checking for a configured outgoing message service 
2022-01-07 14:05:59 DEBUG: Checking for a configured outgoing message service 
REQ 9e1765d5-0b80-44bc-b804-1f6625845e09 ::1 - GET /medic/_design/medic-client/_view/data_records_by_type?group=true HTTP/1.1
REQ ec8a3c0b-298d-4d64-9656-d885edbd6afe ::1 - GET /medic-user-myadminuser-meta/_design/medic-user/_view/read?group=true HTTP/1.1
RES 9e1765d5-0b80-44bc-b804-1f6625845e09 ::1 - GET /medic/_design/medic-client/_view/data_records_by_type?group=true HTTP/1.1 200 - 18.118 ms
RES ec8a3c0b-298d-4d64-9656-d885edbd6afe ::1 - GET /medic-user-myadminuser-meta/_design/medic-user/_view/read?group=true HTTP/1.1 200 - 17.953 ms
2022-01-07 14:06:59 DEBUG: Checking for a configured outgoing message service

hi @cliff

In the logs you shared, there is no indication of any login attempt (it actually looks like myadminuser is already logged in).
Can you make sure that the request is making it to the server? Checking the browser logs while trying to log in could help you figure this out.
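
One way to check outside the browser is to hit the login endpoint directly and look at the status code returned. The /medic/login path and the JSON field names below are assumptions about the API, so adjust them if your routes differ:

$ # path and payload shape are assumptions; -i prints the response status and headers
$ curl -k -i https://192-168-110-70.my.local-ip.co/medic/login \
    -H 'Content-Type: application/json' \
    -d '{"user":"myadminuser","password":"myadminpass"}'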


hello @mrjones @diana

When obtaining a TLS certificate for the CHT Docker instance, the follow-up curl check fails with curl: (28) Failed to connect to 192-168-110-70.my.local-ip.co port 443: Connection timed out. Could this be an issue with the local-ip.co HTTPS reverse proxy being used?

cliff@cliff-Latitude-5580:~/Documents/project/medic/cht-core$ ./scripts/add-local-ip-certs-to-docker.sh
Debug: Service 'medic-core/nginx' exited with status 143
Info: Service 'medic-core/nginx' restarted successfully
Success: Finished restarting services in package 'medic-core'

If no errors output above, certificates successfully installed.


cliff@cliff-Latitude-5580:~/Documents/project/medic/cht-core$ curl -I https://192-168-110-70.my.local-ip.co|egrep 'SSL|HTTP|Server'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:02:11 --:--:--     0
curl: (28) Failed to connect to 192-168-110-70.my.local-ip.co port 443: Connection timed out
cliff@cliff-Latitude-5580:~/Documents/project/medic/cht-core$

@cliff - I believe you have successfully obtained the TLS certificate as there are indeed no errors above the “If no errors output above, certificates successfully installed.” line.

Further, according to the curl docs, error 28 is:

Operation timeout. The specified time-out period was reached according to the conditions.

I wonder if you have the correct IP address for where the CHT is hosted?
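
A quick way to check on the host itself, with standard Linux commands:

$ ip -4 addr show    # lists the IPv4 address on each interface
$ hostname -I        # prints all addresses assigned to the host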

I thought maybe your docker-based CHT instance hadn't correctly booted, but when I test I get a "Failed to Connect" as expected. Here I install the certificate on a new install:

$ ./scripts/add-local-ip-certs-to-docker.sh 
Debug: Service 'medic-core/nginx' exited with status 143
Info: Service 'medic-core/nginx' restarted successfully
Success: Finished restarting services in package 'medic-core'

If no errors output above, downloaded fresh local-ip.co certificate.

And then confirm there’s a good connection:

$  curl -I https://192-168-68-26.my.local-ip.co|egrep 'SSL|HTTP|Server'                                                                                  
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  1458    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
HTTP/1.1 200 OK
Server: nginx/1.13.6

If I stop the containers:

$ docker stop medic-os haproxy
medic-os
haproxy

Curl can indeed not connect - note error code 7:

$ curl -I https://192-168-68-26.my.local-ip.co|egrep 'SSL|HTTP|Server'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to 192-168-68-26.my.local-ip.co port 443: Connection refused

As well, checking now 5+ hours later, DNS is working for the address you specified:

dig 192-168-110-70.my.local-ip.co +short @9.9.9.9
192.168.110.70

Can you confirm the IP of your machine is indeed 192.168.110.70? I’m guessing this is wrong.

thanks @mrjones for your response

when I run ip a in the terminal I get

cliff@cliff-Latitude-5580:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 10:65:30:41:c2:72 brd ff:ff:ff:ff:ff:ff
3: wlp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 14:4f:8a:29:bb:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.235.70/24 brd 192.168.235.255 scope global dynamic noprefixroute wlp1s0
       valid_lft 2476sec preferred_lft 2476sec
    inet6 fe80::1d71:4cd4:9fd0:9a8e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:3f:4e:90:a5 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
33: br-825be516885a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:83:ac:d2:eb brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.1/16 brd 172.22.255.255 scope global br-825be516885a
       valid_lft forever preferred_lft forever
    inet6 fe80::42:83ff:feac:d2eb/64 scope link 
       valid_lft forever preferred_lft forever
35: vethf8fb2da@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-825be516885a state UP group default 
    link/ether 12:8f:85:11:8f:bb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::108f:85ff:fe11:8fbb/64 scope link 
       valid_lft forever preferred_lft forever
37: vethe0612c2@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-825be516885a state UP group default 
    link/ether f6:ae:03:4c:14:84 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::f4ae:3ff:fe4c:1484/64 scope link 
       valid_lft forever preferred_lft forever
cliff@cliff-Latitude-5580:~$  systemd-resolve --status | grep Current
      Current Scopes: none
      Current Scopes: DNS           
  Current DNS Server: 192.168.235.93
      Current Scopes: none
      Current Scopes: none
      Current Scopes: none
      Current Scopes: none
cliff@cliff-Latitude-5580:~$ ip r
default via 192.168.235.93 dev wlp1s0 proto dhcp metric 600 
169.254.0.0/16 dev wlp1s0 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.22.0.0/16 dev br-825be516885a proto kernel scope link src 172.22.0.1 
192.168.235.0/24 dev wlp1s0 proto kernel scope link src 192.168.235.70 metric 600 
cliff@cliff-Latitude-5580:~$

inet 192.168.235.70/24 is listed under interface number 3 (wlp1s0)

@cliff - I’m not sure if you’re on a different network now and that’s why it changed, but per your (super helpful \o/) ip a testing, your IP is 192.168.235.70 instead of 192.168.110.70. Can you test with https://192-168-235-70.my.local-ip.co and let us know if that is successful?
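
A quick check from the terminal, mirroring the earlier test, before trying the browser:

$ curl -I https://192-168-235-70.my.local-ip.co | egrep 'SSL|HTTP|Server'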

@mrjones with https://192-168-235-70.my.local-ip.co/ I am able to get the medic app

@cliff - wonderful! Thanks for letting us know you were successful.


@mrjones thanks for this, your responses are helping me learn more about CHT

It's a bit funny about the change in networks, because with 192.168.110.70 I was still able to access the app but not able to log in with medic/password, whereas with https://192-168-235-70.my.local-ip.co/
I am able to access the app and log in remotely…
Any thoughts why?

20 posts were split to a new topic: Connecting sms gateway to local CHT instance