Problems - CHT Local Setup

I am having problems setting up either a blank CHT instance or the CHT Maternal & Newborn Health Reference application on my local machine. I am hoping it's something trivial that I might have overlooked.

Describe the Bug
After installing all of the required resources and following all of the implementation steps, I cannot access the application in the browser (http://localhost).

How To Reproduce The Issue
Here are the exact steps I took:

  1. Install Node, npm
  2. Install Docker for Ubuntu
  3. Install medic-conf
  4. Install the core framework
    – downloaded the docker-compose file from the repo and ran docker-compose up
  5. Navigate to localhost on Google Chrome
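Before step 5, I also made sure all the tools from steps 1–4 were on my PATH with a quick check like this (a generic sketch of my own, not something from the CHT docs):

```shell
#!/bin/sh
# Sanity-check that the tools from steps 1-4 are installed and on PATH.
# Generic sketch; not part of the CHT docs.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

check node
check npm
check docker
check docker-compose
check medic-conf
```

All five reported "ok" on my machine.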

Expected Behavior
I expected to reach the login page, which would confirm that I am on the right track, but instead I got a 404 Not Found page.

Logs
No logs captured yet; see the follow-up posts below.

Environment

  • Instance: localhost
  • Browser: Firefox, Google Chrome
  • Client platform: Linux - Ubuntu 20.04
  • Docker version 20.10.4, docker-compose version 1.25.0
  • App: webapp
  • Version: 3.7

Additional Context
I tried the exact same steps on a Windows 10 machine with WSL2 installed, and both the blank local setup and the setup with test data worked.

Thanks in advance
Kristopher Kivutha.


@Kristopher_Kivutha - Sorry you’re having trouble setting up your Docker instance of the CHT!

Can you specify the URL where you’re following the steps to set up your local instance? Is it these steps from our docs? If yes, then please provide any logs/errors after you run the docker-compose call.

As well, after you run docker-compose, in another terminal can you paste the output of docker ps -a here too? That will list all the active containers as well as any recently crashed ones.

Thanks!

Thank you @mrjones for the quick response.

I am actually using the steps included in the CHT Local Environment Setup Tutorial. I will use the steps provided in your docs and then provide the list of containers running locally.

I still get the same error when I try and navigate to localhost - 404 Not Found.

List of Containers

CONTAINER ID   IMAGE                                 COMMAND                  CREATED       STATUS                   PORTS                                      NAMES
7ab7bed30eaa   medicmobile/medic-os:cht-3.9.0-rc.2   "/bin/bash -l /boot/…"   3 hours ago   Up 4 minutes             0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   medic-os
9ef85374e63f   medicmobile/haproxy:rc-1.17           "/entrypoint.sh -f /…"   3 hours ago   Up 4 minutes                                                        haproxy
659475595e38   hello-world                           "/hello"                 5 hours ago   Exited (0) 5 hours ago                                              practical_shaw

Open Ports

systemd-r  607 systemd-resolve   13u  IPv4  27123      0t0  TCP 127.0.0.53:53 (LISTEN)
cupsd      648            root    6u  IPv6  30203      0t0  TCP [::1]:631 (LISTEN)
cupsd      648            root    7u  IPv4  30204      0t0  TCP 127.0.0.1:631 (LISTEN)
docker-pr 5117            root    4u  IPv4  64901      0t0  TCP *:443 (LISTEN)
docker-pr 5131            root    4u  IPv4  68808      0t0  TCP *:80 (LISTEN)

Thanks for the report! That all looks correct. Specifically, you have two containers running, medic-os and haproxy, and medic-os is listening on the right ports. When I run the same docker command on my local dev instance, I get the same output. :+1:

Let’s see what’s going on with the server. Can you please send along:

  1. The output of both curl -I http://localhost and curl -kI https://localhost/? I want to confirm what server is running there and what info it’s sending. The first command should not actually send a 404, but a 301 to redirect you to the https URL. The second should send you the HTML for the login form for the CHT. If you do see this, try going to https://localhost (note the “s” in https!). Either way, report back what you see.
  2. Any errors from the docker-compose. This call may show a lot of log data, so we’re looking for anything that shows an error. Or, just go crazy and post lots of log data, better too much than too little :wink:
  3. Check that CouchDB is up and running. Ideally, from your Ubuntu terminal this should work: curl -k https://admin:pass@localhost/api/v1/monitoring. However, if it doesn’t, try getting a shell on your container to test. Using the ID above from docker ps, we’ll use 7ab7bed30eaa. If you’ve since restarted/destroyed the containers, run docker ps to get the new ID for medic-os.
    1. docker exec -it 7ab7bed30eaa sh
    2. curl -k https://admin:pass@localhost/api/v1/monitoring

If none of this works, we can next try deleting your containers and images and re-running docker-compose up, but let’s see what the above yields first.
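Side note: the curl checks above mostly come down to comparing HTTP status lines, so if you end up re-running them a lot, a tiny helper like this can make that scriptable (a hypothetical convenience, not part of the CHT tooling):

```shell
# Extract the HTTP status code from `curl -sI <url>` style header output.
# Hypothetical helper, not part of the CHT tooling.
status_of() {
  awk 'NR==1 { print $2; exit }'
}

# With canned output; a real run would pipe: curl -sI http://localhost | status_of
printf 'HTTP/1.1 301 Moved Permanently\r\nLocation: https://localhost/\r\n' | status_of
# prints: 301
```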

Good morning! Thank you for the quick reply @mrjones

1. Results - Curl Commands on Localhost

The first command succeeds and returns a 301; however, the second command returns a 404, the same 404 Not Found page I see in the browser.

ngumbau@ngumbau:~/Shina$ curl -I http://localhost
HTTP/1.1 301 Moved Permanently
Server: nginx/1.13.6
Date: Sat, 27 Feb 2021 05:43:45 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://localhost/

ngumbau@ngumbau:~/Shina$ curl -kI https://localhost/?
HTTP/1.1 404 Not Found
Server: nginx/1.13.6
Date: Sat, 27 Feb 2021 05:44:21 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Vary: Accept-Encoding

2. Results - Docker Compose

There is only one error listed when running docker-compose.

medic-os    | mesg: ttyname failed: Inappropriate ioctl for device

3. Results - CouchDB

When I test from the terminal, CouchDB is not up and running.

ngumbau@ngumbau:~/Shina$ curl -k https://admin:pass@localhost/api/v1/monitoring
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.13.6</center>
</body>
</html>

Executing the same command through the container’s shell yields the same result.

ngumbau@ngumbau:~/Shina$ sudo docker exec -it 7ab7bed30eaa sh
sh-4.3# curl -k https://admin:pass@localhost/api/v1/monitoring
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.13.6</center>
</body>
</html>
sh-4.3# 

Lastly, I deleted all containers and re-ran docker-compose up, and I am still getting the same result.

Here is everything from running docker-compose; I hope it helps. I even went as far as re-installing Ubuntu 20.04, but unfortunately I still get the same result.

Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating haproxy ... done
Creating medic-os ... done
Attaching to haproxy, medic-os
medic-os    | mesg: ttyname failed: Inappropriate ioctl for device
medic-os    | [2021/02/27 11:13:36] Info: Setting up software...
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'horticulturalist/sudoers'
haproxy     | Starting enhanced syslogd: rsyslogd.
haproxy     | Feb 27 10:54:42 2ee468373f44 haproxy[25]: 172.18.0.3,200,GET,//_active_tasks,-,horticulturalist,'-',256,4,3,'-'
haproxy     | Feb 27 10:54:42 2ee468373f44 haproxy[25]: 172.18.0.3,200,GET,/medic/horti-upgrade?,-,horticulturalist,'-',1006,2,707,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Stopping frontend http-in in 0 ms.
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Stopping frontend http-in2 in 0 ms.
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Stopping backend couch-backend in 0 ms.
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Stopping backend couch-backend2 in 0 ms.
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Proxy http-in stopped (FE: 4 conns, BE: 0 conns).
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Proxy http-in2 stopped (FE: 0 conns, BE: 0 conns).
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Proxy couch-backend stopped (FE: 0 conns, BE: 4 conns).
haproxy     | Feb 27 11:08:46 2ee468373f44 haproxy[25]: Proxy couch-backend2 stopped (FE: 0 conns, BE: 0 conns).
haproxy     | # Setting `log` here with the address of 127.0.0.1 will have the effect
haproxy     | # of haproxy sending the udp log messages to its own rsyslog instance
haproxy     | # (which sits at `127.0.0.1`) at the `local0` facility including all
haproxy     | # logs that have a priority greater or equal to the specified log level
haproxy     | # log 127.0.0.1 local0 warning
haproxy     | global
haproxy     |   maxconn 4096
haproxy     |   lua-load /usr/local/etc/haproxy/parse_basic.lua
haproxy     |   lua-load /usr/local/etc/haproxy/parse_cookie.lua
haproxy     |   lua-load /usr/local/etc/haproxy/replace_password.lua
haproxy     |   log /dev/log len 65535 local2 info
haproxy     | 
haproxy     | defaults
haproxy     |   mode http
haproxy     |   log global
haproxy     |   option dontlognull
haproxy     |   option http-ignore-probes
haproxy     |   timeout client 150000
haproxy     |   timeout server 3600000
haproxy     |   timeout connect 15000
haproxy     |   stats enable
haproxy     |   stats refresh 30s
haproxy     |   stats auth admin:password
haproxy     |   stats uri /haproxy?stats
haproxy     | 
haproxy     | frontend http-in
haproxy     |   bind  *:5984
haproxy     |   acl has_user req.hdr(x-medic-user) -m found
haproxy     |   acl has_cookie req.hdr(cookie) -m found
haproxy     |   acl has_basic_auth req.hdr(authorization) -m found
haproxy     |   declare capture request len 400000
haproxy     |   http-request set-header x-medic-user %[lua.parseBasic] if has_basic_auth
haproxy     |   http-request set-header x-medic-user %[lua.parseCookie] if !has_basic_auth !has_user has_cookie
haproxy     |   http-request capture req.body id 0 # capture.req.hdr(0)
haproxy     |   http-request capture req.hdr(x-medic-service) len 200 # capture.req.hdr(1)
haproxy     |   http-request capture req.hdr(x-medic-user) len 200 # capture.req.hdr(2)
haproxy     |   http-request capture req.hdr(user-agent) len 600 # capture.req.hdr(3)
haproxy     |   capture response header Content-Length len 10 # capture.res.hdr(0)
haproxy     |   log-format "%ci,%ST,%[capture.req.method],%[capture.req.uri],%[capture.req.hdr(1)],%[capture.req.hdr(2)],'%[capture.req.hdr(0),lua.replacePassword]',%B,%Tr,%[capture.res.hdr(0)],'%[capture.req.hdr(3)]'"
haproxy     |   default_backend couch-backend
haproxy     | 
haproxy     | frontend http-in2
haproxy     |   bind  *:5986
haproxy     |   default_backend couch-backend2
haproxy     | 
haproxy     | backend couch-backend
haproxy     |   balance roundrobin
haproxy     |   server couchdb1 medic-os:5985
haproxy     | 
haproxy     | backend couch-backend2
haproxy     |   balance roundrobin
haproxy     |   server couchdb1 medic-os:5987
haproxy     | [alert] 057/111335 (1) : parseBasic loaded
haproxy     | [alert] 057/111335 (1) : parseCookie loaded
haproxy     | [alert] 057/111335 (1) : replacePassword loaded
haproxy     | Feb 27 11:13:35 a32a7d711e84 haproxy[1]: Proxy http-in started.
haproxy     | Feb 27 11:13:35 a32a7d711e84 haproxy[1]: Proxy http-in2 started.
haproxy     | Feb 27 11:13:35 a32a7d711e84 haproxy[1]: Proxy couch-backend started.
haproxy     | Feb 27 11:13:35 a32a7d711e84 haproxy[1]: Proxy couch-backend2 started.
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'horticulturalist/users'
medic-os    | [2021/02/27 11:13:36] Info: Setting up software (14% complete)...
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-api/link-logs'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-api/logrotate'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-api/users'
medic-os    | [2021/02/27 11:13:36] Info: Setting up software (28% complete)...
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/ldconfig'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/link-logs'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/logrotate'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/nginx'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/nginx-ssl'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/profile'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/ssh-authorized-keys'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/ssh-keygen'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-core/usb-modeswitch'
medic-os    | [2021/02/27 11:13:36] Info: Setting up software (42% complete)...
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-couch2pg/link-logs'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-couch2pg/logrotate'
medic-os    | [2021/02/27 11:13:36] Info: Setting up software (57% complete)...
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-rdbms/ldconfig'
medic-os    | [2021/02/27 11:13:36] Info: Running setup task 'medic-rdbms/link-logs'
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'medic-rdbms/reconfigure'
medic-os    | [2021/02/27 11:13:37] Info: Setting up software (71% complete)...
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'medic-sentinel/link-logs'
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'medic-sentinel/logrotate'
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'medic-sentinel/users'
medic-os    | [2021/02/27 11:13:37] Info: Setting up software (85% complete)...
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'system-services/home-directories'
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'system-services/link-logs'
medic-os    | [2021/02/27 11:13:37] Info: Running setup task 'system-services/logrotate'
medic-os    | [2021/02/27 11:13:37] Info: Setting up software (100% complete)...
medic-os    | [2021/02/27 11:13:37] Info: Starting services...
medic-os    | [2021/02/27 11:13:37] Info: Service 'horticulturalist/horticulturalist' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Horticulturalist is now bootstrapping
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-api/medic-api' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-core/couchdb' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-core/nginx' started successfully
medic-os    | [2021/02/27 11:13:37] Info: CouchDB is already configured
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-core/openssh' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-couch2pg/medic-couch2pg' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-rdbms/postgresql' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'medic-sentinel/medic-sentinel' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'system-services/cron' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Service 'system-services/syslog' started successfully
medic-os    | [2021/02/27 11:13:37] Info: Synchronizing disks...
haproxy     | Feb 27 11:13:38 a32a7d711e84 haproxy[26]: 172.18.0.3,200,GET,/medic/,-,horticulturalist,'-',859,58,604,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Feb 27 11:13:38 a32a7d711e84 haproxy[26]: 172.18.0.3,200,GET,/medic/,-,horticulturalist,'-',858,2,603,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Feb 27 11:13:38 a32a7d711e84 haproxy[26]: 172.18.0.3,200,GET,//_active_tasks,-,horticulturalist,'-',256,4,3,'-'
haproxy     | Feb 27 11:13:38 a32a7d711e84 haproxy[26]: 172.18.0.3,200,GET,/medic/horti-upgrade?,-,horticulturalist,'-',1006,2,707,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
medic-os    | [2021/02/27 11:13:47] Info: System started successfully
medic-os    | [2021/02/27 11:13:47] Info: Starting log streaming

@Kristopher_Kivutha - The problem is for sure that CouchDB isn’t running inside your medic-os container. However, why this is happening is a mystery. Let’s solve it!
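One quick way to see this from the compose output is to grep it for the CouchDB lifecycle markers; the strings here are taken from the medic-os log lines in this thread (e.g. your log showed "CouchDB is already configured" with no first-run setup). This is just a hypothetical convenience, not CHT tooling:

```shell
# Scan saved docker-compose output for CouchDB lifecycle markers.
# Hypothetical helper; the strings are taken from the medic-os log output.
check_couch_markers() {
  grep -E 'Setting up CouchDB|CouchDB (is already configured|first-run setup successful)' "$1"
}

# Usage: docker-compose up 2>&1 | tee compose.log
# then, in another terminal: check_couch_markers compose.log
```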

Just to double-confirm, your steps to reproduce are as follows:

  1. Start with a clean Ubuntu 20.04 install
  2. Install docker per the “install using repository” steps
  3. Install docker compose via these steps.
  4. Get the yaml file to use with docker-compose with this curl call: curl -o test-docs-medic-os-compose.yml https://raw.githubusercontent.com/medic/cht-core/master/docker-compose.yml
  5. docker-compose -f test-docs-medic-os-compose.yml up

Expected: CHT is on https://localhost
Actual: 404 is served on https://localhost
Notes: I tested this process on a clean Ubuntu 20.04 server at Digital Ocean. So you can see the expected output of a successful install, here are my logs:


root@ubuntu-cht-test-forum-docker:~# docker-compose --version
docker-compose version 1.28.5, build c4eb3a1f
root@ubuntu-cht-test-forum-docker:~# docker-compose -f test-docs-medic-os-compose.yml up
Creating network "medic-net" with the default driver            
Creating volume "medic-data" with default driver
Pulling haproxy (medicmobile/haproxy:rc-1.17)...                   
rc-1.17: Pulling from medicmobile/haproxy
a5a6f2f73cd8: Pull complete                                                                 
7746471d9b75: Pull complete                                                                            
3149ba82c5fb: Pull complete                                                                          
1aa0160b5764: Pull complete                                                                                               
03f3e41e886e: Pull complete                                                                                           
b07f863becc6: Pull complete                                                                                                           
cecc0c19b01e: Pull complete     
c265e38a0775: Pull complete                  
dc4ff7b6af19: Pull complete                                            
Digest: sha256:e7386241b9baec7b373b2981867b0b1e1c0ed15c39d88a4d88ddaae2095ea923
Status: Downloaded newer image for medicmobile/haproxy:rc-1.17
Pulling medic-os (medicmobile/medic-os:cht-3.9.0-rc.2)...
cht-3.9.0-rc.2: Pulling from medicmobile/medic-os            
b5e173e44934: Pull complete          
29047100b040: Pull complete                           
15743a713c2a: Pull complete                                                       
4b6bc9e29877: Pull complete                             
70e8b9c5fc55: Pull complete                       
b5c1c4676ef2: Pull complete                                                   
2e876803ec7b: Pull complete                         
475010e18dc4: Pull complete                                       
37dffbcce53e: Pull complete                                                                   
f34e044170aa: Pull complete                                         
08f2e43b306f: Pull complete                         
Digest: sha256:6c678b09476f133cab80b149747f772be797581f703c808ce475bf224d2b4d85
Status: Downloaded newer image for medicmobile/medic-os:cht-3.9.0-rc.2
Creating haproxy ... done           
Creating medic-os ... done            
Attaching to haproxy, medic-os                                                                                          
medic-os    | mesg: ttyname failed: Inappropriate ioctl for device
medic-os    | [2021/03/01 23:43:49] Info: Setting up software...
haproxy     | Starting enhanced syslogd: rsyslogd.   
haproxy     | # Setting `log` here with the address of 127.0.0.1 will have the effect                           
haproxy     | # of haproxy sending the udp log messages to its own rsyslog instance                        
haproxy     | # (which sits at `127.0.0.1`) at the `local0` facility including all
haproxy     | # logs that have a priority greater or equal to the specified log level
haproxy     | # log 127.0.0.1 local0 warning          
haproxy     | global                                            
haproxy     |   maxconn 4096                     
haproxy     |   lua-load /usr/local/etc/haproxy/parse_basic.lua
haproxy     |   lua-load /usr/local/etc/haproxy/parse_cookie.lua
haproxy     |   lua-load /usr/local/etc/haproxy/replace_password.lua           
haproxy     |   log /dev/log len 65535 local2 info   
haproxy     |
haproxy     | defaults
haproxy     |   mode http                                                 
haproxy     |   log global
haproxy     |   option dontlognull                        
haproxy     |   option http-ignore-probes         
haproxy     |   timeout client 150000                                    
haproxy     |   timeout server 3600000
haproxy     |   timeout connect 15000                                       
haproxy     |   stats enable                                      
haproxy     |   stats refresh 30s                                             
haproxy     |   stats auth admin:password
haproxy     |   stats uri /haproxy?stats
haproxy     |                                                         
haproxy     | frontend http-in
haproxy     |   bind  *:5984
haproxy     |   acl has_user req.hdr(x-medic-user) -m found      
haproxy     |   acl has_cookie req.hdr(cookie) -m found
haproxy     |   acl has_basic_auth req.hdr(authorization) -m found
haproxy     |   declare capture request len 400000
haproxy     |   http-request set-header x-medic-user %[lua.parseBasic] if has_basic_auth
haproxy     |   http-request set-header x-medic-user %[lua.parseCookie] if !has_basic_auth !has_user has_cookie
haproxy     |   http-request capture req.body id 0 # capture.req.hdr(0)
haproxy     |   http-request capture req.hdr(x-medic-service) len 200 # capture.req.hdr(1)        
haproxy     |   http-request capture req.hdr(x-medic-user) len 200 # capture.req.hdr(2)                     
haproxy     |   http-request capture req.hdr(user-agent) len 600 # capture.req.hdr(3)                                                                 
haproxy     |   capture response header Content-Length len 10 # capture.res.hdr(0)
haproxy     |   log-format "%ci,%ST,%[capture.req.method],%[capture.req.uri],%[capture.req.hdr(1)],%[capture.req.hdr(2)],'%[capture.req.hdr(0),lua.replacePassword]',%B,%Tr,%[capture.res.hdr(0)],'%[capture.req.hdr(3)]'"
haproxy     |   default_backend couch-backend                                 
haproxy     |                                                                        
haproxy     | frontend http-in2                                                                                               
haproxy     |   bind  *:5986                                         
haproxy     |   default_backend couch-backend2                                          
haproxy     |
haproxy     | backend couch-backend                           
haproxy     |   balance roundrobin
haproxy     |   server couchdb1 medic-os:5985
haproxy     |
haproxy     | backend couch-backend2                                                                                                                                                  
haproxy     |   balance roundrobin                                             
haproxy     |   server couchdb1 medic-os:5987                                
haproxy     | [alert] 059/234349 (1) : parseBasic loaded                      
haproxy     | [alert] 059/234349 (1) : parseCookie loaded                     
haproxy     | [alert] 059/234349 (1) : replacePassword loaded                   
haproxy     | Mar  1 23:43:49 a73c374e0d85 haproxy[1]: Proxy http-in started.
haproxy     | Mar  1 23:43:49 a73c374e0d85 haproxy[1]: Proxy http-in2 started.
haproxy     | Mar  1 23:43:49 a73c374e0d85 haproxy[1]: Proxy couch-backend started.     
haproxy     | Mar  1 23:43:49 a73c374e0d85 haproxy[1]: Proxy couch-backend2 started.
medic-os    | [2021/03/01 23:43:49] Info: Running setup task 'horticulturalist/sudoers'
medic-os    | [2021/03/01 23:43:49] Info: Running setup task 'horticulturalist/users'
medic-os    | [2021/03/01 23:43:50] Info: Service 'horticulturalist/horticulturalist' started successfully
medic-os    | [2021/03/01 23:43:50] Info: Setting up software (14% complete)...
medic-os    | [2021/03/01 23:43:50] Info: Running setup task 'medic-api/link-logs'
medic-os    | [2021/03/01 23:43:50] Info: Running setup task 'medic-api/logrotate'
medic-os    | [2021/03/01 23:43:50] Info: Running setup task 'medic-api/users'
medic-os    | [2021/03/01 23:43:50] Info: Service 'medic-api/medic-api' started successfully
medic-os    | [2021/03/01 23:43:50] Info: Setting up software (28% complete)...
medic-os    | [2021/03/01 23:43:54] Info: Running setup task 'medic-core/ldconfig'
medic-os    | [2021/03/01 23:43:54] Info: Running setup task 'medic-core/link-logs'
medic-os    | [2021/03/01 23:43:54] Info: Running setup task 'medic-core/logrotate'
medic-os    | [2021/03/01 23:43:54] Info: Running setup task 'medic-core/nginx'
medic-os    | [2021/03/01 23:43:54] Info: Running setup task 'medic-core/nginx-ssl'
medic-os    | [2021/03/01 23:43:59] Info: Running setup task 'medic-core/profile'
medic-os    | [2021/03/01 23:43:59] Info: Running setup task 'medic-core/ssh-authorized-keys'
medic-os    | [2021/03/01 23:43:59] Info: Running setup task 'medic-core/ssh-keygen'
medic-os    | [2021/03/01 23:44:00] Info: Running setup task 'medic-core/usb-modeswitch'
medic-os    | [2021/03/01 23:44:00] Info: Service 'medic-core/couchdb' started successfully
medic-os    | [2021/03/01 23:44:00] Info: Setting up CouchDB for the first time
medic-os    | [2021/03/01 23:44:00] Info: Service 'medic-core/nginx' started successfully
medic-os    | [2021/03/01 23:44:00] Info: Service 'medic-core/openssh' started successfully
medic-os    | [2021/03/01 23:44:00] Info: Setting up software (42% complete)...
medic-os    | [2021/03/01 23:44:01] Info: Running setup task 'medic-couch2pg/link-logs'
medic-os    | [2021/03/01 23:44:01] Info: Running setup task 'medic-couch2pg/logrotate'
medic-os    | [2021/03/01 23:44:01] Info: Service 'medic-couch2pg/medic-couch2pg' started successfully
medic-os    | [2021/03/01 23:44:01] Info: Setting up software (57% complete)...
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,200,GET,/,-,-,'-',444,70,208,'curl/7.47.0'
medic-os    | [2021/03/01 23:44:01] Info: Creating system databases   
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users,-,-,'-',290,44,12,'curl/7.47.0'
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_replicator,-,-,'-',295,13,12,'curl/7.47.0'
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_global_changes,-,-,'-',299,16,12,'curl/7.47.0'
medic-os    | [2021/03/01 23:44:01] Info: Setting up CouchDB administrative account
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57376 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users/org.couchdb.user:admin,-,admin,'{#012    "id": "org.couchdb.user:admin", "roles": [],#012    "type": "user", "name": "admin"#012  }',430,25,85,'curl/7.47.0'
medic-os    | [2021/03/01 23:44:01] Info: Restricting CouchDB to authorized users only
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57384 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57388 to 172.18.0.2:5986 (http-in2/HTTP)
medic-os    | [2021/03/01 23:44:01] Info: Creating CouchDB service accounts          
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57392 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users/org.couchdb.user:medic-api,-,admin,'{#012    "id": "org.couchdb.user:medic-api", "roles": [],#012    "type": "user", "name": "medic-api"#012  }',438,5,89,'curl/7.47.0'
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57400 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users/org.couchdb.user:medic-couch2pg,-,admin,'{#012    "id": "org.couchdb.user:medic-couch2pg", "roles": [],#012    "type": "user", "name": "medic-couch2pg"#012  }',448,3,94,'curl/7.47.0'
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57408 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users/org.couchdb.user:medic-sentinel,-,admin,'{#012    "id": "org.couchdb.user:medic-sentinel", "roles": [],#012    "type": "user", "name": "medic-sentinel"#012  }',448,4,94,'curl/7.47.0'
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57416 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users/org.couchdb.user:horticulturalist,-,admin,'{#012    "id": "org.couchdb.user:horticulturalist", "roles": [],#012    "type": "user", "name": "horticulturalist"#012  }',452,4,96,'curl/7.47.0'
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: Connect from 172.18.0.3:57424 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy     | Mar  1 23:44:01 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/_users/org.couchdb.user:medic,-,admin,'{#012    "id": "org.couchdb.user:medic", "roles": [],#012    "type": "user", "name": "medic"#012  }',430,4,85,'curl/7.47.0'
medic-os    | [2021/03/01 23:44:01] Info: New CouchDB Administrative User: medic
medic-os    | [2021/03/01 23:44:01] Warning: A new administrative password was explicitly set via the CLI
medic-os    | [2021/03/01 23:44:01] Warning: The new administrative password will not be displayed
medic-os    | [2021/03/01 23:44:01] Info: CouchDB first-run setup successful
medic-os    | [2021/03/01 23:44:02] Info: Horticulturalist is now bootstrapping
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,404,GET,/medic/,-,horticulturalist,'-',326,5,58,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/medic/,-,horticulturalist,'-',308,12,12,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,200,GET,/medic/,-,horticulturalist,'-',841,9,586,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,200,GET,//_active_tasks,-,horticulturalist,'-',256,1,3,'-'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,404,GET,/medic/horti-upgrade?,-,horticulturalist,'-',309,1,41,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/medic/horti-upgrade,-,horticulturalist,'{"_id":"horti-upgrade","user":"horticulturalist cli","created":1614642242519,"action":"install","build_info":{"namespace":"medic","application":"medic","version":"3.9.0"}}',430,5,76,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/medic/horti-upgrade,-,horticulturalist,'{"_id":"horti-upgrade","user":"horticulturalist cli","created":1614642242519,"action":"install","build_info":{"namespace":"medic","application":"medic","version":"3.9.0"},"_rev":"1-9b770a5b481e8f1e6748c5cf5e5bc84f","log":[{"type":"stage","datetime":1614642242529,"key":"horti.stage.init","message":{"message":"Horticulturalist deployment of 'medic:medic:3.9.0' initialising","key":"horti.stage.init"}}]}',430,4,76,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/medic/horti-upgrade,-,horticulturalist,'{"_id":"horti-upgrade","user":"horticulturalist cli","created":1614642242519,"action":"install","build_info":{"namespace":"medic","application":"medic","version":"3.9.0"},"_rev":"2-0391b2247a4239a05b32bb97ffdc44d6","log":[{"type":"stage","datetime":1614642242529,"key":"horti.stage.init","message":{"message":"Horticulturalist deployment of 'medic:medic:3.9.0' initialising","key":"horti.stage.init"}},{"type":"stage","datetime":1614642242537,"key":"horti.stage.preCleanup","message":{"message":"Pre-deploy cleanup","key":"horti.stage.preCleanup"}}]}',430,4,76,'node-fetch/1.0 (+https://github.com/bi
tinn/node-fetch)'                                          
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,200,GET,/medic/_all_docs?startkey=%22_design%2F%3Astaged%3A%22&endkey=%22_design%2F%3Astaged%3A%EF%BF%B0%22,-,horticulturalist,'-',320,6,-,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,202,POST,/medic/_compact,-,horticulturalist,'-',272,1,12,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,202,POST,/medic/_view_cleanup,-,horticulturalist,'-',272,4,12,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,200,GET,/medic/,-,horticulturalist,'-',856,3,601,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,200,GET,/medic/,-,horticulturalist,'-',858,2,603,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
haproxy     | Mar  1 23:44:02 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/medic/horti-upgrade,-,horticulturalist,'{"_id":"horti-upgrade","user":"horticulturalist cli","created":1614642242519,"action":"install","build_info":{"namespace":"medic","application":"medic","version":"3.9.0"},"_rev":"3-a1c20802db3602db72a1aff9e824e512","log":[{"type":"stage","datetime":1614642242529,"key":"horti.stage.init","message":{"message":"Horticulturalist deployment of 'medic:medic:3.9.0' initialising","key":"horti.stage.init"}},{"type":"stage","datetime":1614642242537,"key":"horti.stage.preCleanup","message":{"message":"Pre-deploy cleanup","key":"horti.stage.preCleanup"}},{"type":"stage","datetime":1614642242775,"key":"hor
ti.stage.download","message":{"message":"Downloading and staging install","key":"horti.stage.download"}}]}',430,4,76,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
medic-os    | [2021/03/01 23:44:03] Info: Running setup task 'medic-rdbms/ldconfig'    
medic-os    | [2021/03/01 23:44:03] Info: Running setup task 'medic-rdbms/link-logs' 
medic-os    | [2021/03/01 23:44:03] Info: Running setup task 'medic-rdbms/reconfigure'
medic-os    | [2021/03/01 23:44:03] Info: Service 'medic-rdbms/postgresql' started successfully                                                                                                                           
medic-os    | [2021/03/01 23:44:03] Info: Setting up software (71% complete)...
medic-os    | [2021/03/01 23:44:03] Info: Running setup task 'medic-sentinel/link-logs'
medic-os    | [2021/03/01 23:44:03] Info: Running setup task 'medic-sentinel/logrotate'
medic-os    | [2021/03/01 23:44:03] Info: Running setup task 'medic-sentinel/users'
medic-os    | [2021/03/01 23:44:04] Info: Service 'medic-sentinel/medic-sentinel' started successfully
medic-os    | [2021/03/01 23:44:04] Info: Setting up software (85% complete)...
medic-os    | [2021/03/01 23:44:04] Info: Running setup task 'system-services/home-directories'
medic-os    | [2021/03/01 23:44:04] Info: Running setup task 'system-services/link-logs'
medic-os    | [2021/03/01 23:44:04] Info: Running setup task 'system-services/logrotate'
medic-os    | [2021/03/01 23:44:04] Info: Service 'system-services/cron' started successfully
medic-os    | [2021/03/01 23:44:04] Info: Service 'system-services/syslog' started successfully
medic-os    | [2021/03/01 23:44:04] Info: Setting up software (100% complete)...
medic-os    | [2021/03/01 23:44:04] Info: Starting services...
medic-os    | [2021/03/01 23:44:04] Info: Synchronizing disks...
haproxy     | Mar  1 23:44:11 a73c374e0d85 haproxy[25]: 172.18.0.3,201,PUT,/medic/_design/%3Astaged%3Amedic,-,horticulturalist,'{"_id":"_design/:staged:medic","build_info":{"application":"medic","author":"grunt on travis","base_version":"3.9.0","namespace":"medic","node_modules":["medic-api-0.1.0.tgz","medic-sentinel-0.1.0.tgz"],"schema_version":1,"time":"2020-06-19T03:08:55.156Z","version":"3.9.0"},"rewrites":[{"from":"/manifest.appcache","to":"manifest.appcache"}],"validate_doc_update":"/*\n  SERVER DOCUMENT VALIDATION\n\n  This is for validating authority. It is  ---- SNIP ---  actingDdocs"}}]}',430,11,76,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
medic-os    | [2021/03/01 23:44:14] Info: System started successfully          
medic-os    | [2021/03/01 23:44:14] Info: Starting log streaming                  

If the above steps to reproduce are all correct, then let’s get the status of CouchDB by running these two commands in another terminal:

  1. docker exec -it medic-os /boot/supervisor-inspect
  2. docker exec -it medic-os head -n70 /srv/storage/medic-api/logs/medic-api.log
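
One thing worth scanning for in the `supervisor-inspect` output is a service that is crash-looping, i.e. one with a non-zero restart count. A small sketch of how you might filter for that (the grep pattern is mine, not part of the CHT tooling):

```shell
# Flag any service whose restart counter is non-zero; a line like
# "Restarts: 120" usually means the service is crash-looping.
docker exec -it medic-os /boot/supervisor-inspect | grep -B 2 'Restarts: [1-9]'
```

In the output below, for example, `medic-couch2pg` shows `Restarts: 120`, which is a hint that it is failing and being restarted repeatedly.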

Here’s the output for me for supervisor-inspect:

docker exec -it medic-os /boot/supervisor-inspect

Package 'horticulturalist':
  Service 'horticulturalist':
    Status: Up
    Up: 306 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 98, Supervisor PID: 88

Package 'medic-api':
  Service 'medic-api':
    Status: Up
    Up: 306 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 206, Supervisor PID: 191

Package 'medic-core':
  Service 'couchdb':
    Status: Up
    Up: 297 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 396, Supervisor PID: 380
  Service 'nginx':
    Status: Up
    Up: 297 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 438, Supervisor PID: 416
  Service 'openssh':
    Status: Up
    Up: 297 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 497, Supervisor PID: 463

Package 'medic-couch2pg':
  Service 'medic-couch2pg':
    Status: Up
    Up: 0 seconds, Restarts: 120
    Attributes: watched, running, expected
    Service PID: 5480, Supervisor PID: 581

Package 'medic-rdbms':
  Service 'postgresql':
    Status: Up
    Up: 294 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 1054, Supervisor PID: 1044

Package 'medic-sentinel':
  Service 'medic-sentinel':
    Status: Up
    Up: 293 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 1166, Supervisor PID: 1153

Package 'system-services':
  Service 'cron':
    Status: Up
    Up: 293 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 1262, Supervisor PID: 1245
  Service 'syslog':
    Status: Up
    Up: 293 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 1285, Supervisor PID: 1273

And here’s the head call:

root@ubuntu-cht-test-forum-docker:~# docker exec -it medic-os head -n70 /srv/storage/medic-api/logs/medic-api.log
[2021-03-01 23:44:46] Node Environment Options: '--max_old_space_size=8192'
[2021-03-01 23:44:46] Node Version: 8.11.4 in production mode
[2021-03-01 23:44:46] COUCH_URL http://medic-api:37726630696c3137@haproxy:5984/medic
[2021-03-01 23:44:46] COUCH_NODE_NAME couchdb@127.0.0.1
[2021-03-01 23:44:46] CouchDB Version: 2.3.1
[2021-03-01 23:44:47] 2021-03-01 23:44:47 INFO: Extracting ddoc… 
[2021-03-01 23:44:47] 2021-03-01 23:44:47 INFO: Updating docs: _design/sentinel 
[2021-03-01 23:44:47] 2021-03-01 23:44:47 INFO: Updating docs: _design/users-meta 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: Updating docs: _design/medic-client 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: Updating service worker meta doc 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: DDoc extraction completed successfully 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: Extracting resources… 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: Extracting resources completed successfully 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: Extracting initial documents… 
[2021-03-01 23:44:48] 2021-03-01 23:44:48 INFO: Updating settings with new defaults 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Extracting initial documents completed successfully 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Loading configuration… 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Not updating settings - the existing settings are already up to date 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_clinics" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "self_report" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "registration" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "accept_patient_reports" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "accept_case_reports" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "generate_shortcode_on_contacts" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Loading transition "generate_patient_id_on_people" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Skipping asynchronous transition "generate_patient_id_on_people" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "default_responses" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_sent_by" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_sent_forms" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Loading transition "death_reporting" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "conditional_alerts" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "multi_report_alerts" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_notifications" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_scheduled_reports" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "resolve_pending" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "muting" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "mark_for_outbound" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Configuration loaded successfully 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Merging translations… 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Translations merged successfully 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Running db migrations… 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Detected translations change - reloading 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_clinics" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "self_report" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "registration" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "accept_patient_reports" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "accept_case_reports" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "generate_shortcode_on_contacts" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Loading transition "generate_patient_id_on_people" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Skipping asynchronous transition "generate_patient_id_on_people" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "default_responses" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_sent_by" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_sent_forms" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 INFO: Loading transition "death_reporting" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "conditional_alerts" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "multi_report_alerts" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_notifications" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_scheduled_reports" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "resolve_pending" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "muting" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "mark_for_outbound" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "update_clinics" 
[2021-03-01 23:44:49] 2021-03-01 23:44:49 WARN: Disabled transition "self_report"

Hopefully this will show us what’s preventing CouchDB from starting correctly.

Thanks very much for your patience!


Thanks for the quick reply. I can confirm that the steps you took to reproduce the error are correct. I will run the steps suggested and submit the results as requested.

@Kristopher_Kivutha - Great, thanks - I look forward to your next update!

@Kristopher_Kivutha - Good news! Well, good news in the “now I have the same problem you have” department :wink: I was working on another ticket for cht-core and was able to reproduce your issue! I’m happy about this because it means I can do a deep dive and hopefully come up with a solution.

My steps to reproduce followed our self-hosted steps:

  1. start with clean Ubuntu 20.04 workstation (note, not server!)
  2. install docker and docker-compose.
  3. grab the latest yml: curl -s -o cht-docker-compose.yml https://raw.githubusercontent.com/medic/cht-infrastructure/master/self-hosting/main/docker-compose.yml
  4. change the ports so they don’t conflict with a service already running on 80:
     ports:
       - 8080:80
       - 443:443

  5. export the DOCKER_COUCHDB_ADMIN_PASSWORD per the instructions
  6. docker-compose -f cht-docker-compose.yml up
  7. fix horti and couch2pg:
    sudo docker exec -it medic-os /bin/bash -c "sed -i 's/--install=3.9.0/--complete-install/g' /srv/scripts/horticulturalist/postrun/horticulturalist"
    sudo docker exec -it medic-os /bin/bash -c "/boot/svc-disable medic-core openssh && /boot/svc-disable medic-rdbms && /boot/svc-disable medic-couch2pg"
    
  8. connect to https://localhost
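
As an aside, the `sed` in step 7 just swaps horticulturalist’s pinned `--install=3.9.0` flag for `--complete-install`. Here’s a standalone sketch of that same substitution run against a scratch file (the file name and contents are illustrative only):

```shell
# Illustrative only: reproduce the step-7 substitution on a scratch file
printf 'horticulturalist --install=3.9.0\n' > /tmp/horti-demo
sed -i 's/--install=3.9.0/--complete-install/g' /tmp/horti-demo
cat /tmp/horti-demo   # → horticulturalist --complete-install
```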

Expected: See CHT
Actual: See 404 from nginx:

curl -k -I https://192.168.8.2
HTTP/1.1 404 Not Found
Server: nginx/1.13.6
Date: Tue, 16 Mar 2021 22:40:36 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Vary: Accept-Encoding

Following my own debug steps from above, it looks like the CHT’s node server.js API process may not be starting at all: there’s no log file at /srv/storage/medic-api/logs/medic-api.log, which I would expect to exist:

root@chtbuntu:~# docker-compose --version
docker-compose version 1.28.5, build c4eb3a1f

root@chtbuntu:~# docker exec -it medic-os /boot/supervisor-inspect

Package 'horticulturalist':
  Service 'horticulturalist':
    Status: Up
    Up: 2973 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 392, Supervisor PID: 376

Package 'medic-api':
  Service 'medic-api':
    Status: Up
    Up: 2973 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 432, Supervisor PID: 415

Package 'medic-core':
  Service 'couchdb':
    Status: Up
    Up: 2973 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 472, Supervisor PID: 456
  Service 'nginx':
    Status: Up
    Up: 2973 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 511, Supervisor PID: 488
  Service 'openssh':
    Status: Disabled
    Up: 0 seconds, Restarts: -1
    Attributes: disabled
    Service PID: None, Supervisor PID: None

Package 'medic-couch2pg':
  Service 'medic-couch2pg':
    Status: Disabled
    Up: 0 seconds, Restarts: -1
    Attributes: disabled
    Service PID: None, Supervisor PID: None

Package 'medic-rdbms':
  Service 'postgresql':
    Status: Disabled
    Up: 0 seconds, Restarts: -1
    Attributes: disabled
    Service PID: None, Supervisor PID: None

Package 'medic-sentinel':
  Service 'medic-sentinel':
    Status: Up
    Up: 2974 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 607, Supervisor PID: 592

Package 'system-services':
  Service 'cron':
    Status: Up
    Up: 2974 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 650, Supervisor PID: 635
  Service 'syslog':
    Status: Up
    Up: 2974 seconds, Restarts: 0
    Attributes: watched, running, expected
    Service PID: 675, Supervisor PID: 663

root@chtbuntu:~# docker exec -it medic-os head -n70 /srv/storage/medic-api/logs/medic-api.log
head: cannot open '/srv/storage/medic-api/logs/medic-api.log' for reading: No such file or directory
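
A defensive variant of that `head` call, which prints a clear message instead of erroring when the log file is missing (the shell conditional is mine, not part of the CHT tooling):

```shell
# Tail the API log if it exists; otherwise say so explicitly
docker exec medic-os sh -c \
  'f=/srv/storage/medic-api/logs/medic-api.log; \
   if [ -f "$f" ]; then head -n 70 "$f"; else echo "missing: $f"; fi'
```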

Here’s an output of ps aux on medic-os container:

sh-4.3# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  18144  2672 ?        Ss   21:42   0:00 /bin/sh /boot/log-streamer
root         376  0.0  0.0  19436  3732 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         392  0.0  0.0  18248  3252 ?        S    21:42   0:02 /bin/sh /srv/scripts/horticulturalist/run/horticulturalist
root         415  0.0  0.0  19436  3404 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         432  0.0  0.0  18244  3156 ?        S    21:42   0:02 /bin/sh -e /srv/scripts/medic-api/run/medic-api
root         456  0.0  0.0  19436  3404 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         472  0.0  0.0  18052  2952 ?        S    21:42   0:00 /bin/bash /boot/timestamp su -c ulimit -n 100000 && exec /srv/software/medic-core/v2.1.1/x64/couchdb/bin/couchdb couchdb
root         488  0.0  0.0  19436  3404 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         497  0.0  0.0  46996  2876 ?        S    21:42   0:00 su -c ulimit -n 100000 && exec /srv/software/medic-core/v2.1.1/x64/couchdb/bin/couchdb couchdb
root         499  0.0  0.0  21816  1396 ?        S    21:42   0:00 awk {   print "["strftime("%Y-%m-%d %H:%M:%S")"] "$0;   fflush(); }
couchdb      509  1.4  0.3 3463876 49744 ?       Ssl  21:42   0:46 /srv/software/medic-core/v2.1.1/x64/couchdb/bin/../erts-8.3/bin/beam.smp -K true -A 16 -Bd -- -root /srv/software/medic-core/v2.
root         511  0.0  0.0  18052  3048 ?        S    21:42   0:00 /bin/bash /boot/timestamp /srv/software/medic-core/v2.1.1/x64/bin/nginx -p /srv/storage/medic-core/nginx/data
root         530  0.0  0.0  40992  5608 ?        S    21:42   0:00 nginx: master process /srv/software/medic-core/v2.1.1/x64/bin/nginx -p /srv/storage/medic-core/nginx/data
root         532  0.0  0.0  21816  1316 ?        S    21:42   0:00 awk {   print "["strftime("%Y-%m-%d %H:%M:%S")"] "$0;   fflush(); }
couchdb      533  0.0  0.0   7536    92 ?        S    21:42   0:00 /srv/software/medic-core/v2.1.1/x64/couchdb/bin/../erts-8.3/bin/epmd -daemon
nobody       541  0.0  0.0  45244  6936 ?        S    21:42   0:00 nginx: worker process
nobody       543  0.0  0.0  45244  6936 ?        S    21:42   0:00 nginx: worker process
nobody       544  0.0  0.0  45376  8928 ?        S    21:42   0:00 nginx: worker process
nobody       545  0.0  0.0  45376  8928 ?        S    21:42   0:00 nginx: worker process
nobody       547  0.0  0.0  45376  8992 ?        S    21:42   0:00 nginx: worker process
nobody       548  0.0  0.0  45244  6936 ?        S    21:42   0:00 nginx: worker process
nobody       549  0.0  0.0  45376  8928 ?        S    21:42   0:00 nginx: worker process
nobody       550  0.0  0.0  45376  8992 ?        S    21:42   0:00 nginx: worker process
couchdb      569  0.0  0.0   4364   632 ?        Ss   21:42   0:00 erl_child_setup 100000
root         592  0.0  0.0  19436  3404 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         607  0.0  0.0  18248  3248 ?        S    21:42   0:02 /bin/sh -e /srv/scripts/medic-sentinel/run/medic-sentinel
root         635  0.0  0.0  19436  3404 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         650  0.0  0.0  18048  3036 ?        S    21:42   0:00 /bin/bash /boot/timestamp cron -f
root         656  0.0  0.0  26072  2484 ?        S    21:42   0:00 cron -f
root         657  0.0  0.0  21816  1444 ?        S    21:42   0:00 awk {   print "["strftime("%Y-%m-%d %H:%M:%S")"] "$0;   fflush(); }
root         663  0.0  0.0  19436  3404 ?        S    21:42   0:00 /bin/sh /boot/system-container-start
root         675  0.0  0.0  18048  2844 ?        S    21:42   0:00 /bin/bash /boot/timestamp busybox syslogd -n -O /srv/storage/system-services/logs/system.log
root         678  0.0  0.0   4996   700 ?        S    21:42   0:00 busybox syslogd -n -O /srv/storage/system-services/logs/system.log
root         679  0.0  0.0  21816  1420 ?        S    21:42   0:00 awk {   print "["strftime("%Y-%m-%d %H:%M:%S")"] "$0;   fflush(); }
couchdb      683  0.0  0.0   9576  2592 ?        Ss   21:42   0:00 sh -s disksup
couchdb      685  0.0  0.0   4228   684 ?        Ss   21:42   0:00 /srv/software/medic-core/v2.1.1/x64/couchdb/bin/../lib/os_mon-2.4.2/priv/bin/memsup
couchdb      686  0.0  0.0   4360   640 ?        Ss   21:42   0:00 /srv/software/medic-core/v2.1.1/x64/couchdb/bin/../lib/os_mon-2.4.2/priv/bin/cpu_sup
root         735  0.0  0.0   4380   668 ?        S    21:42   0:00 sleep 3600
root       10752  0.0  0.0  18228  3384 pts/0    Ss   22:34   0:00 sh
root       10850  0.0  0.0   4380   656 ?        S    22:34   0:00 sleep 1
root       10851  0.0  0.0   4380   672 ?        S    22:34   0:00 sleep 1
root       10852  0.0  0.0   4380   704 ?        S    22:34   0:00 sleep 1
root       10853  0.0  0.0  34428  2796 pts/0    R+   22:34   0:00 ps aux

@hareet or @Nick - maybe you have thoughts on next steps? I’ll ping you on Slack - maybe one of you could pair w/ me tomorrow (Wed) AM-ish?

@Kristopher_Kivutha - while we work on improving our docker-compose offering, feel free to use our Developer set up to check out the CHT. I’d be happy to help if you hit any issues with this!

A post was split to a new topic: Errors setting up CHT development instance