Read-only replica of CouchDB docs into Postgres for data visualization

Hello @diana

I am replicating docs from my remote CouchDB (an HTTPS Docker CHT instance) to a Postgres Docker container using:

version: '3.7'
services:
  couch:
    image: couchdb:2.3.1
    environment:
      COUCHDB_USER: cht
      COUCHDB_PASSWORD: cht-password
    networks:
      - cht-net

  cht-couch2pg:
    container_name: cht-couch2pg
    image: medicmobile/cht-couch2pg:v3.2.0-node-10
    environment:
      COUCHDB_URL: "http://cht:cht_password@couch:5984"
      COUCH2PG_SLEEP_MINS: '720'
      COUCH2PG_DOC_LIMIT: '1000'
      COUCH2PG_RETRY_COUNT: '5'
      COUCH2PG_CHANGES_LIMIT: '100'
      POSTGRES_DB: cht-postgres
      POSTGRES_USER_NAME: cht_couch2pg
      POSTGRES_DB_NAME: cht
      POSTGRES_PASSWORD: couch2pg_password
    depends_on:
      - cht-postgres

  cht-postgres:
    container_name: cht-postgres
    image: medicmobile/cht-postgres:release-postgres13-rc.1
    environment:
      POSTGRES_DB: cht
      POSTGRES_USER: cht
      POSTGRES_PASSWORD: cht_password
      COUCH2PG_USER: cht_couch2pg
      COUCH2PG_USER_PASSWORD: couch2pg_password
      DB_OWNER_GROUP: cht_analytics
    volumes:
      - cht-postgres-data:/var/lib/postgresql/data
    networks:
      - cht-net

volumes:
  cht-postgres-data:
    name: cht-postgres-data

networks:
  cht-net:
    name: cht-net

How can I access the COUCHDB_URL for the Docker CHT instance running remotely, for use as an environment variable?

Hi @cliff

Can you please just try passing the CHT url directly? Just as if you’d be using it yourself in the browser.
When running in Docker, I don’t think we expose the CouchDb port directly, but all requests would reach CouchDb.


Hello @diana

I set COUCHDB_URL: "https://192-168-146-70.my.local-ip.co/" in the docker-compose file, and when I run the container I get repeated log entries like:

_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
couch_1         | [error] 2022-02-09T13:30:34.294628Z nonode@nohost emulator -------- Error in process <0.292.1> with exit value:
couch_1         | {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}

The logs claim the db doesn't exist. I checked the cht DB in Postgres, and no docs have been replicated from CouchDB.

Hi @cliff

Could you please include authentication data in the url, like https://username:password@192-168-146-70.my.local-ip.co/, and give it another try?


Thanks @diana

I have added authentication and refreshed the DB, but still no replication yet.

Hello @diana

The strange thing is that when I query for docs using curl https://medic:password@192-168-192-1.my.local-ip.co/medic/_design/medic-client/_view/contacts_by_parent?include_docs=true

"total_rows":20,"offset":0,"rows":[
{"id":"df72d275-65c2-423e-8513-84047f4a0fa5","key":["cffe087f-3828-41b1-8be6-d8f4f8c6ade1","health_center"],"value":null,"doc":{"_id":"df72d275-65c2-423e-8513-84047f4a0fa5","_rev":"4-d3267fedcb5e1d9ec007e8ee6b414f09","parent":{"_id":"cffe087f-3828-41b1-8be6-d8f4f8c6ade1"},"type":"health_center","is_name_generated":"true","name":"gita's Area","external_id":"","notes":"","contact":{"_id":"89a7aa7c-4240-4016-8982-36e23e26fd1c","parent":{"_id":"df72d275-65c2-423e-8513-84047f4a0fa5","parent":{"_id":"cffe087f-3828-41b1-8be6-d8f4f8c6ade1"}}},"geolocation":"","meta":{"created_by":"medic","created_by_person_uuid":"","created_by_place_uuid":""},"reported_date":1643026878060,"place_id":"27820"}},
{"id":"42802d47-ef38-4943-b9ae-f6c02b5dc0c5","key":["cffe087f-3828-41b1-8be6-d8f4f8c6ade1","person"],"value":null,"doc":{"_id":"42802d47-ef38-4943-b9ae-f6c02b5dc0c5","_rev":"2-6e0f5238cf106b906c09d3613cb292eb","type":"person","name":"tom","short_name":"tom","date_of_birth":"1975-07-24","date_of_birth_method":"approx","ephemeral_dob":{"age_label":"","age_years":"46","age_months":"6","dob_method":"approx","ephemeral_months":"7","ephemeral_years":"1975","dob_approx":"1975-07-24","dob_raw":"1975-07-24","dob_iso":"1975-07-24"},"phone":"","phone_alternate":"","sex":"male","role":"chw_supervisor","external_id":"","notes":"","meta":{"created_by":"medic","created_by_person_uuid":"","created_by_place_uuid":""},"reported_date":1643026631633,"parent":{"_id":"cffe087f-3828-41b1-8be6-d8f4f8c6ade1"},"patient_id":"58695"}},
{"id":"243e26dd-f2e2-4371-b430-d2915aaf2e58","key":["df72d275-65c2-423e-8513-84047f4a0fa5","person"],"value":null,"doc":{"_id":"243e26dd-f2e2-4371-b430-d2915aaf2e58","_rev":"2-34b2347f3e93339478c8e39cc65f2e99","parent":{"_id":"df72d275-65c2-423e-8513-84047f4a0fa5","parent":{"_id":"cffe087f-3828-41b1-8be6-d8f4f8c6ade1"}},"type":"person","name":"test","short_name":"","date_of_birth":"2016-07-06","date_of_birth_method":"approx","ephemeral_dob":{"age_label":"","age_years":"5","age_months":"7","dob_method":"approx","ephemeral_months":"7","ephemeral_years":"2016","dob_approx":"2016-07-06","dob_raw":"2016-07-06","dob_iso":"2016-07-06"},"sex":"female","phone":"+256772095581","phone_alternate":"","role":"patient","external_id":"","notes":"","meta":{"created_by":"medic","created_by_person_uuid":"","created_by_place_uuid":""},"reported_date":1644155610022,"patient_id":"36791"}},
{"id":"25b71981-4588-4919-9bec-0417436bd007","key":["df72d275-65c2-423e-8513-84047f4a0fa5","person"],"value":null,"doc":{"_id":"25b71981-4588-4919-9bec-0417436bd007","_rev":"2-cceceef6dc08bf0bfd127f1917e69f4e","parent":{"_id":"df72d275-65c2-423e-8513-84047f4a0fa5","parent":{"_id":"cffe087f-3828-41b1-8be6-d8f4f8c6ade1"}},"type":"person","name":"carol","short_name":"carol","date_of_birth":"1983-07-24","date_of_birth_method":"approx","ephemeral_dob":{"age_label":"","age_years":"38","age_months":"6","dob_method":"approx","ephemeral_months":"7","ephemeral_years":"1983","dob_approx":"1983-07-24","dob_raw":"1983-07-24","dob_iso":"1983-07-24"},"sex":"female","phone":"","phone_alternate":"","role":"patient","external_id":"","notes":"","meta":{"created_by":"git","created_by_person_uuid":"89a7aa7c-4240-4016-8982-36e23e26fd1c","created_by_place_uuid":"df72d275-65c2-423e-8513-84047f4a0fa5"},"reported_date":1643045171442,"patient_id":"36043"}},
{"id":"2692639df431031ea8d765eb02000cee","key":["df72d275-65c2-423e-8513-84047f4a0fa5","person"],"value":null,"doc":{"_id":"2692639df431031ea8d765eb02000cee","_rev":"1-76952bd49fab88b1692d58b163d9e0b0","reported_date":1644149128728,"patient_id":"32220","source_id":"8c8260a5-0d7f-4113-b97a-4c3375cc15dd","type":"person","parent":{"_id":"df72d275-65c2-423e-8513-84047f4a0fa5","parent":{"_id":"cffe087f-3828-41b1-8be6-d8f4f8c6ade1"}},"created_by":"89a7aa7c-4240-4016-8982-36e23e26fd1c"}},

I get the feedback above.
I'm just wondering why, when I use that same endpoint as the COUCHDB_URL in my docker-compose file below, it's not replicating into the postgres docker container:

version: '3.7'
services:
  couch:
    image: couchdb:2.3.1
    environment:
      COUCHDB_USER: cht
      COUCHDB_PASSWORD: cht-password
    networks:
      - cht-net

  cht-couch2pg:
    container_name: cht-couch2pg
    image: medicmobile/cht-couch2pg:master-node-10
    environment:
      COUCHDB_URL: "https://medic:password@192-168-192-1.my.local-ip.co/medic"
      COUCH2PG_SLEEP_MINS: '20'
      COUCH2PG_DOC_LIMIT: '50'
      COUCH2PG_RETRY_COUNT: '5'
      COUCH2PG_CHANGES_LIMIT: '100'
      POSTGRES_DB: cht-postgres
      POSTGRES_USER_NAME: cht_couch2pg
      POSTGRES_DB_NAME: cht
      POSTGRES_PASSWORD: couch2pg_password
    depends_on:
      - cht-postgres

  cht-postgres:
    container_name: cht-postgres
    image: medicmobile/cht-postgres:release-postgres13-rc.1
    environment:
      POSTGRES_DB: cht
      POSTGRES_USER: cht
      POSTGRES_PASSWORD: cht_password
      COUCH2PG_USER: cht_couch2pg
      COUCH2PG_USER_PASSWORD: couch2pg_password
      DB_OWNER_GROUP: cht_analytics
    volumes:
      - cht-postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - cht-net

volumes:
  cht-postgres-data:
    name: cht-postgres-data

networks:
  cht-net:
    name: cht-net

cc @mrjones @samuel

Hi @cliff

Sorry for the long delay.
The environment variables passed to cht-couch2pg are used like this:

 export POSTGRESQL_URL=postgres://$POSTGRES_USER_NAME:$POSTGRES_PASSWORD@$POSTGRES_DB:5432/$POSTGRES_DB_NAME

According to your docker-compose file, this ends up being:

postgres://cht_couch2pg:couch2pg_password@cht-postgres:5432/cht

Is this correct? Is cht-postgres indeed your postgres host?

(link to code: https://github.com/medic/cht-couch2pg/blob/6264ffc59cc5fe0c00a6bab1aed0ad348676da43/couch2pg-entrypoint.sh#L12)
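To see how those variables combine, here is a small shell sketch that reproduces the entrypoint's expansion, using the values from the compose file in this thread:

```shell
# Reproduce the entrypoint's POSTGRESQL_URL expansion with the values
# from the docker-compose file above.
POSTGRES_USER_NAME=cht_couch2pg
POSTGRES_PASSWORD=couch2pg_password
POSTGRES_DB=cht-postgres
POSTGRES_DB_NAME=cht

export POSTGRESQL_URL=postgres://$POSTGRES_USER_NAME:$POSTGRES_PASSWORD@$POSTGRES_DB:5432/$POSTGRES_DB_NAME
echo "$POSTGRESQL_URL"
# → postgres://cht_couch2pg:couch2pg_password@cht-postgres:5432/cht
```

Note that POSTGRES_DB is used as the host name here, which is why it has to match the name of the postgres container.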

Ah, yes it is, because it’s the name of the container.

@cliff
If you’re still having issues with this, could you please check the logs in the cht-couch2pg container and share them?

hello @diana

Yes, the URL resolves to postgres://cht_couch2pg:couch2pg_password@cht-postgres:5432/cht

Thanks! Here they are: the docker-compose up logs (shared on Pastebin.com).

Thanks a lot for sharing the logs.
I ended up setting this up for myself locally, and replicated your experience exactly.

I found a couple of problems:

  1. Most of the errors you were seeing were from the CouchDb container, which was not needed in your case. This can be worked around by removing it from docker-compose.

  2. https://medic:password@192-168-192-1.my.local-ip.co/medic responds with a 302, not the 200 that the entry script expects.
    This can be worked around by adding an extra / at the end of the URL.

  3. I had an SSL error with my.local-ip, so I ended up changing the url to my actual local IP.
    In your case: https://medic:password@192.168.192.1:<port??>/medic/

  4. Hit another error with the Postgres connection. This was happening because the cht-couch2pg was not part of the cht-net network.
    To fix this, add the network to the container definition:

    depends_on:
      - cht-postgres
    networks:
      - cht-net
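To check point 2 yourself, curl can print just the HTTP status code a URL answers with. This is only a sketch using the hypothetical URL from this thread; the actual calls are left commented out since they need network access to your instance:

```shell
# Print only the final HTTP status code for a URL (no body, no progress bar).
http_status() {
  curl -k -s -o /dev/null -w '%{http_code}' "$1"
}

# Hypothetical URLs from the thread -- substitute your own host/credentials.
url_without_slash="https://medic:password@192-168-192-1.my.local-ip.co/medic"
url_with_slash="${url_without_slash}/"

# http_status "$url_without_slash"   # may answer 302
# http_status "$url_with_slash"      # should answer 200
```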

I added an issue to fix all these in the readme and the codebase: https://github.com/medic/cht-couch2pg/issues/112

Thanks a lot for your patience!

Hello @diana

Thanks for the workaround.

I have followed the fixes you recommended above and came up with:

version: '3.7'
services:

  cht-couch2pg:
    container_name: cht-couch2pg
    image: medicmobile/cht-couch2pg:master-node-10
    environment:
      COUCHDB_URL: "https://medic:password@192-168-29-70:443/medic/"
      COUCH2PG_SLEEP_MINS: '20'
      COUCH2PG_DOC_LIMIT: '90'
      COUCH2PG_RETRY_COUNT: '5'
      COUCH2PG_CHANGES_LIMIT: '100'
      POSTGRES_DB: cht-postgres
      POSTGRES_USER_NAME: cht_couch2pg
      POSTGRES_DB_NAME: cht
      POSTGRES_PASSWORD: couch2pg_password
    depends_on:
      - cht-postgres
    networks:
      - cht-net

  cht-postgres:
    container_name: cht-postgres
    image: medicmobile/cht-postgres:release-postgres13-rc.1
    environment:
      POSTGRES_DB: cht
      POSTGRES_USER: cht
      POSTGRES_PASSWORD: cht_password
      COUCH2PG_USER: cht_couch2pg
      COUCH2PG_USER_PASSWORD: couch2pg_password
      DB_OWNER_GROUP: cht_analytics
    volumes:
      - cht-postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - cht-net

volumes:
  cht-postgres-data:
    name: cht-postgres-data

networks:
  cht-net:
    name: cht-net

I used port 443 in the docker-compose file:

cliff@cliff-Latitude-5580:~$ sudo netstat -ltnp | grep -w ':443'
[sudo] password for cliff: 
tcp6       0      0 :::443                  :::*                    LISTEN      207056/docker-proxy

However, I'm still running into:

cliff@cliff-Latitude-5580:~/Desktop/couch2pg$ docker-compose up
Creating network "cht-net" with the default driver
WARNING: Found orphan containers (couch2pg_couch_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating cht-postgres ... done
Creating cht-couch2pg ... done
Attaching to cht-postgres, cht-couch2pg
cht-couch2pg    | Starting couch2pg process
cht-postgres    | 
cht-postgres    | PostgreSQL Database directory appears to contain a database; Skipping initialization
cht-postgres    | 
cht-postgres    | 2022-02-15 14:27:56.311 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
cht-postgres    | 2022-02-15 14:27:56.311 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
cht-couch2pg    | Set postgres URL to postgres://cht_couch2pg:couch2pg_password@cht-postgres:5432/cht
cht-postgres    | 2022-02-15 14:27:56.311 UTC [1] LOG:  listening on IPv6 address "::", port 5432
cht-postgres    | 2022-02-15 14:27:56.319 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
cht-postgres    | 2022-02-15 14:27:56.333 UTC [25] LOG:  database system was shut down at 2022-02-15 14:23:52 UTC
cht-postgres    | 2022-02-15 14:27:56.342 UTC [1] LOG:  database system is ready to accept connections
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | Waiting for cht couchdb
cht-couch2pg    | No couchdb end point Found
cht-couch2pg exited with code 1
^CGracefully stopping... (press Ctrl+C again to force)
Stopping cht-postgres ... done

192-168-29-70 is not a correct IP. Can you try with port 80 and the correct IP? One of the issues was SSL itself.

Thanks @diana

The IP looks like it's the right one:

cliff@cliff-Latitude-5580:~$ ip r
default via 192.168.29.104 dev wlp1s0 proto dhcp metric 600 
169.254.0.0/16 dev wlp1s0 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-5830065f94eb proto kernel scope link src 172.18.0.1 linkdown 
172.21.0.0/16 dev br-b24241d420f6 proto kernel scope link src 172.21.0.1 linkdown 
172.23.0.0/16 dev br-43861dab8538 proto kernel scope link src 172.23.0.1 
172.24.0.0/16 dev br-bfea5c904011 proto kernel scope link src 172.24.0.1 linkdown 
172.25.0.0/16 dev br-4b9d07301419 proto kernel scope link src 172.25.0.1 linkdown 
192.168.29.0/24 dev wlp1s0 proto kernel scope link src 192.168.29.70 metric 600 
cliff@cliff-Latitude-5580:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 10:65:30:41:c2:72 brd ff:ff:ff:ff:ff:ff
3: wlp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 14:4f:8a:29:bb:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.29.70/24 brd 192.168.29.255 scope global dynamic noprefixroute wlp1s0
       valid_lft 3049sec preferred_lft 3049sec
    inet6 fe80::1d71:4cd4:9fd0:9a8e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: br-4b9d07301419: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:bd:0f:d3:ac brd ff:ff:ff:ff:ff:ff
    inet 172.25.0.1/16 brd 172.25.255.255 scope global br-4b9d07301419
       valid_lft forever preferred_lft forever
5: br-5830065f94eb: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:8a:8b:03:1a brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-5830065f94eb
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8aff:fe8b:31a/64 scope link 
       valid_lft forever preferred_lft forever
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:12:f8:21:a8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
25: br-b24241d420f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:33:da:36:9a brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-b24241d420f6
       valid_lft forever preferred_lft forever
    inet6 fe80::42:33ff:feda:369a/64 scope link 
       valid_lft forever preferred_lft forever
41: br-43861dab8538: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:4f:07:70:93 brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.1/16 brd 172.23.255.255 scope global br-43861dab8538
       valid_lft forever preferred_lft forever
    inet6 fe80::42:4fff:fe07:7093/64 scope link 
       valid_lft forever preferred_lft forever
43: veth688f65c@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-43861dab8538 state UP group default 
    link/ether e2:ab:2a:90:3a:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::e0ab:2aff:fe90:3a5d/64 scope link 
       valid_lft forever preferred_lft forever
45: veth1d5f87c@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-43861dab8538 state UP group default 
    link/ether c6:9b:71:be:7b:3b brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::c49b:71ff:febe:7b3b/64 scope link 
       valid_lft forever preferred_lft forever
50: br-bfea5c904011: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ae:06:df:48 brd ff:ff:ff:ff:ff:ff
    inet 172.24.0.1/16 brd 172.24.255.255 scope global br-bfea5c904011
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aeff:fe06:df48/64 scope link 
       valid_lft forever preferred_lft forever
56: vethc4780cc@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-bfea5c904011 state UP group default 
    link/ether ae:7d:8e:51:21:72 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ac7d:8eff:fe51:2172/64 scope link 
       valid_lft forever preferred_lft forever
58: veth1d1e2c6@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-bfea5c904011 state UP group default 
    link/ether 72:0a:97:06:c5:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::700a:97ff:fe06:c559/64 scope link 
       valid_lft forever preferred_lft forever
cliff@cliff-Latitude-5580:~$

I have used port 80, but I still get the same result.

Which IP did you use?
I think you should use http://medic:password@192.168.29.70/medic/ or change the port if you expose a different one than 80.
For example, I used http://admin:pass@192.168.1.253:6500/medic, where 6500 is the port I run CHT on.
Naturally, this only works if you run couch2pg on the same local network that runs the CHT.
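Putting that together, the URL can be assembled from its pieces like this (all values are hypothetical placeholders matching the thread's example; substitute your own credentials, LAN IP, and published port):

```shell
# Build a COUCHDB_URL from its parts. These values are placeholders;
# replace them with your own before use.
user="medic"
pass="password"
host="192.168.29.70"   # the machine's LAN IP, not a my.local-ip.co name
port="80"              # or whichever port CHT is published on

COUCHDB_URL="http://${user}:${pass}@${host}:${port}/medic"
echo "$COUCHDB_URL"
# → http://medic:password@192.168.29.70:80/medic
```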

Hey @diana,
why would cht-couch2pg throw this error when trying to replicate data to the postgres db?

cht-couch2pg    | [2022-02-15T17:16:10.885Z INFO]:  There are 0 changes to process
cht-couch2pg    | [2022-02-15T17:16:10.891Z DEBUG]:  Import loop complete, 0 changes total
cht-couch2pg    | [2022-02-15T17:16:10.893Z DEBUG]:  Performing an import batch of up to 100 changes
cht-couch2pg    | [2022-02-15T17:16:10.895Z DEBUG]:  Downloading CouchDB changes feed from 0
cht-couch2pg    | [2022-02-15T17:16:13.112Z ERROR]:  Couch2PG import failed
cht-couch2pg    | [2022-02-15T17:16:13.112Z ERROR]:  { error: 'not_found',
cht-couch2pg    |   reason: 'Document is missing attachment',
cht-couch2pg    |   status: 404,
cht-couch2pg    |   name: 'not_found',
cht-couch2pg    |   message: 'Document is missing attachment' }

So sorry, @mozzy, for the delay!

One of the workarounds is causing this problem, specifically adding the extra / at the end of the COUCHDB_URL environment variable.
That environment variable is used here: cht-couch2pg/libs/medic/runner.js at main · medic/cht-couch2pg · GitHub

const sentinelUrl = `${couchUrl}-sentinel`;
const usersMetaUrl = `${couchUrl}-users-meta`;

As a result, sentinelUrl becomes: http://<host....>/medic/-sentinel instead of http://<host....>/medic-sentinel. The same applies to the users-meta url.
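The effect of the trailing slash is plain string concatenation; the sketch below (hypothetical host) also shows how stripping the slash restores the intended database names:

```shell
# A trailing slash on the couch URL breaks the "-sentinel" concatenation.
couch_url="http://host.example/medic/"   # hypothetical host, note the trailing /

sentinel_url="${couch_url}-sentinel"
echo "$sentinel_url"        # wrong: .../medic/-sentinel

# Stripping the trailing slash first yields the intended database name:
trimmed="${couch_url%/}"
echo "${trimmed}-sentinel"  # right: .../medic-sentinel
```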

Hello @mrjones @diana

I have been running the new, updated docker-compose file for cht-couch2pg, and on running it according to the instructions here I get:

cliff@cliff-Latitude-5580:~/Documents/project/medic/cht-couch2pg$ docker-compose up
Creating network "cht-couch2pg_default" with the default driver
WARNING: Found orphan containers (cht-couch2pg_couch_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating cht-couch2pg_postgres_1 ... done
Creating cht-couch2pg_cht-couch2pg_1 ... done
Attaching to cht-couch2pg_postgres_1, cht-couch2pg_cht-couch2pg_1
postgres_1      | 
postgres_1      | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1      | 
postgres_1      | 2022-03-14 11:34:22.878 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1      | 2022-03-14 11:34:22.878 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres_1      | 2022-03-14 11:34:22.878 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres_1      | 2022-03-14 11:34:22.888 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1      | 2022-03-14 11:34:22.906 UTC [26] LOG:  database system was shut down at 2022-03-14 11:33:41 UTC
postgres_1      | 2022-03-14 11:34:22.916 UTC [1] LOG:  database system is ready to accept connections
cht-couch2pg_1  | Starting couch2pg process
cht-couch2pg_1  | Set postgres URL to postgres://cht_couch2pg:cht_couch2pg_password@postgres:5432/cht
cht-couch2pg_1  | START Checking for cht couchdb at https://medic:password@cht.openelis-global.org/medic
cht-couch2pg_1  | couchdb  is ready
cht-couch2pg_1  | check_if_postgres_is_ready with "pg_isready -q  -h postgres -U cht_couch2pg --d cht"
cht-couch2pg_1  | Postgres is ready moving on ...
cht-couch2pg_1  | Launching couch2pg
cht-couch2pg_1  | [2022-03-14T11:34:25.311Z DEBUG]:  { debug: true,
cht-couch2pg_1  |   v4Mode: false,
cht-couch2pg_1  |   couchdbUrl: 'cht.openelis-global.org/medic',
cht-couch2pg_1  |   postgresqlUrl: 'postgres:5432/cht',
cht-couch2pg_1  |   docLimit: 1000,
cht-couch2pg_1  |   changesLimit: 100,
cht-couch2pg_1  |   retryCount: 5,
cht-couch2pg_1  |   sleepMins: 60,
cht-couch2pg_1  |   couchdbUsersMetaDocLimit: 50 }
postgres_1      | 2022-03-14 11:34:25.338 UTC [34] FATAL:  password authentication failed for user "cht_couch2pg"
postgres_1      | 2022-03-14 11:34:25.338 UTC [34] DETAIL:  Password does not match for user "cht_couch2pg".
postgres_1      |       Connection matched pg_hba.conf line 99: "host all all all md5"
cht-couch2pg_1  | [2022-03-14T11:34:25.341Z ERROR]:  An unrecoverable error occurred
cht-couch2pg_1  | [2022-03-14T11:34:25.341Z ERROR]:  { error: password authentication failed for user "cht_couch2pg"
cht-couch2pg_1  |     at Connection.parseE (/app/node_modules/couch2pg/node_modules/postgrator/node_modules/pg/lib/connection.js:539:11)
cht-couch2pg_1  |     at Connection.parseMessage (/app/node_modules/couch2pg/node_modules/postgrator/node_modules/pg/lib/connection.js:366:17)
cht-couch2pg_1  |     at Socket.<anonymous> (/app/node_modules/couch2pg/node_modules/postgrator/node_modules/pg/lib/connection.js:105:22)
cht-couch2pg_1  |     at Socket.emit (events.js:198:13)
cht-couch2pg_1  |     at addChunk (_stream_readable.js:288:12)
cht-couch2pg_1  |     at readableAddChunk (_stream_readable.js:269:11)
cht-couch2pg_1  |     at Socket.Readable.push (_stream_readable.js:224:10)
cht-couch2pg_1  |     at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
cht-couch2pg_1  |   name: 'error',
cht-couch2pg_1  |   length: 108,
cht-couch2pg_1  |   severity: 'FATAL',
cht-couch2pg_1  |   code: '28P01',
cht-couch2pg_1  |   detail: undefined,
cht-couch2pg_1  |   hint: undefined,
cht-couch2pg_1  |   position: undefined,
cht-couch2pg_1  |   internalPosition: undefined,
cht-couch2pg_1  |   internalQuery: undefined,
cht-couch2pg_1  |   where: undefined,
cht-couch2pg_1  |   schema: undefined,
cht-couch2pg_1  |   table: undefined,
cht-couch2pg_1  |   column: undefined,
cht-couch2pg_1  |   dataType: undefined,
cht-couch2pg_1  |   constraint: undefined,
cht-couch2pg_1  |   file: 'auth.c',
cht-couch2pg_1  |   line: '334',
cht-couch2pg_1  |   routine: 'auth_failed',
cht-couch2pg_1  |   helpfulDescription: 'Prep() table CHECK query Failed' }
cht-couch2pg_1  | [2022-03-14T11:34:25.342Z ERROR]:  exiting
cht-couch2pg_cht-couch2pg_1 exited with code 1

I am following the instructions in the read-me file

Thanks

@cliff - Thanks for your follow up on your progress!

As you may have noticed, just last week we finished updating the readme you linked to. The updates were needed because we improved a lot of the logic in the entry point of the docker image and renamed some of the environment variables to make their use clearer. These changes were completed on the 11th of March (see tickets 1, 2 and 3 and 1 PR).

Given these changes, I suspect you have both an out-of-date compose file and out-of-date containers. Can you please ensure you have this new compose file? That compose file references the new tag on the image (main-node-10, replacing master-node-10), so when you run docker-compose up, the command will force your docker to download the new image. Finally, if you don’t have any important data in postgres, can you delete the postgres container and data volume? If you want to be thorough, you can delete all the couch2pg containers and images. For me, a complete reset looks like this:

  1. First, list your containers with ps:
    $ docker ps --format="{{.Names}} {{.Status}}"
    
    cht-couch2pg_postgres_1 Up About a minute
    cht-couch2pg_cht-couch2pg_1 Up About a minute
    helper_test_medic-os_1 Up 14 minutes
    helper_test_haproxy_1 Up 14 minutes
    
  2. We want to delete the first two, so we’ll do an rm with the force flag (be sure you want to do this!):
    docker rm -f cht-couch2pg_postgres_1 cht-couch2pg_cht-couch2pg_1
    
    cht-couch2pg_postgres_1
    cht-couch2pg_cht-couch2pg_1
    
  3. Then we want to list the volumes to find the postgres one:
    $ docker volume ls
    
    DRIVER    VOLUME NAME
    local     cht-postgres-data
    local     helper_test_medic-data
    
  4. we want to delete the postgres one, so we’ll call rm again passing the force flag (and again, be sure you want to do this!):
    $ docker volume rm -f cht-postgres-data
    cht-postgres-data
    
  5. Let’s fetch and pull in git to make sure we’re up to date
    $ git fetch && git pull origin main
    From github.com:medic/cht-couch2pg
     * branch            main       -> FETCH_HEAD
    Already up to date.
    
  6. Finally, let’s restart our compose services, passing in the URL for our couch instance via COUCHDB_URL, ensuring it starts with the login and password and ends in /medic without the trailing slash:
    $ cht-couch2pg git:(main) COUCHDB_URL=https://medic:password@192-168-68-17.my.local-ip.co:8443/medic docker-compose up
    
    

To give you an idea of the happy path result, this is what we should expect if it all works:

Creating volume "cht-postgres-data" with default driver
Creating cht-couch2pg_postgres_1 ... done
Creating cht-couch2pg_cht-couch2pg_1 ... done
Attaching to cht-couch2pg_postgres_1, cht-couch2pg_cht-couch2pg_1
postgres_1      | The files belonging to this database system will be owned by user "postgres".
postgres_1      | This user must also own the server process.
postgres_1      | 
postgres_1      | The database cluster will be initialized with locale "en_US.utf8".
postgres_1      | The default database encoding has accordingly been set to "UTF8".
postgres_1      | The default text search configuration will be set to "english".
postgres_1      | 
postgres_1      | Data page checksums are disabled.
postgres_1      | 
postgres_1      | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1      | creating subdirectories ... ok
postgres_1      | selecting dynamic shared memory implementation ... posix
cht-couch2pg_1  | Starting couch2pg process
cht-couch2pg_1  | Set postgres URL to postgres://cht_couch2pg:cht_couch2pg_password@postgres:5432/cht
cht-couch2pg_1  | START Checking for cht couchdb at https://medic:password@192-168-68-17.my.local-ip.co:8443/medic
postgres_1      | selecting default max_connections ... 100
postgres_1      | selecting default shared_buffers ... 128MB
postgres_1      | selecting default time zone ... Etc/UTC
postgres_1      | creating configuration files ... ok
postgres_1      | running bootstrap script ... ok
postgres_1      | performing post-bootstrap initialization ... ok
postgres_1      | syncing data to disk ... ok
postgres_1      | 
postgres_1      | 
postgres_1      | Success. You can now start the database server using:
postgres_1      | 
postgres_1      |     pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1      | 
postgres_1      | initdb: warning: enabling "trust" authentication for local connections
postgres_1      | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1      | --auth-local and --auth-host, the next time you run initdb.
postgres_1      | waiting for server to start....2022-03-15 22:22:00.254 UTC [47] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1      | 2022-03-15 22:22:00.255 UTC [47] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1      | 2022-03-15 22:22:00.259 UTC [48] LOG:  database system was shut down at 2022-03-15 22:22:00 UTC
postgres_1      | 2022-03-15 22:22:00.263 UTC [47] LOG:  database system is ready to accept connections
postgres_1      |  done
postgres_1      | server started
postgres_1      | CREATE DATABASE
postgres_1      | 
postgres_1      | 
postgres_1      | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init-user-db.sh
postgres_1      | CREATE ROLE
postgres_1      | CREATE ROLE
postgres_1      | GRANT
postgres_1      | GRANT ROLE
postgres_1      | 
postgres_1      | 2022-03-15 22:22:00.548 UTC [47] LOG:  received fast shutdown request
postgres_1      | waiting for server to shut down....2022-03-15 22:22:00.549 UTC [47] LOG:  aborting any active transactions
postgres_1      | 2022-03-15 22:22:00.550 UTC [47] LOG:  background worker "logical replication launcher" (PID 54) exited with exit code 1
postgres_1      | 2022-03-15 22:22:00.551 UTC [49] LOG:  shutting down
postgres_1      | 2022-03-15 22:22:00.564 UTC [47] LOG:  database system is shut down
postgres_1      |  done
postgres_1      | server stopped
postgres_1      | 
postgres_1      | PostgreSQL init process complete; ready for start up.
postgres_1      | 
postgres_1      | 2022-03-15 22:22:00.679 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1      | 2022-03-15 22:22:00.679 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres_1      | 2022-03-15 22:22:00.679 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres_1      | 2022-03-15 22:22:00.682 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1      | 2022-03-15 22:22:00.686 UTC [85] LOG:  database system was shut down at 2022-03-15 22:22:00 UTC
postgres_1      | 2022-03-15 22:22:00.693 UTC [1] LOG:  database system is ready to accept connections
cht-couch2pg_1  | couchdb  is ready
cht-couch2pg_1  | check_if_postgres_is_ready with "pg_isready -q  -h postgres -U cht_couch2pg --d cht"
cht-couch2pg_1  | Postgres is ready moving on ...
cht-couch2pg_1  | Launching couch2pg
cht-couch2pg_1  | [2022-03-15T22:22:01.755Z DEBUG]:  { debug: true,
cht-couch2pg_1  |   v4Mode: false,
cht-couch2pg_1  |   couchdbUrl: '192-168-68-17.my.local-ip.co:8443/medic',
cht-couch2pg_1  |   postgresqlUrl: 'postgres:5432/cht',
cht-couch2pg_1  |   docLimit: 1000,
cht-couch2pg_1  |   changesLimit: 100,
cht-couch2pg_1  |   retryCount: 5,
cht-couch2pg_1  |   sleepMins: 60,
cht-couch2pg_1  |   couchdbUsersMetaDocLimit: 50 }
cht-couch2pg_1  | [10:22:01 PM] table couch2pg_migrations does not exist - creating it.
cht-couch2pg_1  | [10:22:01 PM] version of database is: 0
cht-couch2pg_1  | [10:22:01 PM] migrating up to 201803202020
cht-couch2pg_1  | [10:22:01 PM] running 201606091247.do.2318-standardise-schema.sql
cht-couch2pg_1  | [10:22:01 PM] running 201606091248.do.2124-support-edit-and-delete-of-documents.sql
cht-couch2pg_1  | [10:22:01 PM] running 201607132040.do.2506-unique-id-constraint.sql
cht-couch2pg_1  | [10:22:01 PM] running 201607210832.do.2523-remove-unused-index.sql
cht-couch2pg_1  | [10:22:01 PM] running 201611271809.do.2923-couch2-support.sql
cht-couch2pg_1  | [10:22:01 PM] running 201803202020.do.3423-multiple-db-support.sql
cht-couch2pg_1  | [2022-03-15T22:22:01.829Z INFO]:  Adapter is running in NORMAL mode
cht-couch2pg_1  | Package pg version 8.x.x not tested
cht-couch2pg_1  | Please raise issue to support pg > 7.x.x
cht-couch2pg_1  | Package pg version 8.x.x not tested
cht-couch2pg_1  | Please raise issue to support pg > 7.x.x
cht-couch2pg_1  | [2022-03-15T22:22:02.030Z INFO]:  Beginning couch2pg and xmlforms run at Tue Mar 15 2022 22:22:02 GMT+0000 (Coordinated Universal Time)
cht-couch2pg_1  | [2022-03-15T22:22:02.033Z DEBUG]:  Performing an import batch of up to 100 changes
cht-couch2pg_1  | [2022-03-15T22:22:02.057Z DEBUG]:  Downloading CouchDB changes feed from 0
cht-couch2pg_1  | [2022-03-15T22:22:02.204Z INFO]:  There are 67 changes to process
cht-couch2pg_1  | [2022-03-15T22:22:02.205Z DEBUG]:  There are 7 deletions and 60 new / changed documents
cht-couch2pg_1  | [2022-03-15T22:22:02.571Z DEBUG]:  Downloading CouchDB changes feed from 275-g1AAAAJjeJyd0UsKwjAQANBgBcWdCC6tXkAmSdu0K3sTzaehlKor13oTvYneRG9S8-lGKEJLYAIzzIOZqRFC0zJQaC7PF1kqkWPCtmAerk1pxJEIm6apyoAvjyYx4YRKzUVXwx9GrE0Uux9Jp1qSWPeVcivtW2nmpKzI0khCX-lgpWsrbZwkC04p7jvdaWwiupnPYHerhU5TGECQZJD28NrTagunAShICzZIe3ntbbWV01iUaELJIO3jNbe32t-SsFhmnVr1BcLxoB4
cht-couch2pg_1  | [2022-03-15T22:22:02.593Z INFO]:  There are 0 changes to process
cht-couch2pg_1  | [2022-03-15T22:22:02.594Z DEBUG]:  Import loop complete, 67 changes total
cht-couch2pg_1  | [2022-03-15T22:22:02.595Z DEBUG]:  Performing an import batch of up to 100 changes
cht-couch2pg_1  | [2022-03-15T22:22:02.598Z DEBUG]:  Downloading CouchDB changes feed from 0
cht-couch2pg_1  | [2022-03-15T22:22:02.707Z INFO]:  There are 53 changes to process
cht-couch2pg_1  | [2022-03-15T22:22:02.707Z DEBUG]:  There are 1 deletions and 52 new / changed documents
cht-couch2pg_1  | [2022-03-15T22:22:02.723Z DEBUG]:  Pulled 50 results from couchdb
cht-couch2pg_1  | [2022-03-15T22:22:02.723Z DEBUG]:  Clearing any existing documents from postgresql
cht-couch2pg_1  | [2022-03-15T22:22:02.724Z DEBUG]:  Inserting 50 results into postgresql
cht-couch2pg_1  | [2022-03-15T22:22:02.743Z DEBUG]:  Pulled 2 results from couchdb
cht-couch2pg_1  | [2022-03-15T22:22:02.744Z DEBUG]:  Clearing any existing documents from postgresql
cht-couch2pg_1  | [2022-03-15T22:22:02.744Z DEBUG]:  Inserting 2 results into postgresql
cht-couch2pg_1  | [2022-03-15T22:22:02.747Z DEBUG]:  Batch completed with 53 changes
cht-couch2pg_1  | [2022-03-15T22:22:02.747Z DEBUG]:  Performing an import batch of up to 100 changes
cht-couch2pg_1  | [2022-03-15T22:22:02.748Z DEBUG]:  Downloading CouchDB changes feed from 86-g1AAAAJjeJyd0EsKwjAQBuBgFcUH0hPoCSRJH2lX9iY6kyaUUnXlWm-iN9Gb6E1qknZZhJaBf2CG-RZTEUJmhZcTX16ussgxY1zsqClWmdUICG7qui4LD7yTGUxVrCIOsuvgD4Nbk7hvpbmTODBBuegrZVY6tNLCSaCCMErCvtLRSrdWWjpJplQDh57SeWyS3E0z2MNqvtMSyhKkdJD2bLSX1dZO01KkkKpB2rvRPlabOI1hDIHWg7Rvo7m_rZyGmMsw5l135Q8VT5-6
cht-couch2pg_1  | [2022-03-15T22:22:02.779Z INFO]:  There are 0 changes to process
cht-couch2pg_1  | [2022-03-15T22:22:02.782Z DEBUG]:  Import loop complete, 53 changes total
cht-couch2pg_1  | [2022-03-15T22:22:02.782Z DEBUG]:  Performing an import batch of up to 100 changes
cht-couch2pg_1  | [2022-03-15T22:22:02.786Z DEBUG]:  Downloading CouchDB changes feed from 0
cht-couch2pg_1  | [2022-03-15T22:22:02.823Z INFO]:  There are 16 changes to process
cht-couch2pg_1  | [2022-03-15T22:22:02.846Z DEBUG]:  Downloading CouchDB changes feed from 16-g1AAAAJbeJyd0ksKwjAQANCxFdSlJ9ATSJL-V_YmmsmHWmpdda030ZvoTfQmNf1AuyiFloEZSMhjJkwGAOvElrAVt0IkEmPKggMxQTNzZXHAXVmWaWJz-2oOVlpTKlAMPRhhcG8yHltpUUshasEYmyrFlXRqJaglETqaS1_Cpsil0pdcyTHhXAn3VrBqIXJ9omUwsZd8aTI8TDHYs5uMilCj4rO0V6O9ux_n3PE99GZpn0b79jTHc5VHZmm_Ruttg2KMkGhwG9I_-Niesg
cht-couch2pg_1  | [2022-03-15T22:22:02.869Z INFO]:  There are 0 changes to process
cht-couch2pg_1  | [2022-03-15T22:22:02.871Z DEBUG]:  Import loop complete, 16 changes total
cht-couch2pg_1  | [2022-03-15T22:22:02.871Z INFO]:  Refreshing materialised views
cht-couch2pg_1  | [2022-03-15T22:22:02.910Z DEBUG]:  { refresh_matviews: 1 }
cht-couch2pg_1  | [2022-03-15T22:22:02.910Z DEBUG]:  { debug: true,
cht-couch2pg_1  |   v4Mode: false,
cht-couch2pg_1  |   couchdbUrl: '192-168-68-17.my.local-ip.co:8443/medic',
cht-couch2pg_1  |   postgresqlUrl: 'postgres:5432/cht',
cht-couch2pg_1  |   docLimit: 50,
cht-couch2pg_1  |   changesLimit: 100,
cht-couch2pg_1  |   retryCount: 5,
cht-couch2pg_1  |   sleepMins: 60,
cht-couch2pg_1  |   couchdbUsersMetaDocLimit: 50 }
cht-couch2pg_1  | [2022-03-15T22:22:02.911Z INFO]:  Run completed. Next run at Tue Mar 15 2022 23:22:02 GMT+0000 (Coordinated Universal Time)
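For anyone reading the log above: the import loop pages through the CouchDB changes feed in batches of up to `COUCH2PG_CHANGES_LIMIT` changes, resuming from the last sequence each time, and stops when a page comes back empty ("There are 0 changes to process"). A minimal, hypothetical sketch of that pagination logic (the function name `batch_changes` and the in-memory list standing in for the changes feed are mine, not part of couch2pg):

```python
def batch_changes(changes, limit=100):
    """Yield successive batches of up to `limit` changes.

    Mimics the import loop seen in the couch2pg log: request a page,
    process it, advance the sequence, and stop once a page is empty
    ("Import loop complete").
    """
    seq = 0  # stands in for the CouchDB update sequence the feed resumes from
    while True:
        batch = changes[seq:seq + limit]
        if not batch:
            break  # empty page: the feed is drained
        yield batch
        seq += len(batch)

# e.g. the first run above had 67 changes, which fit in a single batch of 100
sizes = [len(b) for b in batch_changes(list(range(67)), limit=100)]
```

With 250 pending changes and a limit of 100, this would produce batches of 100, 100, and 50, matching the "batch of up to 100 changes" behaviour in the log.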

Thanks @mrjones

I destroyed all the containers I had locally, plus the volumes, ran `docker-compose up` afresh, and everything works fine now.
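For future readers, the clean restart described above can be sketched with two commands (assuming the compose file posted earlier; this is an operational sketch, not verified against every Compose version):

```shell
# Stop and remove the containers and the cht-net network, and also remove
# named volumes such as cht-postgres-data (-v / --volumes), so Postgres
# re-runs its init scripts on the next start.
docker-compose down --volumes

# Recreate everything from the compose file with fresh state.
docker-compose up -d
```

Removing the `cht-postgres-data` volume is what forces Postgres to re-initialise and re-create the couch2pg roles, which is why a fresh `docker-compose up` resolved the earlier errors.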