Trouble replicating database in a local instance created by cht-docker-compose

I am having trouble replicating a remote server to a local database in an instance that I created using the cht-docker-compose script. I am trying to set up replication from the remote server to the local target database, using the existing-database option.

Here, I supplied the same credentials that I use to log in to this local instance, i.e. the default cht-docker-compose username/password: medic/password.

However, I keep getting the error: “Your user name or password for target database is incorrect.”

I was also having the same problem.

I was able to get past the error by specifying the target as Existing remote database and providing the full URL for the target database: https://192-168-0-229.local-ip.medicmobile.org:10448/medic

After running for a few seconds, it showed an error. The medic documents count in my local increased from 40 to 1087 after this. There were 1752 documents in the source database.

When looking at the CouchDB logs, I could see errors like:

Replicator: error writing document form:contact:p30_health_center:edit, revision 5-9ea9282935b9a8c2e9be6f175be84e79, to target database https://192-168-0-229.local-ip.medicmobile.org:10448/medic/.
Error: bad_request, reason: When new_edits: false, the document needs _rev or _revisions specified.

After the error, the documents count increased very slowly as the replicator activity showed this warning: “The replication job will be tried at increasing intervals”.

I tried several times, but it was always the forms that were causing this error.

I didn’t need the forms, so I edited the replicator document (accessible from the 📄 icon) to filter out the forms:

... 
  "create_target": false,
  "continuous": false,
  "owner": "medic",
  "selector": {
    "type": {
      "$ne": "form"
    }
  }
}

After editing the existing replicator document, I was able to replicate all source documents except the forms.
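For reference, a complete `_replicator` document following this pattern would look something like the sketch below. The `_id` and the source/target URLs are placeholders (use your own instance URLs and credentials); the selector uses CouchDB’s Mango syntax to skip any doc whose type is “form”:

```json
{
  "_id": "replicate-without-forms",
  "source": "https://medic:password@source-cht-instance.example.org/medic",
  "target": "https://medic:password@192-168-0-229.local-ip.medicmobile.org:10448/medic",
  "create_target": false,
  "continuous": false,
  "owner": "medic",
  "selector": {
    "type": {
      "$ne": "form"
    }
  }
}
```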

Hi @binod ,

I followed the same steps you outlined and was able to set up replication (no credentials warning). Additionally, I filtered out forms like you mentioned, but the replication didn’t start. Below is the log from CouchDB.

[error] 2025-08-25T23:20:46.337625Z couchdb@127.0.0.1 <0.4821.0> -------- couch_replicator_httpc: auth plugin initialization failed "https://192-168-0-229.local-ip.medicmobile.org:10444/medic/" {session_request_failed,"https://192-168-0-229.local-ip.medicmobile.org:10444/_session","medic",sel_conn_closed}

[error] 2025-08-25T23:20:46.338277Z couchdb@127.0.0.1 <0.4821.0> -------- throw:{replication_auth_error,{session_request_failed,"https://192-168-0-229.local-ip.medicmobile.org:10444/_session","medic",sel_conn_closed}}: Replication 4b9e7ab8ac5cc9924b541a2a7d65833a failed to start "https://some-cht-core-url.org/medic/" -> "https://192-168-0-229.local-ip.medicmobile.org:10444/medic/" doc <<"shards/15555555-2aaaaaa9/_replicator.1756154917">>:<<"76681ebee8172ba7e0a31846d10007bd">> stack:[{couch_replicator_httpc,setup,1,[{file,"src/couch_replicator_httpc.erl"},{line,62}]},{couch_replicator_api_wrap,db_open,3,[{file,"src/couch_replicator_api_wrap.erl"},{line,67}]}]


[notice] 2025-08-25T23:21:34.568329Z couchdb@127.0.0.1 <0.446.0> -------- couch_replicator_scheduler: Job {"4b9e7ab8ac5cc9924b541a2a7d65833a",[]} started as <0.5636.0>

[notice] 2025-08-25T23:21:57.934218Z couchdb@127.0.0.1 <0.5975.0> 4ea40ad194 couchdb:5984 172.19.0.3 medic GET /_membership 200 ok 1

[notice] 2025-08-25T23:22:03.121530Z couchdb@127.0.0.1 <0.6088.0> 0592bdcd40 couchdb:5984 172.19.0.3 medic GET /_membership 200 ok 0

[error] 2025-08-25T23:22:05.702120Z couchdb@127.0.0.1 <0.5636.0> -------- couch_replicator_httpc: auth plugin initialization failed "https://192-168-0-229.local-ip.medicmobile.org:10444/medic/" {session_request_failed,"https://192-168-0-229.local-ip.medicmobile.org:10444/_session","medic",req_timedout}

[error] 2025-08-25T23:22:05.702747Z couchdb@127.0.0.1 <0.5636.0> -------- throw:{replication_auth_error,{session_request_failed,"https://192-168-0-229.local-ip.medicmobile.org:10444/_session","medic",req_timedout}}: Replication 4b9e7ab8ac5cc9924b541a2a7d65833a failed to start "https://some-cht-core-url.org/medic/" -> "https://192-168-0-229.local-ip.medicmobile.org:10444/medic/" doc <<"shards/15555555-2aaaaaa9/_replicator.1756154917">>:<<"76681ebee8172ba7e0a31846d10007bd">> stack:[{couch_replicator_httpc,setup,1,[{file,"src/couch_replicator_httpc.erl"},{line,62}]},{couch_replicator_api_wrap,db_open,3,[{file,"src/couch_replicator_api_wrap.erl"},{line,67}]}]

I think it’s worth mentioning that the remote database in question here is a multi-node clustered database, in case that changes the approach.

For the record, I do not think this is specific to docker-helper deployments. As I understand it, it is a problem with Fauxton functionality that affects all CHT-deployed Couch instances (prod and docker-helper). Looking at the network request Fauxton makes when you click the “Start Replication” button, it is making a single GET request to the root domain (e.g.
https://192-168-1-193.local-ip.medicmobile.org:34327/). I think it is expecting to get back the Couch metadata JSON (like what you see on your local Couch instance at http://localhost:5984/). But the Couch container deployed with a CHT instance gets all its traffic proxied to it, and the CHT app is served at the root URL (not the JSON data). So, TL;DR, I think this is just an incompatibility between Fauxton and our proxied Couch instances.

Regardless, I do not think there should be any real behavior/performance difference between local and remote replication, since the operations are the same (it is just a bit more inconvenient to fill out the whole URL in the form…).


Speaking of convenience :sweat_smile: chtoolbox does have a handy script for triggering replication. You can either launch it and let it run in the background or follow the doc count as things get replicated:

chtoolbox docs
❯ chtx doc replicate --help
CHT Toolbox

CHT Toolbox 0.0.1

USAGE

$ replicate [(-f, --follow)] (-c, --contacts text)... [--all] <source> <target>

DESCRIPTION

Triggers a one-time server-side replication of the docs from the source to the target database.

ARGUMENTS

<source>

  A user-defined piece of text.

  The replication source. This can either be a database name for the current instance (e.g. 'medic') or a full URL to a remote Couch database (including username/password). E.g. 'https://medic:password@192-168-1-80.local-ip.medicmobile.org:38593/medic'

<target>

  A user-defined piece of text.

  The replication target. This can either be a database name for the current instance (e.g. 'medic') or a full URL to a remote Couch database (including username/password). E.g. 'https://medic:password@192-168-1-80.local-ip.medicmobile.org:38593/medic'

OPTIONS

(-f, --follow)

  A true or false value.

  After triggering replication, wait for job to complete.

  This setting is optional.

(-c, --contacts text)...

  A user-defined piece of text.

  Replicate contacts with the given contact type

  This option may be repeated zero or more times.

--all

  A true or false value.

  Replicate everything including design documents

  This setting is optional.
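Under the hood, a one-time server-side replication like this boils down to a single POST to CouchDB’s `/_replicate` endpoint. A minimal Python sketch of building that request body — the URLs and credentials below are placeholders, and the selector is the same forms-excluding filter discussed above:

```python
import json


def build_replicate_body(source, target, selector=None, continuous=False):
    """Build the JSON body for a POST to CouchDB's /_replicate endpoint."""
    body = {"source": source, "target": target, "continuous": continuous}
    if selector is not None:
        # Mango selector limiting which docs get replicated
        body["selector"] = selector
    return body


body = build_replicate_body(
    "https://medic:password@source-cht-instance.example.org/medic",
    "https://medic:password@192-168-0-229.local-ip.medicmobile.org:10444/medic",
    selector={"type": {"$ne": "form"}},
)
print(json.dumps(body, indent=2))
# POST this body (Content-Type: application/json) to the replicating
# instance's /_replicate endpoint to trigger a one-time replication.
```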

I don’t think this should matter, since the replication happens purely at the doc level and I would not expect clustering/sharding to have any direct impact.

Are you trying to setup the replication on the https://192-168-0-229.local-ip.medicmobile.org:10444 instance or on the other one? Does the other instance have an actual public URL? If one instance has a public ip/URL and the other is a local-ip instance, you have to setup the replication document on the local-ip instance. If you try to go the other way, the other Couch instance will not be able to communicate with the local-ip instance (because the local-ip URL will not resolve outside the LAN).

Thank you. I haven’t tested this with a directly installed Couch or any other setup; I wanted to be specific about how the local instance was brought up. The last time I used replication was maybe 4–5 years ago.

Yes, I am trying to setup replication on this local URL.

Yes, the other instance has an actual Public URL.

I followed up on this and tested with a CouchDB that was directly installed and configured for development with CHT. This was a brew-installed CouchDB on a MacBook.

If I set up replication on this instance, I can do it just by selecting a local database and specifying the database name, and I didn’t need to follow the extra steps @binod mentioned.

Also, I didn’t have to exclude forms with this approach.

I was also able to set up replication with the approach @binod mentioned on the instance that I brought up with cht-docker-helper. This required more tweaks. Everything @binod mentioned applies, and I wanted to be specific about a few more things.

While specifying the remote target URL, https://127-0-0-1.local-ip.medicmobile.org:10444/ didn’t work. I had to specify the URL with my network IP instead, something like https://192-168-0-232.local-ip.medicmobile.org:10444.

Like @binod, I also got an error on the _branding document and some other documents. I had to update the selector to specifically select certain record types.

"selector": {
    "type": {
      "$eq": "data_record"
    }
  }

And then update it again for another type of records.
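In hindsight, instead of updating the selector once per type, Mango’s `$in` operator could select multiple types in one go — a sketch, where the exact list of types is just an example and would depend on which records you actually need:

```json
"selector": {
  "type": {
    "$in": ["data_record", "person", "clinic"]
  }
}
```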

Ideally, I would expect replication to run as successfully on a CHT-wrapped CouchDB as it does on a fresh CouchDB.


I suspect Josh’s point about replication expecting things at a certain URL and then the CHT API not allowing access is part of the issue.

What if you go into your Docker Helper compose directory and expose the Couch ports? For a project called cht-replication, that would be cd ~/.medic/cht-docker/cht-replication-dir/compose/; then edit the cht-couchdb.yml file and add a ports stanza after the networks stanza, like this:

services:
  couchdb:
    ---REMOVED STUFF FOR BREVITY---
    networks:
      cht-net:
    ports:
      - "5984:5984"

volumes:
  cht-credentials:

Now if you stop your Docker Helper instance and start it again, you can access your CouchDB database at http://localhost:5984 and Fauxton at http://localhost:5984/_utils/ (you’ll need to stop the Couch running via brew if it’s still running). This way you can call directly into Couch, bypassing any URL rewrites that nginx and the API are doing. It should work exactly like it does when you run on bare metal.

Let us know how it goes!