"CouchDB is already configured" continuous log!

Hello, so I am working with a migrated CHT instance that has a mounted volume at /srv. But on starting the containers, I keep getting "CouchDB is already configured" continuously in the medic-os container, and my CHT never starts in the browser. The proxy starts fine, but I never get to the default CHT login page. Any thoughts on fixing this? I suspect the starting point is the continuous log from the medic-os container mentioned above.
Note: Using the same process, I have mounted other volumes on different servers at the same location (i.e. /srv) and things worked fine.

Hi Herbert! Can you supply some more information? For example, a copy of the api and couchdb logs (with private info redacted), and the error message you get in the browser? Also what version of CHT are you using?

The continuous logs on the medic-os container look like this (full log on Pastebin):

[2024/11/22 12:27:17] Info: CouchDB is already configured
[2024/11/22 12:27:19] …

And the docker compose file looks like this (full file on Pastebin): version: '3.7' …

How can I get to the CouchDB logs within the medic-os container, i.e. the actual path of the logs?

I can’t remember off the top of my head but probably /logs.

What version of the CHT are you running?

With the mention of medic-os, I’m going to guess Herbert is running CHT 3.x. We have some debug info here on where to find logs, but here’s the tl;dr:

  • To view logs, first run this to access a shell in the medic-os container: docker exec -it medic-os /bin/bash
  • View CouchDB logs: less /srv/storage/medic-core/couchdb/logs/startup.log
  • View medic-api logs: less /srv/storage/medic-api/logs/medic-api.log
  • View medic-sentinel logs: less /srv/storage/medic-sentinel/logs/medic-sentinel.log
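The steps above can also be run in one go from the host, without opening an interactive shell first. Here is a small sketch: the helper function name is mine (not part of the CHT tooling), the container name `medic-os` and log paths are the ones from this thread, and `DRY_RUN=1` just prints the command so you can check it without a running container.

```shell
# Hypothetical helper: view a log file inside the medic-os container
# in one step. Set DRY_RUN=1 to print the docker command instead of
# running it (useful if the container is not up).
view_medic_log() {
  local cmd=(docker exec -it medic-os less "$1")
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "${cmd[*]}"
  else
    "${cmd[@]}"
  fi
}

# Usage, with a path from the list above:
# view_medic_log /srv/storage/medic-api/logs/medic-api.log
```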

Thanks much. I notice I don't have any logs at /srv/storage/medic-core/couchdb.

And on running /boot/supervisor-inspect within the container, I notice couchdb keeps restarting, as seen below:

Package ‘medic-core’:
  Service ‘couchdb’:
    Status: Failure
    Up: 0 seconds, Restarts: 3083
    Attributes: watched, expected
    Service PID: None, Supervisor
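For reference, the status report above can also be pulled from the host in one command (same `docker exec` pattern as earlier in the thread; `DRY_RUN=1` is my addition so the sketch can be checked without a running container):

```shell
# Run the supervisor status report from the host. With DRY_RUN=1 the
# function only prints the docker command it would execute.
inspect_medic_services() {
  local cmd=(docker exec medic-os /boot/supervisor-inspect)
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "${cmd[*]}"
  else
    "${cmd[@]}"
  fi
}

# Usage (requires the medic-os container to be running):
# inspect_medic_services
```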

@Herbert - in [Shards Issue after migrating an aws volume], you are facing a shard issue. Are you trying to configure two instances of CHT such that you have two issues (this thread and the other), or do you only have one instance and you've fixed the problem in this thread?

If you only have one, please let us know what the fix was to get around the "CouchDB is already configured" issue - thanks!