CHT Instance crashing and failing to sync

Happy New Year…
Our app has been running fine and data had been coming in until yesterday, when I was informed that no one could sync. Trying to access the desktop app today, nothing shows; it just spins round and round.

I have checked and confirmed the Docker containers are running; I restarted them and hit the same problem, with the console output below. I reviewed the configs and found everything fine, then restarted the containers twice: the first time the app started working, then stopped again.

The second time, I stopped the two containers and restarted them, and this time the app is running. I want to monitor it until morning to see whether we are good to sync all the collected data tomorrow, even as I try to figure out what happened.

Hi @oyierphil !

Can you please share the logs from medic-api and couchdb, especially around the restart times? What activity was seen in the logs before/after? Thanks!
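If it helps, one way to capture those logs out of the container for sharing (container name and log paths are assumptions based on a default medic-os setup, as used later in this thread; adjust if your layout differs):

```shell
# Copy the most recent entries from both services out of the container.
docker exec medic-os tail -n 200 /srv/storage/medic-api/logs/medic-api.log > medic-api.log
docker exec medic-os tail -n 200 /srv/storage/medic-core/couchdb/logs/startup.log > couchdb.log
```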

@hareet
Just arrived back in the city from the holiday break; the app stopped working again today while I was on the road, so I couldn't do much. I stopped and restarted the containers and the app came back up.

Woke up this morning and the app is down again. I stopped and restarted the containers and got the screen below before the app could fully load. Any idea where the culprit is?

I need to find out where the bottleneck is; I will get the logs in the morning and share them.

Logs: medic-api
[2022-11-13 17:46:04] Node Environment Options: '--max_old_space_size=8192'
[2022-11-13 17:46:04] Node Version: 8.11.4 in production mode
[2022-11-13 17:46:04] COUCH_URL http://medic-api:b8d652296958133c@haproxy:5984/medic
[2022-11-13 17:46:04] COUCH_NODE_NAME couchdb@127.0.0.1
[2022-11-13 17:46:04] CouchDB Version: 2.3.1
[2022-11-13 17:46:05] 2022-11-13 17:46:05 INFO: Extracting ddoc
[2022-11-13 17:46:05] 2022-11-13 17:46:05 INFO: Updating docs: _design/sentinel
[2022-11-13 17:46:05] 2022-11-13 17:46:05 INFO: Updating docs: _design/users-meta
[2022-11-13 17:46:06] 2022-11-13 17:46:06 INFO: Updating docs: _design/medic-client
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Updating service worker meta doc
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: DDoc extraction completed successfully
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Extracting resources
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Extracting resources completed successfully
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Extracting initial documents
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Updating settings with new defaults
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Extracting initial documents completed successfully
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Loading configuration
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Not updating settings - the existing settings are already u$
[2022-11-13 17:46:07] 2022-11-13 17:46:07 WARN: Disabled transition "update_clinics"
[2022-11-13 17:46:07] 2022-11-13 17:46:07 WARN: Disabled transition "self_report"
[2022-11-13 17:46:07] 2022-11-13 17:46:07 WARN: Disabled transition "registration"
[2022-11-13 17:46:07] 2022-11-13 17:46:07 WARN: Disabled transition "accept_patient_reports"
[2022-11-13 17:46:07] 2022-11-13 17:46:07 WARN: Disabled transition "accept_case_reports"
[2022-11-13 17:46:07] 2022-11-13 17:46:07 WARN: Disabled transition "generate_shortcode_on_contacts"
[2022-11-13 17:46:07] 2022-11-13 17:46:07 INFO: Loading transition "generate_patient_id_on_people"

Logs: couchdb

[info] 2022-11-13T17:44:54.199449Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.206874Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.259038Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.259298Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.269306Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.269549Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.278873Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.287200Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.287423Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.299464Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.

[info] 2022-11-13T17:44:54.299578Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2022-11-13T17:44:54.312988Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[notice] 2022-11-13T17:44:54.357613Z couchdb@127.0.0.1 <0.96.0> -------- config: [couchdb] uuid set to 4aee0600d8bf95fc39e4a9644811669c for reason nil
[info] 2022-11-13T17:44:54.410501Z couchdb@127.0.0.1 <0.218.0> -------- open_result error {not_found,no_db_file} for _users
[info] 2022-11-13T17:44:54.476273Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.

[info] 2022-11-13T17:44:54.476403Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2022-11-13T17:44:54.476647Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.476725Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node 'couchdb@127.0.0.1'
[notice] 2022-11-13T17:44:54.517546Z couchdb@127.0.0.1 <0.281.0> -------- rexi_server : started servers
[notice] 2022-11-13T17:44:54.521672Z couchdb@127.0.0.1 <0.286.0> -------- rexi_buffer : started servers
[info] 2022-11-13T17:44:54.521999Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.558480Z couchdb@127.0.0.1 <0.218.0> -------- open_result error {not_found,no_db_file} for _nodes
[warning] 2022-11-13T17:44:54.558614Z couchdb@127.0.0.1 <0.294.0> -------- creating missing database: _nodes
[info] 2022-11-13T17:44:54.613532Z couchdb@127.0.0.1 <0.218.0> -------- open_result error {not_found,no_db_file} for _dbs
[warning] 2022-11-13T17:44:54.613649Z couchdb@127.0.0.1 <0.309.0> -------- creating missing database: _dbs
[warning] 2022-11-13T17:44:54.613742Z couchdb@127.0.0.1 <0.308.0> -------- creating missing database: _dbs
[info] 2022-11-13T17:44:54.626792Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.627019Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric started on node 'couchdb@127.0.0.1'
[info] 2022-11-13T17:44:54.659383Z couchdb@127.0.0.1 <0.9.0> -------- Application chttpd started on node 'couchdb@127.0.0.1'
[error] 2022-11-13T17:44:54.683903Z couchdb@127.0.0.1 emulator -------- Error in process <0.348.0> on node 'couchdb@127.0.0.1' with exit value:
{database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,$
[notice] 2022-11-13T17:44:54.683931Z couchdb@127.0.0.1 <0.347.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:lo

Thanks for the logs @oyierphil ! I’m just adding a note that I’ve created a separate thread for this new issue. This way it will stand on its own from the original thread so that others with the same sync issue may more easily find it.

@oyierphil I’m noticing a few things.

First, what Docker image are you running? You can identify this by looking at the image tag declaration in your compose file.
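For example (the compose filename is an assumption; use whatever yours is called):

```shell
# Show the image tag declared in the compose file, and what is actually running:
grep -n 'image:' docker-compose*.yml
docker ps --format '{{.Names}}\t{{.Image}}'
```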

It appears as if CouchDB does not see a _users database, which should already exist. I'm wondering if your data drive did not mount correctly on server restart. Do you have a backup of your data drive? Let's check by viewing /srv in the container, and by running lsblk on the server itself. We may have to mount the disk back on /srv and then restart the medic-os container.
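A rough way to run both checks (container name medic-os assumed, as elsewhere in this thread):

```shell
# On the host: which disks are attached, and where are they mounted?
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Inside the container: does /srv contain the expected data directories?
docker exec medic-os ls -la /srv
# And how full is the filesystem backing it?
docker exec medic-os df -h /srv
```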

@hareet, @mrjones
I have two other instances which are running fine. After spending time looking for the culprit, I noted that storage on our instance is exhausted, as below:
(screenshot: CHT App Errors_Storage)

Looks like we underestimated the storage requirements for the project: we expect about 300,000 records from about 750 CHVs. We had allocated a 500 GB HDD and 16 GB RAM to the VM, which we increased to about 800 GB today, and it is still full; I suspect that is why the app isn't running…

We are procuring a 3 TB HDD tomorrow morning and dedicating it to this instance, and will then share the results. Is there a way to calculate the resource requirements?
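There is no exact formula, but a back-of-envelope estimate helps. Every number below is an illustrative assumption; a better input is your current database size divided by your current record count:

```shell
# Rough storage estimate -- all values here are assumptions to replace
# with measurements from your own instance.
RECORDS=300000      # expected total reports
AVG_DOC_KB=25       # assumed average doc size, incl. revisions
OVERHEAD=3          # assumed multiplier for view indexes and compaction headroom
echo "$(( RECORDS * AVG_DOC_KB * OVERHEAD / 1024 / 1024 )) GB"
```

With these made-up inputs the estimate is on the order of tens of GB, which suggests record volume alone may not explain a full 800 GB disk; runaway service logs are a common culprit.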

Running lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
loop0 7:0 0 55.6M 1 loop /snap/core18/2654
loop1 7:1 0 55.6M 1 loop /snap/core18/2667
loop2 7:2 0 70.8M 1 loop /snap/powershell/225
loop3 7:3 0 70.8M 1 loop /snap/powershell/226
loop4 7:4 0 63.2M 1 loop /snap/core20/1738
loop5 7:5 0 91.9M 1 loop /snap/lxd/24061
loop6 7:6 0 63.3M 1 loop /snap/core20/1778
loop7 7:7 0 91.8M 1 loop /snap/lxd/23991
loop8 7:8 0 49.6M 1 loop /snap/snapd/17883
sda 8:0 0 500G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 249G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 124.5G 0 lvm /
sr0 11:0 1 1.2G 0 rom

Do you have a backup of your data drive?
No, I was considering cloud backup once the app is running fine. We are done with creating user accounts (thanks to bulk user account creation), and have configured client purging rules to delete data at the end of every Friday evening after sync; homework for January 2023.
(screenshot: CHT App Errors_Storage_2)

@oyierphil - great work figuring out the issue around disk space.

Sounds like you're aware of the best practices for hosting an instance, but just in case, we have published some info on monitoring and alerting.

@oyierphil Great analysis!
Before we add additional storage, let's make sure we have cleared logs from services inside medic-os that we aren't using. I'm hopeful you won't have to add more storage. Our documentation has you stop these extra services, but they can come back after various container/server restarts and then fill log files/disk space as they continuously fail.

Can you run the following commands to tidy up your disk?

Inside your container:

/boot/svc-stop medic-core openssh
/boot/svc-stop medic-rdbms
/boot/svc-stop medic-couch2pg
rm -rf /srv/storage/medic-rdbms/logs/postgresql/*
rm -rf /srv/storage/medic-couch2pg/logs/*
rm -rf /srv/storage/medic-core/openssh/logs/*

If those services were failing, you should see significant space clear up, and medic-os will be fine.
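To see where the space is actually going before and after the cleanup, something like this inside the container can help (a generic disk-usage check, not CHT-specific):

```shell
# Largest directories under /srv/storage, biggest first:
du -xh --max-depth=2 /srv/storage 2>/dev/null | sort -rh | head -n 20
# Overall usage of the filesystem backing /srv:
df -h /srv
```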

We've also had a separate bug where our logrotate permissions break if you have moved the data onto a volume. Let's run a few commands to make sure logrotate is working:

Inside the container, please run:

chown root:root /etc/logrotate.conf && chown root:root /etc/logrotate.d/*

/sbin/logrotate -vf /etc/logrotate.conf

@hareet
I did run all the commands above; no space was cleared up. I share the last output of running /sbin/logrotate -vf /etc/logrotate.conf:

copying /srv/storage/audit/haproxy.log to /srv/storage/audit/haproxy.log.1
error: error writing to /srv/storage/audit/haproxy.log.1: No space left on device
error: error copying /srv/storage/audit/haproxy.log to /srv/storage/audit/haproxy.log.1: No space left on device

rotating pattern: /var/log/unattended-upgrades/unattended-upgrades.log
/var/log/unattended-upgrades/unattended-upgrades-dpkg.log
/var/log/unattended-upgrades/unattended-upgrades-shutdown.log
 forced from command line (6 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/unattended-upgrades/unattended-upgrades.log
  log /var/log/unattended-upgrades/unattended-upgrades.log does not exist -- skipping
Creating new state
considering log /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
  log /var/log/unattended-upgrades/unattended-upgrades-dpkg.log does not exist -- skipping
Creating new state
considering log /var/log/unattended-upgrades/unattended-upgrades-shutdown.log
  log /var/log/unattended-upgrades/unattended-upgrades-shutdown.log does not exist -- skipping
Creating new state
root@a56cf857d4ca:/srv/storage/medic-core#

@hareet
Thank you for the cleanup script; the app has been running since the cleanup this morning. Syncing has been happening; I haven't told everyone to sync yet, but some CHVs noticed we are up and have been syncing.
We moved the VM to the new data store and syncing has been going on. I am monitoring app performance now, and will follow the advice from @mrjones on monitoring and alerting.

@hareet
Our app has just gone down again, and I noticed the medic-core service wasn't running. I restarted it, with the following feedback:

Debug: Service 'medic-core/nginx' exited with status 143
Info: Service 'medic-core/nginx' restarted successfully
Success: Finished restarting services in package 'medic-core'

I have deleted all the extra log files and ran the cleanup script again. I note an extra file created in the log folders named X…log.1.gz, while the old log file remains.
I also highlight some output I don't understand:

old log /srv/storage/audit/haproxy.log.0.gz does not exist
log /srv/storage/audit/haproxy.log.105.gz doesn't exist -- won't try to dispose of it
copying /srv/storage/audit/haproxy.log to /srv/storage/audit/haproxy.log.1
error: error writing to /srv/storage/audit/haproxy.log.1: No space left on device
error: error copying /srv/storage/audit/haproxy.log to /srv/storage/audit/haproxy.log.1: No space left on device

rotating pattern: /var/log/unattended-upgrades/unattended-upgrades.log
/var/log/unattended-upgrades/unattended-upgrades-dpkg.log
/var/log/unattended-upgrades/unattended-upgrades-shutdown.log
forced from command line (6 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/unattended-upgrades/unattended-upgrades.log
log /var/log/unattended-upgrades/unattended-upgrades.log does not exist -- skipping
Creating new state
considering log /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
log /var/log/unattended-upgrades/unattended-upgrades-dpkg.log does not exist -- skipping
Creating new state

Our app is not running, thus no one can sync.
After going through the config files one by one, I get: ERROR Received error code 404 connecting to https://xxxxx. Check the server and try again. Time to take a break now?

Hi Everyone
Yesterday, I noted that some services were not running; we are still down, as below:
docker exec -it medic-os /boot/supervisor-inspect
Package 'horticulturalist':
Service 'horticulturalist':
Status: Up
Up: 2 seconds, Restarts: 227
Attributes: watched, running, expected
Service PID: 54161, Supervisor PID: 344

Package 'medic-api':
Service 'medic-api':
Status: Failure
Up: 4 seconds, Restarts: 213
Attributes: watched, expected
Service PID: None, Supervisor PID: 387

Package 'medic-core':
Service 'couchdb':
Status: Up
Up: 2 seconds, Restarts: 279
Attributes: watched, running, expected
Service PID: 54192, Supervisor PID: 428
Service 'nginx':
Status: Up
Up: 1238 seconds, Restarts: 1
Attributes: watched, running, expected
Service PID: 4823, Supervisor PID: 475
Service 'openssh':
Status: Disabled
Up: 0 seconds, Restarts: 0
Attributes: disabled
Service PID: None, Supervisor PID: None

Package 'medic-couch2pg':
Service 'medic-couch2pg':
Status: Disabled
Up: 0 seconds, Restarts: 270
Attributes: disabled
Service PID: None, Supervisor PID: None

Package 'medic-rdbms':
Service 'postgresql':
Status: Disabled
Up: 0 seconds, Restarts: 338
Attributes: disabled
Service PID: None, Supervisor PID: None

Package 'medic-sentinel':
Service 'medic-sentinel':
Status: Up
Up: 0 seconds, Restarts: 237
Attributes: watched, running, expected
Service PID: 54774, Supervisor PID: 691

Package 'system-services':
Service 'cron':
Status: Up
Up: 1307 seconds, Restarts: 0
Attributes: watched, running, expected
Service PID: 757, Supervisor PID: 740
Service 'syslog':
Status: Up
Up: 1307 seconds, Restarts: 0
Attributes: watched, running, expected
Service PID: 784, Supervisor PID: 772

docker exec -it medic-os head -n70 /srv/storage/medic-api/logs/medic-api.log
[2023-01-09 22:02:45] 2023-01-09 22:02:45 INFO: Running server checks…
[2023-01-09 22:02:45] Node Environment Options: '--max_old_space_size=8192'
[2023-01-09 22:02:45] Node Version: 8.11.4 in production mode
[2023-01-09 22:02:45] COUCH_URL http://haproxy:5984/medic
[2023-01-09 22:02:45] COUCH_NODE_NAME couchdb@127.0.0.1
Expected a 401 when accessing db without authentication.
Instead we got a 503
[2023-01-09 22:02:48] 2023-01-09 22:02:48 ERROR: Fatal error initialising medic-api
[2023-01-09 22:02:48] 2023-01-09 22:02:48 ERROR: { Error: CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub
[2023-01-09 22:02:48] at ClientRequest.net.get (/srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/@medic/server-checks/src/checks.js:63:16)
[2023-01-09 22:02:48] at Object.onceWrapper (events.js:315:30)
[2023-01-09 22:02:48] at emitOne (events.js:116:13)
[2023-01-09 22:02:48] at ClientRequest.emit (events.js:211:7)
[2023-01-09 22:02:48] at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:543:21)
[2023-01-09 22:02:48] at HTTPParser.parserOnHeadersComplete (_http_common.js:112:17)
[2023-01-09 22:02:48] at Socket.socketOnData (_http_client.js:440:20)
[2023-01-09 22:02:48] at emitOne (events.js:116:13)
[2023-01-09 22:02:48] at Socket.emit (events.js:211:7)
[2023-01-09 22:02:48] at addChunk (_stream_readable.js:263:12)
[2023-01-09 22:02:48] [stack]: ‘Error: CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub at ClientRequest.net.get (/srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/@medic/server-checks/src/checks.js:63:16)\n at Object.onceWrapper (events.js:315:30)\n at emitOne (events.js:116:13)\n at ClientRequest.emit (events.js:211:7)\n at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:543:21)\n at HTTPParser.parserOnHeadersComplete (_http_common.js:112:17)\n at Socket.socketOnData (_http_client.js:440:20)\n at emitOne (events.js:116:13)\n at Socket.emit (events.js:211:7)\n at addChunk (_stream_readable.js:263:12)’,
[2023-01-09 22:02:48] [message]: ‘CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub’ }
[2023-01-09 22:02:51] 2023-01-09 22:02:51 INFO: Running server checks…
[2023-01-09 22:02:51] Node Environment Options: '--max_old_space_size=8192'
[2023-01-09 22:02:51] Node Version: 8.11.4 in production mode
[2023-01-09 22:02:51] COUCH_URL http://haproxy:5984/medic
[2023-01-09 22:02:51] COUCH_NODE_NAME couchdb@127.0.0.1
[2023-01-09 22:02:54] 2023-01-09 22:02:54 ERROR: Error watching sentinel changes, restarting: { FetchError: invalid json response body at http://haproxy:5984/medic-sentinel/ reason: Unexpected token < in JSON at position 0
[2023-01-09 22:02:54] at /srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/node-fetch/lib/index.js:272:32
[2023-01-09 22:02:54] at
[2023-01-09 22:02:54] at process._tickCallback (internal/process/next_tick.js:188:7)
[2023-01-09 22:02:54] message: ‘invalid json response body at http://haproxy:5984/medic-sentinel/ reason: Unexpected token < in JSON at position 0’,
[2023-01-09 22:02:54] type: ‘invalid-json’,
[2023-01-09 22:02:54] [stack]: ‘FetchError: invalid json response body at http://haproxy:5984/medic-sentinel/ reason: Unexpected token < in JSON at position 0\n at /srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/node-fetch/lib/index.js:272:32\n at \n at process._tickCallback (internal/process/next_tick.js:188:7)’ }
Expected a 401 when accessing db without authentication.
Instead we got a 503
[2023-01-09 22:02:54] 2023-01-09 22:02:54 ERROR: Fatal error initialising medic-api
[2023-01-09 22:02:54] 2023-01-09 22:02:54 ERROR: { Error: CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub
[2023-01-09 22:02:54] at ClientRequest.net.get (/srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/@medic/server-checks/src/checks.js:63:16)
[2023-01-09 22:02:54] at Object.onceWrapper (events.js:315:30)
[2023-01-09 22:02:54] at emitOne (events.js:116:13)
[2023-01-09 22:02:54] at ClientRequest.emit (events.js:211:7)
[2023-01-09 22:02:54] at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:543:21)
[2023-01-09 22:02:54] at HTTPParser.parserOnHeadersComplete (_http_common.js:112:17)
[2023-01-09 22:02:54] at Socket.socketOnData (_http_client.js:440:20)
[2023-01-09 22:02:54] at emitOne (events.js:116:13)
[2023-01-09 22:02:54] at Socket.emit (events.js:211:7)
[2023-01-09 22:02:54] at addChunk (_stream_readable.js:263:12)
[2023-01-09 22:02:54] [stack]: ‘Error: CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub at ClientRequest.net.get (/srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/@medic/server-checks/src/checks.js:63:16)\n at Object.onceWrapper (events.js:315:30)\n at emitOne (events.js:116:13)\n at ClientRequest.emit (events.js:211:7)\n at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:543:21)\n at HTTPParser.parserOnHeadersComplete (_http_common.js:112:17)\n at Socket.socketOnData (_http_client.js:440:20)\n at emitOne (events.js:116:13)\n at Socket.emit (events.js:211:7)\n at addChunk (_stream_readable.js:263:12)’,
[2023-01-09 22:02:54] [message]: ‘CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub’ }
[2023-01-09 22:02:57] 2023-01-09 22:02:57 INFO: Running server checks…
[2023-01-09 22:02:57] Node Environment Options: '--max_old_space_size=8192'
[2023-01-09 22:02:57] Node Version: 8.11.4 in production mode
[2023-01-09 22:02:57] COUCH_URL http://haproxy:5984/medic
[2023-01-09 22:02:57] COUCH_NODE_NAME couchdb@127.0.0.1
[2023-01-09 22:03:00] 2023-01-09 22:03:01 ERROR: Error watching sentinel changes, restarting: { FetchError: invalid json response body at http://haproxy:5984/medic-sentinel/ reason: Unexpected token < in JSON at position 0
[2023-01-09 22:03:00] at /srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/node-fetch/lib/index.js:272:32
[2023-01-09 22:03:01] at
[2023-01-09 22:03:01] at process._tickCallback (internal/process/next_tick.js:188:7)
[2023-01-09 22:03:01] message: ‘invalid json response body at http://haproxy:5984/medic-sentinel/ reason: Unexpected token < in JSON at position 0’,
[2023-01-09 22:03:01] type: ‘invalid-json’,
[2023-01-09 22:03:01] [stack]: ‘FetchError: invalid json response body at http://haproxy:5984/medic-sentinel/ reason: Unexpected token < in JSON at position 0\n at /srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/node-fetch/lib/index.js:272:32\n at \n at process._tickCallback (internal/process/next_tick.js:188:7)’ }
Expected a 401 when accessing db without authentication.
Instead we got a 503
[2023-01-09 22:03:01] 2023-01-09 22:03:01 ERROR: Fatal error initialising medic-api
[2023-01-09 22:03:01] 2023-01-09 22:03:01 ERROR: { Error: CouchDB security seems to be misconfigured, see: cht-core/DEVELOPMENT.md at master · medic/cht-core · GitHub
[2023-01-09 22:03:01] at ClientRequest.net.get (/srv/software/medic-api/md5-IaUJW7p4rpaMwbWhKl5C1A==/node_modules/@medic/server-checks/src/checks.js:63:16)
[2023-01-09 22:03:01] at Object.onceWrapper (events.js:315:30)
[2023-01-09 22:03:01] at emitOne (events.js:116:13)
[2023-01-09 22:03:01] at ClientRequest.emit (events.js:211:7)
[2023-01-09 22:03:01] at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:543:21)

I have tried to follow the online troubleshooting sessions to get our app up, including https://forum.communityhealthtoolkit.org/t/error-failed-to-get-a-response/2294/5, but so far none has worked. It looks like something has failed; I am trying to get everything up this morning without losing data. Any ideas?

Hi @oyierphil

Looking specifically at this excerpt from your log:

It looks like your CouchDb is not working correctly: you’re getting a 503 response code for a request that should result in success. Can you please check your CouchDb logs for clues?
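For anyone following along, the 401-vs-503 startup check that medic-api performs can be reproduced by hand. This is only a sketch; the host and port are assumptions based on the COUCH_URL shown in the logs above, so adjust them to your deployment:

```shell
# medic-api expects an unauthenticated request to the medic db to be rejected
# with 401. A 5xx here instead means haproxy/CouchDB behind it is unhealthy.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 'http://localhost:5984/medic' 2>/dev/null || true)
code=${code:-000}
echo "GET /medic without auth -> HTTP ${code} (401 = auth healthy; 5xx = CouchDB failing; 000 = unreachable)"
```

If this prints 503, the problem is below medic-api, in haproxy or CouchDB itself.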

@diana
Wanted to run docker system prune -a --volumes, following the guide by @nyika to reset CouchDB. We have a lot of data, so I am being careful with the commands so that I don't lose any.
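As a side note on that prune: here is a sketch of a safer cleanup that leaves named volumes (where the CouchDB data lives) untouched, whereas `docker system prune -a --volumes` would delete them:

```shell
# Reclaim image/container space WITHOUT touching named volumes.
# Never pass --volumes on a production instance unless a verified backup exists.
if command -v docker >/dev/null 2>&1; then
  docker image prune -a -f || echo "image prune failed (is the docker daemon running?)"
  docker container prune -f || echo "container prune failed"
else
  echo "docker not found on this host"
fi
```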

Looking at the logs, it seems CouchDB is running on localhost (couchdb@127.0.0.1): [error] 2023-01-10T06:50:12.363591Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node 'couchdb@127.0.0.1'

Running tail -f /srv/storage/medic-core/couchdb/logs/startup.log, I get a loop that doesn't stop, as below:
[error] 2023-01-10T08:24:59.671336Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node 'couchdb@127.0.0.1' with exit value:
{badarg,[{ets,member,[mem3_openers,<<"medic-user-XYZ-meta">>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,"src/mem3_shards.erl"},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,"src/mem3_shards.erl"},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,for_db,2,[{file,"src/mem3_shards.erl"},{line,59}]},{fabric_view_all_docs,go,5,[{file,"src/fabric_view_all_docs.erl"},{line,24}]},{couch_db,'-get_design_docs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,627}]}]}
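To gauge how tight that loop is, it can help to count the repeating mem3 error in the startup log. The demo below runs on a small sample excerpt so it is self-contained; in production, point `LOG` at /srv/storage/medic-core/couchdb/logs/startup.log instead:

```shell
# Count occurrences of the repeating shard-loading error in a log file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[error] {badarg,[{ets,member,[mem3_openers,<<"medic-user-XYZ-meta">>],[]},...
[info] Apache CouchDB has started. Time to relax.
[error] {badarg,[{ets,member,[mem3_openers,<<"medic-user-XYZ-meta">>],[]},...
EOF
count=$(grep -c 'mem3_openers' "$LOG")
echo "mem3_openers errors in excerpt: $count"
rm -f "$LOG"
```

A count that keeps climbing on every restart confirms the crash loop rather than a one-off failure.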

Trying to access Fauxton returns a 404 error.

@oyierphil

I can’t make much from this error, can you please confirm that you copied the entire error? I know it can be tricky when there’s a loop.
Since the original issue was about no spare disk space, could you please check that you indeed have free disk space now?

Thanks!
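For the free-space check, a quick sketch (the /srv/storage path is taken from the tail command earlier in the thread; adjust to your layout):

```shell
# Byte usage on the root filesystem and on the CouchDB storage mount, if present.
df -h /
df -h /srv/storage 2>/dev/null || echo "/srv/storage not mounted on this host"
```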

@diana, we purchased a 3.84 TB HDD and dedicated it to the instance, so I think space is taken care of. My problem is that the app keeps crashing, and since yesterday I haven't managed to keep it running. We are currently at around 50 K records, with a target of 300 K, hence the need for a permanent fix.
[info] 2023-01-10T08:22:39.442823Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.450312Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.450539Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.457085Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.463119Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.463321Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.472482Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.
[info] 2023-01-10T08:22:39.472570Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:22:39.482610Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:22:39.590670Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.
[info] 2023-01-10T08:22:39.590895Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:22:39.591163Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.591242Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[error] 2023-01-10T08:22:39.621402Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<"shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095">> error:{badarg,[{ets,member,[mem3_openers,<<"medic-user-anastasia_okello-meta">>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,"src/mem3_shards.erl"},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,"src/mem3_shards.erl"},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,for_db,2,[{file,"src/mem3_shards.erl"},{line,59}]},{fabric_view_all_docs,go,5,[{file,"src/fabric_view_all_docs.erl"},{line,24}]},{couch_db,'-get_design_docs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,627}]}]}
[error] 2023-01-10T08:22:39.621496Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node 'couchdb@127.0.0.1' with exit value:
{badarg,[{ets,member,[mem3_openers,<<"medic-user-anastasia_okello-meta">>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,"src/mem3_shards.erl"},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,"src/mem3_shards.erl"},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,for_db,2,[{file,"src/mem3_shards.erl"},{line,59}]},{fabric_view_all_docs,go,5,[{file,"src/fabric_view_all_docs.erl"},{line,24}]},{couch_db,'-get_design_docs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,627}]}]}
[notice] 2023-01-10T08:22:39.624499Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[notice] 2023-01-10T08:22:39.628541Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:22:39.628795Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.669903Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:39.670279Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:22:41] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}
Crash dump is being written to: erl_crash.dump…[info] 2023-01-10T08:22:43.966931Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:43.972637Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.020758Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.020994Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.029092Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.029320Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.036195Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.042446Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.042646Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.052563Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.
[info] 2023-01-10T08:22:44.052653Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:22:44.063314Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:22:44.172059Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.
[info] 2023-01-10T08:22:44.172320Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:22:44.172670Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.173048Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[error] 2023-01-10T08:22:44.200255Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<“shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095”>> error:{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[error] 2023-01-10T08:22:44.200474Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node ‘couchdb@127.0.0.1’ with exit value:
{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[notice] 2023-01-10T08:22:44.203548Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[notice] 2023-01-10T08:22:44.206526Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:22:44.206809Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.235828Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:44.235902Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:22:45] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}
Crash dump is being written to: erl_crash.dump…[info] 2023-01-10T08:22:48.631919Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.637156Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.683888Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.684123Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.691647Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.691869Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.698596Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.704658Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.704856Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.713968Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.
[info] 2023-01-10T08:22:48.714057Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:22:48.724377Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:22:48.832903Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.
[info] 2023-01-10T08:22:48.833023Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:22:48.833345Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.833437Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[error] 2023-01-10T08:22:48.870201Z couchdb@127.0.0.1 emulator -------- Error in process <0.268.0> on node ‘couchdb@127.0.0.1’ with exit value:
{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]
[error] 2023-01-10T08:22:48.870250Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<“shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095”>> error:{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[notice] 2023-01-10T08:22:48.877790Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[notice] 2023-01-10T08:22:48.882161Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:22:48.882463Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.926548Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:48.926846Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:22:50] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}
Crash dump is being written to: erl_crash.dump…[info] 2023-01-10T08:22:53.251555Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.257189Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.304658Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.304894Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.312714Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.312943Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.319791Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.326112Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.326309Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-1-10T08:22:53.336257Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.
[info] 2023-01-10T08:22:53.336347Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:22:53.346781Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:22:53.454718Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.
[info] 2023-01-10T08:22:53.454839Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:22:53.455133Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.455339Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[notice] 2023-01-10T08:22:53.485512Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[error] 2023-01-10T08:22:53.487455Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<“shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095”>> error:{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[error] 2023-01-10T08:22:53.487531Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node ‘couchdb@127.0.0.1’ with exit value:
{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[notice] 2023-01-10T08:22:53.491099Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:22:53.491305Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.517273Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:53.517385Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:22:55] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}
Crash dump is being written to: erl_crash.dump…[info] 2023-01-10T08:22:57.806559Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.811803Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.857594Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.857829Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.865316Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.865541Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.872251Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.878281Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.878485Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:57.887926Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.
[info] 2023-01-10T08:22:57.888016Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:22:57.902276Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:22:58.071128Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.
[info] 2023-01-10T08:22:58.071282Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:22:58.071617Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:58.071979Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[error] 2023-01-10T08:22:58.113103Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<“shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095”>> error:{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[error] 2023-01-10T08:22:58.113130Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node ‘couchdb@127.0.0.1’ with exit value:
{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[notice] 2023-01-10T08:22:58.117928Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[notice] 2023-01-10T08:22:58.122052Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:22:58.122339Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:58.163477Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:22:58.163636Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:22:59] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}
Crash dump is being written to: erl_crash.dump…[info] 2023-01-10T08:23:02.624519Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.631611Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.680323Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.680572Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.689232Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.689467Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.696694Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.703155Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.703354Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.712945Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.

[info] 2023-01-10T08:23:02.713036Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:23:02.724053Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:23:02.842198Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.

[info] 2023-01-10T08:23:02.842380Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:23:02.842611Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.842858Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[error] 2023-01-10T08:23:02.871774Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<“shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095”>> error:{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[error] 2023-01-10T08:23:02.871911Z couchdb@127.0.0.1 emulator -------- Error in process <0.268.0> on node ‘couchdb@127.0.0.1’ with exit value:
{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}

[notice] 2023-01-10T08:23:02.877944Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[notice] 2023-01-10T08:23:02.882269Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:23:02.882584Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.928360Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:02.928532Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:23:04] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}

Crash dump is being written to: erl_crash.dump…[info] 2023-01-10T08:23:07.238969Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_log started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.244588Z couchdb@127.0.0.1 <0.9.0> -------- Application folsom started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.291025Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_stats started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.291260Z couchdb@127.0.0.1 <0.9.0> -------- Application khash started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.298939Z couchdb@127.0.0.1 <0.9.0> -------- Application couch_event started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.299170Z couchdb@127.0.0.1 <0.9.0> -------- Application hyper started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.305703Z couchdb@127.0.0.1 <0.9.0> -------- Application ibrowse started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.311847Z couchdb@127.0.0.1 <0.9.0> -------- Application ioq started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.312056Z couchdb@127.0.0.1 <0.9.0> -------- Application mochiweb started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.321546Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB 2.3.1 is starting.

[info] 2023-01-10T08:23:07.321633Z couchdb@127.0.0.1 <0.213.0> -------- Starting couch_sup
[notice] 2023-01-10T08:23:07.331425Z couchdb@127.0.0.1 <0.96.0> -------- config: [features] pluggable-storage-engines set to true for reason nil
[info] 2023-01-10T08:23:07.434147Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started. Time to relax.

[info] 2023-01-10T08:23:07.434390Z couchdb@127.0.0.1 <0.212.0> -------- Apache CouchDB has started on http://0.0.0.0:5987/
[info] 2023-01-10T08:23:07.434803Z couchdb@127.0.0.1 <0.9.0> -------- Application couch started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.435240Z couchdb@127.0.0.1 <0.9.0> -------- Application ets_lru started on node ‘couchdb@127.0.0.1’
[error] 2023-01-10T08:23:07.466789Z couchdb@127.0.0.1 <0.260.0> -------- Could not get design docs for <<“shards/80000000-9fffffff/medic-user-anastasia_okello-meta.1669742095”>> error:{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}
[error] 2023-01-10T08:23:07.466982Z couchdb@127.0.0.1 emulator -------- Error in process <0.272.0> on node ‘couchdb@127.0.0.1’ with exit value:
{badarg,[{ets,member,[mem3_openers,<<“medic-user-anastasia_okello-meta”>>],[]},{mem3_shards,maybe_spawn_shard_writer,3,[{file,“src/mem3_shards.erl”},{line,476}]},{mem3_shards,load_shards_from_db,2,[{file,“src/mem3_shards.erl”},{line,381}]},{mem3_shards,load_shards_from_disk,1,[{file,“src/mem3_shards.erl”},{line,370}]},{mem3_shards,for_db,2,[{file,“src/mem3_shards.erl”},{line,59}]},{fabric_view_all_docs,go,5,[{file,“src/fabric_view_all_docs.erl”},{line,24}]},{couch_db,‘-get_design_docs/1-fun-0-’,1,[{file,“src/couch_db.erl”},{line,627}]}]}

[notice] 2023-01-10T08:23:07.470050Z couchdb@127.0.0.1 <0.279.0> -------- rexi_server : started servers
[notice] 2023-01-10T08:23:07.473052Z couchdb@127.0.0.1 <0.283.0> -------- rexi_buffer : started servers
[info] 2023-01-10T08:23:07.473337Z couchdb@127.0.0.1 <0.9.0> -------- Application rexi started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.502494Z couchdb@127.0.0.1 <0.9.0> -------- Application mem3 started on node ‘couchdb@127.0.0.1’
[info] 2023-01-10T08:23:07.502702Z couchdb@127.0.0.1 <0.9.0> -------- Application fabric[2023-01-10 08:23:09] {“Kernel pid terminated”,application_controller,“{application_terminated,couch_log,shutdown}”}

@hareet, I wanted to see my space usage by running docker system df, and I see I can reclaim around 1.2 GB of space:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 7 2 1.96 GB 1.209GB (61%)
Containers 2 2 8.221 MB 0B (0%)
Local Volumes 8 1 101.4 GB 1.763GB (1%)
Build Cache 0 0 0B 0B

Running the commands above creates more logs for the different services every time they are run; please advise.

Thanks for sharing the whole error, this does provide more information.

I’ve never encountered this error before, so tagging @hareet for more insight. It does seem like your data is corrupted somehow or Couch has trouble accessing it.

@diana, thank you. Is there another way of confirming whether the storage for the containers and data is used up?
Trying to figure out why CouchDB and medic-core aren't starting, and how to clean up the logs; still no way through…
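One more storage angle worth ruling out: "disk full" can also mean "out of inodes" even when df -h shows plenty of free bytes. A sketch, assuming Docker's default data root:

```shell
# Inode usage can be exhausted even when byte usage looks fine.
df -i /
# Docker's own accounting of images/containers/volumes (needs the docker CLI):
docker system df -v 2>/dev/null || echo "docker CLI/daemon not available here"
```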