Error 404: "Database does not exist." when adding a credential

Calling /api/v1/credentials/mykey returns a 404 error:

curl -X PUT -H "Content-Type: text/plain"  http://admin:pass@instance/api/v1/credentials/mykey -d '"secret"'

{
    "error": "not_found",
    "reason": "Database does not exist."
}

CHT Version: 4.11
Couch Version: 3.3.3
CHT Installation: Kubernetes
Couch Installation: Clustered with 3 nodes

Alternatives
Adding the credential to each of the 3 nodes the old way, via the CouchDB config (sketched below)
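
For reference, the old way means writing the credential into each node's CouchDB config individually; a rough sketch (hostnames are placeholders for the three nodes, and the medic-credentials config section is from the pre-medic-vault approach, so double-check before relying on it):

# Old approach (sketch): write the credential into each CouchDB node's own
# config. Hostnames below are hypothetical placeholders.
for node in couchdb-1.example couchdb-2.example couchdb-3.example; do
  curl -X PUT -H "Content-Type: text/plain" \
    "http://admin:pass@${node}:5984/_node/_local/_config/medic-credentials/mykey" \
    -d '"secret"'
done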

Images
public.ecr.aws/medic/cht-api:4.11.0
public.ecr.aws/medic/cht-couchdb:4.11.0
public.ecr.aws/medic/cht-couchdb:4.8.0
public.ecr.aws/medic/cht-couchdb:4.8.0
public.ecr.aws/medic/cht-haproxy:4.11.0
public.ecr.aws/medic/cht-haproxy-healthcheck:4.11.0
public.ecr.aws/medic/cht-sentinel:4.11.0
medicmobile/upgrade-service:0.31

@diana hoping you might have an idea here because I am stumped! :thinking:

The error looks as though the API server cannot find the route for /api/v1/credentials/:key (but it definitely should be there). Robert and I were looking at this live during a call and he was able to hit other API endpoints on the server with no issue, so it does not seem to be a URL/connectivity issue.

I tried testing that exact curl command against a locally running 4.11.0 instance (though not in k8s with 3 Couch nodes) and the command worked fine.

Is there something with k8s/multiple-nodes that could be causing problems here? Looking through the code, nothing jumped out as being a likely candidate…

The credentials get saved into a medic-vault database; it would be a shame if this were the message the user gets when that database doesn't exist.
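
A quick way to check is asking CouchDB for the database directly (host and credentials below are placeholders); a missing database returns exactly this error body:

# Does medic-vault exist? (hypothetical host/credentials)
curl http://admin:pass@couchdb-host:5984/medic-vault
# missing: {"error":"not_found","reason":"Database does not exist."}
# present: {"db_name":"medic-vault",...}
# and its docs, which are saved as credential:<key>:
curl http://admin:pass@couchdb-host:5984/medic-vault/_all_docs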

Is there something with k8s/multiple-nodes that could be causing problems here?

It shouldn’t, because we no longer save credentials in the couch config.

@diana @jkuester

in couch, there already exists a database called medic-vault

Thanks @rmayore

It would be helpful to see which request actually 404s in your case.
Can you please try to isolate your call to the /api/v1/credentials/mykey endpoint and share your API logs and haproxy logs?
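
Since you're on Kubernetes, something like this should tail both (deployment names and namespace are guesses, adjust to your cluster):

# Hypothetical deployment names/namespace; adjust to your cluster.
kubectl logs -f deployment/cht-api -n <namespace>
kubectl logs -f deployment/cht-haproxy -n <namespace>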

Just adding some happy path notes here in case it’s helpful.

On my Docker-based single node, indeed this call works as expected:

curl -X PUT -H "Content-Type: text/plain" https://medic:password@192-168-68-23.local-ip.medicmobile.org:10446/api/v1/credentials/mykey2 -d "secret"

I can artificially reproduce the 404 error by deleting the medic-vault database in Fauxton. Tailing the API and HAProxy logs in two terminals (docker logs -f 4_11_404_test-haproxy-1 and docker logs -f 4_11_404_test-api-1) and re-running my curl command with the database deleted, I see these logs in the API container:

2025-03-25T20:46:43.707 DEBUG: Checking for a configured outgoing message service 
2025-03-25T20:46:45.208 REQ: e50adc31-ea9f-4381-b897-9d69139c1329 192.168.68.23 medic PUT /api/v1/credentials/mykey2 HTTP/1.0
2025-03-25T20:46:45.225 RES: e50adc31-ea9f-4381-b897-9d69139c1329 192.168.68.23 medic PUT /api/v1/credentials/mykey2 HTTP/1.0 404 - 16.266 ms

And these logs in the HAProxy container:

<150>Mar 25 20:46:45 haproxy[12]: 172.30.0.6,couchdb,200,1,0,0,GET,/_session,-,medic,'-',323,1,148,'curl/8.12.1'
<150>Mar 25 20:46:45 haproxy[12]: 172.30.0.6,couchdb,200,1,0,0,GET,/_session,-,medic,'-',323,1,148,'curl/8.12.1'
<150>Mar 25 20:46:45 haproxy[12]: 172.30.0.6,couchdb,200,1,0,0,GET,/_node/_local/_config/couch_httpd_auth/secret,-,medic,'-',257,1,35,'-'
<150>Mar 25 20:46:45 haproxy[12]: 172.30.0.6,couchdb,404,1,0,0,GET,/medic-vault/credential:mykey2,-,medic,'-',294,1,58,'-'
<150>Mar 25 20:46:45 haproxy[12]: 172.30.0.6,couchdb,404,1,1,0,PUT,/medic-vault/credential:mykey2,-,medic,'{"_id":"credential:mykey2","password":"***"}',294,1,58,'-'

I note that restarting all the containers causes the medic-vault database to get recreated.

Again: all in Docker Compose, single-Couch-node testing.
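
For completeness, the Fauxton delete step should also be doable over HTTP, assuming the API proxy passes the DELETE through to CouchDB (I deleted via Fauxton, so this exact command is untested; same local-ip URL as my curl above):

# Delete medic-vault to reproduce the 404, then restart to recreate it.
curl -X DELETE https://medic:password@192-168-68-23.local-ip.medicmobile.org:10446/medic-vault
docker compose -p 4_11_404_test restart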


Quick update to share that I got the same results (curl succeeding, deleting the medic-vault database, seeing the 404, and a restart restoring the medic-vault database), but this time on a clustered Couch install (now that I can actually run it!)

That is: I'm unable to reproduce this in Docker on single or multi-node Couch.

Thanks @mrjones.

@rmayore when you have time, can you please share the logs I’ve requested above?

@diana

haproxy
<150>Mar 26 19:14:27 haproxy[12]: 10.123.24.177,couchdb-3.busia-ls-prod.svc.cluster.local,404,9,0,0,GET,/api/v1/credentials/test_key,-,rmayore,'-',294,9,58,'PostmanRuntime/7.37.3'

api

2025-03-26T19:14:27.132 REQ: f82cafb9-eb11-4881-8237-7b40f5d9bf29 185.203.122.86 rmayore GET /api/v1/credentials/test_key HTTP/1.1
2025-03-26T19:14:27.156 RES: f82cafb9-eb11-4881-8237-7b40f5d9bf29 185.203.122.86 rmayore GET /api/v1/credentials/test_key HTTP/1.1 404 58 23.148 ms

@diana

Another strange thing I've noticed is that haproxy is generating an enormous volume of logs (between 150 and 200 lines every second) for calls to GET /medic-vault/credential:mykey. Is this normal behavior?

mykey is a test key I tried to add 6 days ago when I created this post.

Thanks a lot for the logs, @rmayore. This is very helpful. I'll look into this today.


@rmayore I just noticed that you've reported running different CouchDB images. Is this right? If so, this is a serious problem and you should update the old images immediately. The upgrade service had a bug where it didn't update all CouchDB nodes in the cluster. This version mismatch can cause you serious issues.

As for the issue with the credentials 404, I think the only explanation I have is that the curl command was somehow incorrect and it didn’t do a PUT, but a GET instead. When you do a GET, indeed you would get this 404 (because we only register a PUT route for this controller).
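
That theory is easy to check: a GET against the same path reproduces the exact error, because with no GET route registered the request falls through to CouchDB, which has no database matching that path. This also matches your haproxy line above, where GET /api/v1/credentials/test_key was forwarded straight to couchdb-3 and 404ed:

# A GET (instead of PUT) to the endpoint reproduces the 404 (placeholder URL).
curl -X GET http://admin:pass@instance/api/v1/credentials/test_key
# {"error":"not_found","reason":"Database does not exist."}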

As for the repeated haproxy entries, my guess is that API or Sentinel is trying to use this key to authenticate calls to outbound push or some other external service. Can you please confirm whether you have added configuration that pushes data using this key?
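
One hedged way to check is pulling the app settings and searching for references to the key (placeholder URL; the key name comes from your earlier post):

# Count references to the credential key in the app settings (placeholder URL).
curl -s https://admin:pass@instance/api/v1/settings | grep -o mykey | wc -l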


Locally, I got this error running an independently installed CouchDB, but the API call works with the medic CouchDB instance.

@bernard

This is not meant to work with an independently installed CouchDB; this API endpoint is expected to work against a CHT CouchDB instance.


@diana @mrjones Been testing requests to the endpoint and this is what I’ve found out.

  1. When you make a PUT request to http://url/api/v1/credentials/test_key or url/api/v1/credentials/test_credential via curl -L, you get a {"code":401,"error":"unauthorized"}. My assumption is that because we have ssl-redirect set up on our ingress, the request is redirected but the auth header is stripped. The request is logged in the API logs as a PUT.

  2. When you make a PUT request to http://url/api/v1/credentials/test_key or url/api/v1/credentials/test_credential via Postman, you get a 404 {"error": "not_found", "reason": "Database does not exist."}. I have no idea how this happens, since it should follow redirects the same way curl does. Another difference is that the request is logged in the API logs as a GET, hence the database-not-found error.

  3. When you make a PUT request to https://url/api/v1/credentials/test_key via Postman or curl, you get a 200 successful response.

@rmayore - thanks for the follow up!

Can you please run your tests against a clean install of the CHT using default values and with no customization? Here’s what I did to test:

  1. set up an instance of CHT 4.15 with Docker Helper
  2. verify login works with user medic and password password in the web GUI
  3. run my test curl commands being sure to use PUT as the verb and the same user and password as above:
    curl -X PUT -H "Content-Type: text/plain"  https://medic:password@192-168-68-23.local-ip.medicmobile.org:10443/api/v1/credentials/test_key -d "secret"
    

The response I get from this:

{
  "ok": true
}

This ensures there are no changes from the ingress or from Postman, and is the most raw request possible.

Once this works, you can test other methods and other variables to see what is introducing any errors.

Hi @rmayore

Your point number 3 makes it clear that the issue is somewhere in your redirect setup and not in the CHT. Thanks for testing.

As for 1. and 2., it looks like your ssl-redirect is faulty in a number of ways, either transforming requests or dropping headers. This is quite concerning, and I would treat it as a critical issue in your system. I hope you find a way to work it out!
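
One possible mechanism, offered as a guess based on standard HTTP redirect behavior rather than anything verified against your ingress: a 301/302 redirect allows clients to retry the request as a GET (only 307/308 are required to preserve the method), which would explain Postman's PUT arriving at the API as a GET, and curl only forwards credentials across a redirect when explicitly told to. You can inspect what the ingress actually returns with something like:

# What status/Location does the ingress return for plain http? (placeholder URL)
# 301/302 let clients downgrade PUT to GET; 307/308 preserve the method.
curl -sv -o /dev/null -X PUT http://url/api/v1/credentials/test_key -d "secret" 2>&1 | grep -iE 'HTTP/|location'
# curl only sends the credentials on to redirect targets with --location-trusted:
curl -L --location-trusted -X PUT -H "Content-Type: text/plain" http://admin:pass@url/api/v1/credentials/test_key -d "secret"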