Duplication of messages in TextIt (RapidPro)

Hmm, not sure what’s going on then. I could definitely see the messages being sent out by RapidPro, yet no status updates were saved on the CouchDB docs for those messages. @cliff, if this happens again, perhaps you can work with the community to see how it can be observed or isolated.

1 Like

I’ll try to find some time next week to check whether RapidPro actually changed behavior.
But even if it is an incompatibility between the CHT and RapidPro, any fix we push will require you to upgrade to the latest CHT (4.12?).

1 Like

Thanks Diana for digging in. These 2-way-texting projects are upgrading to 4.x in the near term, so a cht-core upgrade would be fine if needed.

2 Likes

Thanks @marc, I’m on it.

Thanks very much @diana, this will be super helpful because we are about to run a pilot at 2 facilities and this will unblock us.

Hello @diana,

Just wanted to share an update here: we have upgraded from 3.17.0 to 4.10.0 but are still facing the message duplication issue, i.e. the SMS state failing to update in CHT. It’s really blocking our pilot.

The issue has not been fixed in any version of the CHT, @cliff, so your upgrade is not expected to solve it.
We would have to fix this in a new version and release, and then you can upgrade.

1 Like

:thinking: Oh I see, thanks @diana.

@diana, could I file an issue for this? I don’t see one tracking this work.

I have already created an issue: RapidPro API changes result in mishandling of errors · Issue #9467 · medic/cht-core · GitHub

2 Likes

@diana - we are ready to go live in Jamaica and eager to ensure this has been fixed. Do you have any updates for us? Thanks for the support.

Hi @Beth and @cliff

We’re putting together a ~4.13.x branch that has some quick fixes for the issues listed here. This will be a hot-patch kind of situation where you install this branch and test it on your server.
If all goes well, we’ll include the fixes in an upcoming release.

1 Like

Hi @Beth and @cliff

A patch is now available for installation which @diana and I think will resolve this issue. Given the quick turnaround it hasn’t been tested as thoroughly as usual, but I suggest you try it out and see if it unblocks you. This is definitely pre-release quality, so the usual warnings apply: make sure you back everything up first.

To install the patch, go to the App Management > Upgrade page and open the “Pre-releases” accordion. Under “Branches” you should see one called “9467-update-rapidpro (~4.13.0)”; click “Stage” or “Install”. Given you’re already running 4.10.0, it should be quite straightforward.
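
If you want to double-check which build ended up running once the install finishes, one option is to query the deploy-info endpoint the web app itself calls. The hostname and credentials below are placeholders for your own instance, and this assumes basic auth works for your admin user:

# prints the currently deployed build; replace host and credentials with your own
curl -s -u <admin-user>:<password> https://<your-cht-host>/api/deploy-info
# after the install, the response should reference the 9467-update-rapidpro build rather than 4.10.0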

Let us know if you have any other issues and we can make further fixes.

2 Likes

Thanks a lot @gareth and @diana for the patch. Let me install and test it out and I will report back on how it goes.

Hello @diana, I tried the upgrade via App Management but it fails with a “Failed to Upgrade” error, as seen in the demo.

Have you tried a second time (just in case)? If that doesn’t work, you’ll need to check the logs to see what the actual error is.
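
If it isn’t obvious where to look, something along these lines should surface the error. The container names below are placeholders; the exact names depend on your compose project, so list them first:

# list the running containers to find the CHT api and upgrade-service containers
sudo docker ps --format '{{.Names}}'
# tail the api logs from around the time of the failed upgrade attempt
sudo docker logs --tail 200 <api-container-name>
# the upgrade-service container is worth checking too
sudo docker logs --tail 200 <upgrade-service-container-name>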

Thanks @gareth
I have tried it a couple of times and decided to restart all the containers plus the upgrade-service.
From the API logs I am seeing:

2024-10-18T11:12:42.863 RES: b15ab8af-951c-42fa-835b-0fd4625919cd 41.210.147.112 - GET /medic/_design/medic-client/_view/messages_by_contact_date?group_level=1 HTTP/1.0 200 - 56.312 ms
2024-10-18T11:12:42.926 INFO: Last upgrade log is already final. 
2024-10-18T11:12:42.926 INFO: Valid Upgrade log tracking file was not found. Not updating. 
2024-10-18T11:12:42.927 ERROR: Error thrown while installing: RequestError: AggregateError
    at new RequestError (/service/api/node_modules/request-promise-core/lib/errors.js:14:15)
    at plumbing.callback (/service/api/node_modules/request-promise-core/lib/plumbing.js:87:29)
    at Request.RP$callback [as _callback] (/service/api/node_modules/request-promise-core/lib/plumbing.js:46:31)
    at self.callback (/service/api/node_modules/request/request.js:185:22)
    at Request.emit (node:events:519:28)
    at Request.onRequestError (/service/api/node_modules/request/request.js:877:8)
    at ClientRequest.emit (node:events:519:28)
    at Socket.socketErrorListener (node:_http_client:500:9)
    at Socket.emit (node:events:519:28)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  name: 'RequestError',
  message: 'AggregateError',
  cause: AggregateError [ECONNREFUSED]: 
      at internalConnectMultiple (node:net:1117:18)
      at afterConnectMultiple (node:net:1684:7) {
    [stack]: 'AggregateError [ECONNREFUSED]: \n' +
      '    at internalConnectMultiple (node:net:1117:18)\n' +
      '    at afterConnectMultiple (node:net:1684:7)',
    [errors]: [
      Error: connect ECONNREFUSED ::1:5100
          at createConnectionError (node:net:1647:14)
          at afterConnectMultiple (node:net:1677:16) {
        [stack]: 'Error: connect ECONNREFUSED ::1:5100\n' +
          '    at createConnectionError (node:net:1647:14)\n' +
          '    at afterConnectMultiple (node:net:1677:16)',
        [message]: 'connect ECONNREFUSED ::1:5100',
        errno: -111,
        code: 'ECONNREFUSED',
        syscall: 'connect',
        address: '::1',
        port: 5100
      },
      Error: connect ECONNREFUSED 127.0.0.1:5100
          at createConnectionError (node:net:1647:14)
          at afterConnectMultiple (node:net:1677:16) {
        [stack]: 'Error: connect ECONNREFUSED 127.0.0.1:5100\n' +
          '    at createConnectionError (node:net:1647:14)\n' +
          '    at afterConnectMultiple (node:net:1677:16)',
        [message]: 'connect ECONNREFUSED 127.0.0.1:5100',
        errno: -111,
        code: 'ECONNREFUSED',
        syscall: 'connect',
        address: '127.0.0.1',
        port: 5100
      },
      [length]: 2
    ],
    code: 'ECONNREFUSED',
    [Symbol(kIsNodeError)]: [Getter]
  },
  error: AggregateError [ECONNREFUSED]: 
      at internalConnectMultiple (node:net:1117:18)
      at afterConnectMultiple (node:net:1684:7) {
    [stack]: 'AggregateError [ECONNREFUSED]: \n' +
      '    at internalConnectMultiple (node:net:1117:18)\n' +
      '    at afterConnectMultiple (node:net:1684:7)',
    [errors]: [
      Error: connect ECONNREFUSED ::1:5100
          at createConnectionError (node:net:1647:14)
          at afterConnectMultiple (node:net:1677:16) {
        [stack]: 'Error: connect ECONNREFUSED ::1:5100\n' +
          '    at createConnectionError (node:net:1647:14)\n' +
          '    at afterConnectMultiple (node:net:1677:16)',
        [message]: 'connect ECONNREFUSED ::1:5100',
        errno: -111,
        code: 'ECONNREFUSED',
        syscall: 'connect',
        address: '::1',
        port: 5100
      },
      Error: connect ECONNREFUSED 127.0.0.1:5100
          at createConnectionError (node:net:1647:14)
          at afterConnectMultiple (node:net:1677:16) {
        [stack]: 'Error: connect ECONNREFUSED 127.0.0.1:5100\n' +
          '    at createConnectionError (node:net:1647:14)\n' +
          '    at afterConnectMultiple (node:net:1677:16)',
        [message]: 'connect ECONNREFUSED 127.0.0.1:5100',
        errno: -111,
        code: 'ECONNREFUSED',
        syscall: 'connect',
        address: '127.0.0.1',
        port: 5100
      },
      [length]: 2
    ],
    code: 'ECONNREFUSED',
    [Symbol(kIsNodeError)]: [Getter]
  },
  [stack]: 'RequestError: AggregateError\n' +
    '    at new RequestError (/service/api/node_modules/request-promise-core/lib/errors.js:14:15)\n' +
    '    at plumbing.callback (/service/api/node_modules/request-promise-core/lib/plumbing.js:87:29)\n' +
    '    at Request.RP$callback [as _callback] (/service/api/node_modules/request-promise-core/lib/plumbing.js:46:31)\n' +
    '    at self.callback (/service/api/node_modules/request/request.js:185:22)\n' +
    '    at Request.emit (node:events:519:28)\n' +
    '    at Request.onRequestError (/service/api/node_modules/request/request.js:877:8)\n' +
    '    at ClientRequest.emit (node:events:519:28)\n' +
    '    at Socket.socketErrorListener (node:_http_client:500:9)\n' +
    '    at Socket.emit (node:events:519:28)\n' +
    '    at emitErrorNT (node:internal/streams/destroy:169:8)\n' +
    '    at emitErrorCloseNT (node:internal/streams/destroy:128:3)\n' +
    '    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)'
} 
2024-10-18T11:12:43.117 REQ: 1237cef6-fa37-4b4d-80e8-45a50bacecd1 41.210.147.112 - GET /api/v2/upgrade HTTP/1.0
2024-10-18T11:12:43.126 INFO: Last upgrade log is already final. 
2024-10-18T11:12:43.129 RES: 1237cef6-fa37-4b4d-80e8-45a50bacecd1 41.210.147.112 - GET /api/v2/upgrade HTTP/1.0 200 81 11.011 ms
2024-10-18T11:12:43.132 REQ: 46d43917-4c85-49eb-b7b0-edd65b5e48b5 41.210.147.112 - POST /medic-user-medic-meta/_all_docs? HTTP/1.0
2024-10-18T11:12:43.138 RES: 46d43917-4c85-49eb-b7b0-edd65b5e48b5 41.210.147.112 - POST /medic-user-medic-meta/_all_docs? HTTP/1.0 200 - 5.173 ms
2024-10-18T11:12:43.159 REQ: 1abcfc23-ceb3-44c0-8d03-75c076995e11 41.210.147.112 - POST /medic/_all_docs?include_docs=true HTTP/1.0
2024-10-18T11:12:43.165 RES: 1abcfc23-ceb3-44c0-8d03-75c076995e11 41.210.147.112 - POST /medic/_all_docs?include_docs=true HTTP/1.0 200 - 5.114 ms
2024-10-18T11:12:43.426 REQ: 69ae5094-1e99-45b7-99c5-0e00cd7120f3 41.210.147.112 - GET /api/deploy-info HTTP/1.0
2024-10-18T11:12:43.429 RES: 69ae5094-1e99-45b7-99c5-0e00cd7120f3 41.210.147.112 - GET /api/deploy-info HTTP/1.0 304 - 2.334 ms
2024-10-18T11:12:43.458 REQ: ba64724e-0466-4e52-badb-80c2e110b56c 41.210.147.112 - POST /medic-user-medic-meta/_all_docs? HTTP/1.0
2024-10-18T11:12:43.465 RES: ba64724e-0466-4e52-badb-80c2e110b56c 41.210.147.112 - POST /medic-user-medic-meta/_all_docs? HTTP/1.0 200 - 6.226 ms
2024-10-18T11:12:53.177 DEBUG: Checking for a configured outgoing message service 
2024-10-18T11:12:53.177 DEBUG: Checking for pending outgoing messages 
2024-10-18T11:12:53.182 INFO: Sending 0 messages 
2024-10-18T11:13:01.170 REQ: c4c856bf-d01b-4470-b32e-98eca5bee094 41.210.147.112 - GET /medic/_changes?feed=longpoll&heartbeat=10000&since=740-g1AAAAOReJyV0jFOwzAYBWCLIrGwMHAH1AESJ6njqQgkECO0PkBsx4qiFiZmTsEVoPUlOEUv0TPQmvc7Q5cKyfLwJEf5ZP9-C8bYeTey7MK8vZvO6tuci-sMK1_g00nD9KVS874bNVdLbJw5XRirs2M__MMQovTY-zVJyyjJUky4lqkSEK-nIexJmkZJC85bnqdKQILGuWYk3USpKXhubJMqAZnr3vsVSU_DmVrJhZukSkDW-iOEHUn3USp5U7aVS5WA7F9P2YtSn4fXK2turU19PTAzMIhvDJ-08aA5FKEy6doKDOI3hB_SHqJWFdIJmXpPMDswiGelNqQ9Ri0ztZUitV3DwDaIL--3pJVDL6yR0tXpGga2RaBpgbS7oRtFVtfF0Zb1f0yL-kw&limit=25 HTTP/1.0

Anything in the upgrade service logs? Try “staging” the install and see how that goes. That log snippet didn’t contain enough information to work out exactly where it’s failing.

Hello @gareth, I have tried the staging phase: App Management - 18 October 2024 | Loom
I’m not sure how to get the logs for the upgrade-service. I tried sudo docker logs upgrade-service-cht-upgrade-service-1,
but it only returns the listening ports, as seen below:

ubuntu@ip-172-31-51-14:~/cht/upgrade-service$ cat docker-compose.yml
version: '3.9'

services:
  cht-upgrade-service:
    restart: always
    image: public.ecr.aws/s5s3h4s7/cht-upgrade-service:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ${DOCKER_CONFIG_PATH}:/config
      - ${CHT_COMPOSE_PATH}:/docker-compose
    networks:
      - cht-net
    environment:
      - COUCHDB_USER
      - COUCHDB_PASSWORD
      - COUCHDB_SECRET
      - COUCHDB_UUID
      - COUCHDB_DATA
      - COUCHDB_SERVERS
      - CLUSTER_PEER_IPS
      - SVC_NAME
      - SVC1_NAME
      - SVC2_NAME
      - SVC3_NAME
      - COUCHDB_LOG_LEVEL
      - MARKET_URL_READ
      - BUILDS_SERVER
      - NGINX_HTTP_PORT
      - NGINX_HTTPS_PORT
      - CERTIFICATE_MODE
      - SSL_VOLUME_MOUNT_PATH
      - SSL_CERT_FILE_PATH
      - SSL_KEY_FILE_PATH
      - COMMON_NAME
      - EMAIL
      - COUNTRY
      - STATE
      - LOCALITY
      - ORGANISATION
      - DEPARTMENT
      - DOCKER_CONFIG=/config
      - CHT_COMPOSE_PROJECT_NAME=${CHT_COMPOSE_PROJECT_NAME:-cht}
      - CHT_NETWORK=${CHT_NETWORK:-cht-net}
      - DOCKER_CONFIG_PATH
      - CHT_COMPOSE_PATH
      - HEALTHCHECK_LOG_LEVEL

networks:
  cht-net:
    name: ${CHT_NETWORK:-cht-net}
ubuntu@ip-172-31-51-14:~/cht/upgrade-service$ sudo docker logs --tail 50 upgrade-service-cht-upgrade-service-1
Listening on port 5008
Listening on port 5008
Listening on port 5008

The upgrade service has very limited logs - I recommend running the compress and archive log script. It’ll get 'em all!
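
If the script isn’t handy, a rough manual equivalent (assuming a standard Docker-based install where everything of interest runs as a container) is to dump every container’s logs and tar them up:

# collect the logs of all running containers into a single archive
mkdir -p ~/cht-logs
for name in $(sudo docker ps --format '{{.Names}}'); do
  sudo docker logs "$name" > ~/cht-logs/"$name".log 2>&1
done
tar -czf ~/cht-logs-$(date +%F).tar.gz -C ~ cht-logs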