Hello,
I am currently working on a project involving development with the CHT Toolkit.
The target workflow is as follows:
- A Community Health Worker (CHW) fills out a form for an individual.
- As part of the form-filling process, the CHW captures oral images and answers additional questions.
- The captured images should then be passed to an AI model on the device to estimate a risk score for oral cancer.
However, I am currently stuck on how to pass the captured images from the form to the AI model. From my understanding, the AI model may only be executed through a JavaScript extension script within the CHW Medic app.
I would appreciate guidance on the following questions:
- How can we transfer the captured image data to the AI model running inside the JavaScript extension script?
- Is there any alternative approach (other than a JS extension script) to trigger the AI model based on form actions and pass the captured images to it?
If there are any code references or pointers that can be shared, that would be helpful.
@Chetan_Gupta welcome! Thank you for sharing your workflow!
I think when you say “JS extension script” you might be talking about extension-libs. Currently, extension-libs are a powerful tool for doing synchronous processing on input data. However, they may not be appropriate for many types of AI workloads which may need to be more interactive or asynchronous.
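For context, an extension-lib is essentially a single synchronous JavaScript function that is bundled with the app config and invoked from a form calculation. The sketch below is only meant to show that synchronous in/out contract; the function name and the placeholder "scoring" logic are my own inventions, not a real model or a CHT API:

```javascript
// Minimal extension-lib sketch. An extension-lib module exports one
// synchronous function; the CHT calls it from a form calculation and
// uses the return value as the calculated field's value.
// NOTE: the "scoring" below is a hypothetical placeholder -- it just
// returns the decoded byte count to illustrate the shape of the call.
function scoreImage(imageBase64) {
  // Decode the base64 image data passed in from the form model.
  const bytes = Buffer.from(imageBase64, 'base64');
  // A real model would run inference here. Because extension-libs are
  // synchronous, there is no way to await an async inference call.
  return String(bytes.length);
}

module.exports = scoreImage;
```

The catch, as noted above, is that the function must return synchronously, which rules out models that need async I/O or an interactive UI.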
One good option for more complex workflows might be to leverage the Android app launcher functionality. Basically you implement the core of your AI workflow as an Android app (installed on the device alongside the cht-android app). Then, in your CHT form you can configure the form to launch your target Android app (providing desired input data from the form) via a normal Android intent. The user continues the workflow in the other app and when that completes, the user can be automatically re-directed back into the CHT form (along with response data from the external app).
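Concretely, the launch is configured in the form's XLSForm. The sketch below is illustrative only: the appearance value, field names, and the intent action are all assumptions on my part, so please check the Android app launcher documentation and the example forms for the exact schema.

```
type        | name       | appearance           | calculation
begin group | launch     | android-app-launcher |
calculate   | action     |                      | 'org.example.oralscan.EVALUATE'   (hypothetical intent action)
calculate   | image      |                      | ${oral_photo}                     (input data from the form model)
string      | risk_score |                      |                                   (populated from the app's response)
end group   |            |                      |
```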
Audere has recently used this approach for integrating with the CHT to use the HealthPulseAI to evaluate photos of RDT tests. I would definitely encourage you to join us in the CHT Roundup Call this Thursday (March 12th)! It is going to feature a presentation about the deployment of this functionality in Kenya and Uganda.
Another angle that may be worth considering is that we are currently developing a UI Extension framework for the CHT that will allow custom web components to be rendered within the CHT application. The first iteration is not going to integrate directly with CHT forms, but it would allow for an interactive custom page within the app that might provide the power and flexibility needed for your workflow. Unfortunately, development for this feature is currently just beginning… 
Thanks @jkuester for such a quick reply!
I will definitely join tomorrow's call.
I would still like some more details on how images captured inside a CHT form can be passed to another target Android app.
Or should we capture the images in the target Android app itself? In that case, I would also want to store these images in the CHT DB.
Please let me know if you can shed some light on how the images are fed into the target workflow.
Capturing images during a form is actually one of the primary use-cases for the Android app launcher feature! If you just want to use the default camera app to take a picture, you can target the android.media.action.IMAGE_CAPTURE intent. The captured picture data will be converted to base64 and will be returned to the form data model. Once the picture data is in the form model, you can store it with the report when you complete the form and/or pass it along to additional Android app launcher calls (if you want to send the data to a target Android app for evaluation).
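Since the form model holds the picture as base64 text, whatever consumes it downstream just decodes it back to bytes. A small sketch of that round trip (the helper names here are my own, not CHT APIs):

```javascript
// Sketch: round-tripping image bytes through base64, which is the
// representation cht-android uses when it returns a captured picture
// to the form data model.
function encodeImage(bytes) {
  return Buffer.from(bytes).toString('base64');
}

function decodeImage(base64Text) {
  return Buffer.from(base64Text, 'base64');
}

// Example: a fake 4-byte "image" survives the round trip intact.
const original = Buffer.from([0xff, 0xd8, 0xff, 0xe0]); // JPEG magic prefix
const asBase64 = encodeImage(original);
const restored = decodeImage(asBase64);
```

One practical note: base64 inflates the payload by roughly a third, which is worth keeping in mind if you store large images with the report.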
Alternatively, you can also handle the image capture yourself in the target Android app. Once the user has taken the picture and the evaluation is complete, you can return just the evaluation results to the form data model (or you can also include the picture data if you want to store that with the report). One thing to consider when determining if you want to capture the image in the target Android app or not is that cht-android will compress the image as it stores it in the form data model. This may not be ideal for subsequent AI evaluation of the image. Of course, you can always customize/disable this compression in a fork of cht-android (this is what we did for the HealthPulse integration).
Also, sorry for not remembering to send you this before, but here is a link to an example form that might be useful when tinkering with the Android app launcher. There are some more details in the xlsx, but essentially this form just demonstrates capturing an image via the default camera app and also using a 3rd-party app to read text data from a barcode.