We are sharing a story every week for the 12 weeks of summer, showing how healthcare organizations are using technology to transform patient outcomes and increase productivity. For the fourth blog in our series, Nas Taibi, Solutions Architect, details how low-code and no-code services mean that introducing AI into medical imaging is no longer limited to coding experts.
The shift towards value-based care has seen healthcare facilities seek out low-code and no-code innovations that improve operational outcomes and create financially sustainable care systems.
A recent idea gaining attention in the enterprise imaging world is AI enhancement – using machine learning to process, analyze, and interpret medical images. Embedding this technology into enterprise imaging systems improves physicians’ decision-making and places a lighter burden on reporting practitioners. Using low-code or no-code technology, professionals find they can work faster and more accurately than before.
Picture this: Machine Learning and Ultrasound
Imagine a healthcare facility that seeks to use machine learning to automate, and improve the predictive accuracy of, fetal gestational age estimation. Traditionally, sonographers have manually measured biparietal diameter and head circumference using callipers.
The machine learning model analyzes legacy ultrasound images alongside the manual measurements taken at scan time, and its predictions should closely match the accuracy of the original sonographer’s findings. With AI on their side, the facility can capture the measurement data embedded in ultrasound images and use it as reference data points during training.
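As a toy illustration of the idea (not the facility’s actual model), the sketch below fits gestational age from a single measurement such as head circumference using a closed-form least-squares line in plain Python. The sample values are invented and carry no clinical meaning:

```python
# Toy sketch: predict gestational age (weeks) from head circumference (mm)
# with a closed-form least-squares line. Values are illustrative only,
# not real clinical reference data.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical training pairs: (head circumference mm, gestational age weeks)
hc = [120.0, 160.0, 200.0, 240.0, 280.0]
ga = [16.0, 20.0, 24.0, 28.0, 32.0]

slope, intercept = fit_line(hc, ga)

def predict_gestational_age(head_circumference_mm):
    return slope * head_circumference_mm + intercept
```

A real model would be trained on thousands of annotated scans with a managed cloud service, but the principle is the same: the sonographer’s historical measurements act as the ground-truth labels.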
Visual images: data preparation and cleaning
Now, imagine an engineering team creating a prototype of a solution to collect, clean and pre-process those images. The collected sample data will be used to train and test the model.
The medical imaging system stores files in Azure Blob storage. Adding a file triggers an Azure Logic Apps workflow: the message is pulled, the file’s URL is extracted, and a JPEG and a JSON metadata file are produced from the DICOM image. Next, the system performs optical character recognition on the image – essentially allowing it to view the picture and extract the metadata burned into it.
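The trigger-and-extract flow above can be sketched in plain Python. The event shape and helper names here are hypothetical stand-ins: in production the trigger would be an Azure Logic Apps run, the conversion step a DICOM toolkit, and the OCR call a request to Azure Cognitive Services.

```python
# Illustrative sketch of the blob-triggered extraction workflow.
# Event shape and helper names are assumptions, not the Azure API.
import json
import posixpath
from urllib.parse import urlparse

def parse_blob_event(event_json):
    """Pull the blob URL out of a storage event message; return (name, ext)."""
    event = json.loads(event_json)
    url = event["data"]["url"]
    name = posixpath.basename(urlparse(url).path)
    ext = posixpath.splitext(name)[1].lower()
    return name, ext

def handle_event(event_json, convert, ocr):
    """Route new DICOM blobs into the convert -> OCR pipeline."""
    name, ext = parse_blob_event(event_json)
    if ext != ".dcm":
        return None  # ignore non-DICOM uploads
    jpeg_bytes, metadata = convert(name)    # DICOM -> JPEG + JSON metadata
    metadata["ocr_text"] = ocr(jpeg_bytes)  # read text burned into the pixels
    return metadata

# Stand-in converter and OCR so the sketch runs end to end.
fake_convert = lambda name: (b"\xff\xd8jpeg-bytes", {"source": name})
fake_ocr = lambda jpeg: "BPD 48.2 mm"

event = json.dumps(
    {"data": {"url": "https://acct.blob.core.windows.net/scans/scan-001.dcm"}})
result = handle_event(event, fake_convert, fake_ocr)
```

Swapping the two stand-ins for real conversion and OCR calls is the only change the structure would need.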
Low-code technology in play
Adoption of AI tools and frameworks in the healthcare sector is increasing. Fully managed, cloud-based machine learning services can be used to train, deploy, and manage large-scale models. Then there are low-code and no-code tools.
You do not need to be a technical expert to use them. There is no need to learn to code. All you need is access to Azure Cognitive Services, which offers pre-built machine learning models.
A tool like Azure Machine Learning Studio also helps streamline the development process. It lets you deploy pre-built machine-learning algorithms, then add datasets that integrate with custom applications.
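Once a model is deployed as a web service, an application can score data with a simple HTTP request. The endpoint URL, key, and input schema below are hypothetical; the sketch builds the request but does not send it:

```python
# Sketch of calling a model deployed as a web service.
# Endpoint, key, and payload schema are assumptions for illustration.
import json
import urllib.request

def build_scoring_request(endpoint, api_key, hc_mm, bpd_mm):
    """Build a POST request carrying one row of measurements to score."""
    payload = {"data": [{"hc_mm": hc_mm, "bpd_mm": bpd_mm}]}
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
        method="POST",
    )

req = build_scoring_request(
    "https://example-endpoint.example.net/score",  # hypothetical URL
    "DUMMY-KEY", hc_mm=210.0, bpd_mm=52.3)
# To actually score: urllib.request.urlopen(req)
```

The point is that consuming the model requires no machine-learning expertise at all – just a POST with JSON.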
Used together, these Microsoft services make it possible to transform the workplace without coding skills. Instead, you can focus on delivering better ROI, a better experience for employees, and higher-quality care.
Step 1 – Finding Measurements
Healthcare facilities across the country are facing a similar problem: some ultrasound pictures are essentially screenshots of screenshots.
While modern machines are able to embed all those important measurements into the image as structured metadata, these older images only show the measurements the sonographer took during the scan as text within the picture itself. The first phase is acquiring these images.
Step 2 – Pixel Extraction and Conversion
The next step sees you extracting the pixels of the image, then using an open-source tool to convert the original DICOM file to JPEG.
Armed with this JPEG, it is time to run the image through optical character recognition. Since this is achieved through Microsoft’s Azure Cognitive Services, it is easy to perform.
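The OCR output is free text, so the measurements still have to be parsed out of it. A minimal sketch in plain Python, assuming the legacy machine prints labels such as BPD and HC followed by a value in millimetres (the label set and text format are assumptions):

```python
# Sketch of pulling calliper measurements out of OCR text.
# The label set (BPD, HC, AC, FL) and on-screen format are assumed.
import re

MEASUREMENT = re.compile(
    r"\b(BPD|HC|AC|FL)\s*:?\s*(\d+(?:\.\d+)?)\s*mm\b", re.IGNORECASE)

def parse_measurements(ocr_text):
    """Return {label: value_in_mm} for every recognised measurement."""
    return {label.upper(): float(value)
            for label, value in MEASUREMENT.findall(ocr_text)}

parsed = parse_measurements("Pt: DOE^JANE  BPD 48.2 mm  HC: 175.0mm")
```

Note that the same OCR pass that finds the measurements also surfaces the patient name, which leads directly to the masking concern below.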
Be careful though, as this process often surfaces personally identifiable information, such as names. Worse, it is displayed as a banner in the pixel data, so it becomes mandatory to identify and mask it.
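Masking can be as simple as blanking the region where the banner sits. In the dependency-free sketch below, the image is a plain list of grayscale pixel rows and the assumption (which would need verifying per machine model) is that the banner occupies a fixed strip at the top of the frame:

```python
# Sketch of masking the burned-in PII banner.
# Assumes the banner occupies a fixed number of rows at the top of the
# frame; the image is a row-major list of grayscale pixel rows.

def mask_banner(pixels, banner_rows):
    """Zero out the top banner_rows rows of the image, leave the rest."""
    return [[0] * len(row) if y < banner_rows else list(row)
            for y, row in enumerate(pixels)]

image = [[200, 210, 220],   # banner row containing burned-in patient text
         [90, 100, 110],
         [10, 20, 30]]
masked = mask_banner(image, banner_rows=1)
```

In practice the banner location could instead be derived from the OCR bounding boxes, which makes the masking robust to machines that place the banner elsewhere.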