2021 Aims Planning

2020-2021 Projects

ISO LEVELS ROI
  • ROIcreate: create an ROI and save it to a new RTSTRUCT file.
  • ROIcreate: produce multiple contours.
  • ROIsubtract: the difference between two ROIs.
  • ROItransfer: transfer an ROI after Fusion.
DICOM-SR
  • DVH2DICOM-SR
  • RADIOMICS2DICOM-SR
  • CLINICALDATA2DICOM-SR

CSV data is more amenable to use by ML software. DICOM-SR is a means of co-locating the information in the patient's chart alongside the patient's other DICOM data (e.g. in a PACS or PACS-like system). I recommend we create the reverse conversion at the same time: round-trip conversion, comparing the original with what comes back out of the wash, is a useful testing technique. Also, if the means for exchanging the data is DICOM, an ML researcher will probably need to convert it to CSV for their code. An alternative is to create a pandas conversion from DICOM (DataFrame-to-CSV export is built into pandas).
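The round-trip test idea can be sketched with pandas alone. The DICOM-SR leg is elided here, and the column names are hypothetical; the point is only that converting out and back should reproduce the original table.

```python
import io

import pandas as pd

# Hypothetical DVH-style table; in the real tool this would come from
# the DICOM-SR conversion rather than being constructed by hand.
original = pd.DataFrame({
    "roi": ["PTV", "Lung_L"],
    "dose_gy": [60.0, 18.5],
    "volume_cc": [250.0, 1100.0],
})

# Forward conversion: DataFrame -> CSV (built into pandas).
csv_text = original.to_csv(index=False)

# Reverse conversion: CSV -> DataFrame ("through the wash").
round_tripped = pd.read_csv(io.StringIO(csv_text))

# Round-trip check: original and reconstructed tables should match.
pd.testing.assert_frame_equal(original, round_tripped)
```

The same comparison, with the DICOM-SR conversion inserted between the two legs, would give an automated regression test for both converters at once.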

  1. DICOM Structure Reports should utilize TID 1500
  2. References: there is a 3D-Slicer extension (https://qiicr.gitbook.io/quantitativereporting-guide/) whose code might be a useful starting point: https://github.com/QIICR/QuantitativeReporting/blob/master/DICOMPlugins/DICOMTID1500Plugin.py . They also provide a test sample (https://github.com/QIICR/QuantitativeReporting/releases/download/test-data/SR.zip), which is used in their automated testing: https://github.com/QIICR/QuantitativeReporting/blob/master/QuantitativeReporting/QRUtils/testdata.py
GUI
  • PET/CT VIEWER

Is this assuming that the CT and PET are already co-registered, e.g. from a PET-CT scanner? Or just that the PET should be overlaid on CT, and image co-registration is a requirement (see below)?

  • Image Fusion

Image Fusion consists of both image co-registration (doing the math to get a 4x4 matrix that transforms a coordinate in the PET data reference frame to the matching anatomical location in the CT data reference frame) and image overlay (e.g. a grey-scale CT and a hot-metal PET displayed on the same pixels, or a checkerboard alternating between the two datasets within the same window). There is a wide variety of image co-registration approaches and algorithms. Manual registration is usually a good starting point, and is a good learning exercise for graphics programmers.
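The 4x4 matrix described above acts on homogeneous coordinates. A minimal numpy sketch, with a made-up rigid transform standing in for the output of a real co-registration algorithm:

```python
import numpy as np

# Hypothetical rigid registration: rotate 90 degrees about z, then
# translate by (10, 0, -5) mm. A real matrix would come from the
# co-registration algorithm (or from manual registration).
c, s = 0.0, 1.0  # cos/sin of 90 degrees
pet_to_ct = np.array([
    [c,  -s,  0.0, 10.0],
    [s,   c,  0.0,  0.0],
    [0.0, 0.0, 1.0, -5.0],
    [0.0, 0.0, 0.0,  1.0],
])

def transform_point(matrix, point_mm):
    """Map a 3-D point through a 4x4 matrix using homogeneous coordinates."""
    homogeneous = np.append(point_mm, 1.0)   # (x, y, z) -> (x, y, z, 1)
    mapped = matrix @ homogeneous
    return mapped[:3] / mapped[3]            # back to 3-D

pet_point = np.array([1.0, 2.0, 3.0])
ct_point = transform_point(pet_to_ct, pet_point)  # -> [8., 1., -2.]
```

Applying the inverse matrix (np.linalg.inv(pet_to_ct)) maps CT coordinates back into the PET frame, which is what the overlay code needs when resampling one dataset onto the other's pixel grid.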

Batch Processing

Process ISO2ROI, SUV2ROI, DVH2DICOM-SR, RADIOMICS2DICOM-SR and CLINICALDATA2DICOM-SR for a directory
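One way to structure the batch run is a loop over patient sub-directories, applying each configured process in turn. The process functions and directory layout below are assumptions, not the existing OnkoDICOM implementations:

```python
from pathlib import Path

# Hypothetical pipeline steps: each process is a function taking a
# patient directory; the real ISO2ROI, SUV2ROI, etc. would live elsewhere.
def iso2roi(patient_dir):
    return f"ISO2ROI:{patient_dir.name}"

def suv2roi(patient_dir):
    return f"SUV2ROI:{patient_dir.name}"

PROCESSES = [iso2roi, suv2roi]  # plus DVH2DICOM-SR, RADIOMICS2DICOM-SR, ...

def batch_process(root):
    """Run every configured process over each patient sub-directory."""
    results = []
    for patient_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for process in PROCESSES:
            results.append(process(patient_dir))
    return results
```

Keeping PROCESSES as plain data makes it easy to let the user tick which processes to include in a given batch run.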

Radiomics Configuration
  • Select function groups

From https://en.wikipedia.org/wiki/Radiomics : Radiomic features can be divided into five groups: size- and shape-based features; descriptors of the image intensity histogram; descriptors of the relationships between image voxels (e.g. textures derived from the gray-level co-occurrence matrix (GLCM), run-length matrix (RLM), size zone matrix (SZM), and neighborhood gray tone difference matrix (NGTDM)); textures extracted from filtered images; and fractal features.

  • Select specific functions

A list of functions is needed for selection.
The current specification is found in get_radiomics_df() in View/PyradiProgressBar.py, but that defaults to whatever pyradiomics provides.
See https://pyradiomics.readthedocs.io/en/latest/features.html for what is available.
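The selection step itself could build the feature-class-to-feature-names mapping that pyradiomics' extractor configuration uses (an empty list conventionally meaning "all features in that class"). This sketch uses the documented class names but only plain Python, so the validation logic is an assumption, not pyradiomics' own API:

```python
# Feature classes as documented at
# https://pyradiomics.readthedocs.io/en/latest/features.html
FEATURE_CLASSES = ["firstorder", "shape", "glcm", "glrlm", "glszm", "ngtdm", "gldm"]

def build_selection(groups, specific=None):
    """Build a pyradiomics-style enabled-features dict.

    An empty list for a class means "all features in that class";
    a non-empty list restricts the class to the named features.
    """
    specific = specific or {}
    selection = {}
    for group in groups:
        if group not in FEATURE_CLASSES:
            raise ValueError(f"unknown feature class: {group}")
        selection[group] = specific.get(group, [])
    return selection

# e.g. all first-order statistics, but only two GLCM textures:
enabled = build_selection(["firstorder", "glcm"],
                          {"glcm": ["Autocorrelation", "Contrast"]})
```

The GUI's "function groups" and "specific functions" selections map directly onto the two arguments of build_selection().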

Machine Learning

Clarification required: is the intent to provide hooks so that an ML training algorithm can be applied to the extracted radiomics features?
Is the intent a plug-in architecture with inversion of control, a pluggable factory, or something else?
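To make the second question concrete, a pluggable-factory sketch (all names hypothetical): plug-ins register themselves with a factory, and the application creates models by name without hard-coding any particular ML back-end.

```python
# Registry of available ML back-ends; plug-ins add themselves at import time.
_MODEL_REGISTRY = {}

def register_model(name):
    """Class decorator: a plug-in uses this to add itself to the factory."""
    def decorator(cls):
        _MODEL_REGISTRY[name] = cls
        return cls
    return decorator

def create_model(name, **kwargs):
    """Pluggable factory: look the name up instead of hard-coding classes."""
    try:
        return _MODEL_REGISTRY[name](**kwargs)
    except KeyError:
        raise ValueError(f"no model plug-in registered as {name!r}") from None

@register_model("logistic")
class LogisticModel:
    """Hypothetical plug-in that would train on radiomics feature rows."""
    def __init__(self, penalty="l2"):
        self.penalty = penalty

model = create_model("logistic", penalty="l1")
```

With inversion of control the relationship is reversed: the framework would call a well-known method on each plug-in (e.g. fit/predict hooks) rather than plug-ins calling into the application. Which of the two is wanted is exactly the clarification requested above.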

----------
For 2021

Project Introduction

Learning Curve
  1. Learning Curve reduction
  2. Unit Testing/Testing Automation
  3. DICOM specification/’Oncology domain knowledge’ climbing exercise
  4. Workflow of code review (GitHub flow)
  5. Better task tracking
  6. Earlier and more documentation of code
Presentation
  1. Not something you stick at the end
  2. What is the purpose?

How does this feature achieve the purpose?

Updated by Peter Qian about 4 years ago · 1 revision