Thursday, September 22, 2011

Delousing Denoising CT

I mentioned below the possibility of a SPECT/CT purchase, which may not happen for a while, especially given the fact that no one yet makes a 64-slice version. But there does seem to be some chance to purchase a shiny new CT scanner and a bundled advanced imaging package. Our hospital needs to upgrade for a number of reasons, not the least of which is the desire to decrease radiation dose.

Most of the technology and mathematics involved in cutting dose is beyond the scope of this article (and my meager intelligence). Better detectors and newer tubes play a part, of course, and there are other ways we can improve our acquisition.

Remember, the whole point of X-ray-based imaging is to see what's inside the patient's body, and we generally do this by mapping in some fashion the density (more accurately, the attenuation coefficient) of the various tissues. In the X-ray world, which includes Computed Tomography as most use the term, we do this by passing a beam of X-rays through the victim, I mean patient, and doing something to detect it at the other end. Wilhelm Roentgen, our HERO, discovered that placing a piece of film on the other side of the body part would act as a detector, showing us how much radiation got through which parts, yielding a nice map of Mrs. Roentgen's hand.


Not very sharp, I'm afraid, and the amount of radiation Roentgen's old Crookes Tube produced would scare the pants off any self-respecting health physicist today. Various improvements have come about in the last 110 years or so, including far more sophisticated tubes, better film, the addition of a phosphorescent screen, Bucky grids, and of course, the conversion from film to Digital and Computed Radiography (DR and CR).

CT is a different but related X-ray animal. Click on the animation below:


You don't want to hear about the math (which has been around since 1917, when an Austrian named Radon worked it out), but basically, passing a beam of X-rays through an object from various perspectives yields data that can be reconstructed into an image of the original object. Here is how it works with a CT of the head:


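If you'd rather poke at the idea than just stare at pictures, here is a minimal sketch in Python using scikit-image; that's my choice of free tools for illustration, not anything a scanner vendor actually ships. It forward-projects a standard test phantom into a sinogram (the raw data a CT scanner collects), then runs filtered back projection, the classic reconstruction, to turn that data back into a picture:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                      # a standard mathematical "head"
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Forward problem: pass simulated X-rays through the object from many angles.
# The result, the sinogram, is what the detectors actually measure.
sinogram = radon(image, theta=angles)

# Inverse problem: filtered back projection turns the sinogram back into an
# image of the original object.
reconstruction = iradon(sinogram, theta=angles)

print("phantom:", image.shape, "sinogram:", sinogram.shape,
      "reconstruction:", reconstruction.shape)
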
Ever since CT came about, there have been attempts to lower the dosage of X-rays required to get a decent picture. In the last year or so, perhaps due to scary articles like THIS, dose reduction has become quite in vogue. I don't mean to be flippant, as radiation is something that deserves respect and careful handling, much like electricity, but I just have to laugh a bit about the mania that has taken over. The potential dangers are nothing new, and we really don't need to panic. We've been dealing with this for quite a while.

The main principle of handling radiation in the imaging world is called ALARA: As Low As Reasonably Achievable. That doesn't mean we avoid necessary studies; it means we do what we can to lower the dose while still answering the question that prompted the exam in the first place. Working within this framework gives us some direction.

So...in very basic terms, we can reduce the dose by decreasing the radiation passed through the patient. We can use higher-energy beams, which do indeed pass through the patient more cleanly, if you will; they are stopped by less tissue and therefore deposit less energy, but, alas, we reach a point where we don't get much detail. We can send fewer X-rays through the patient, but too few and we don't see, ummm, stuff. OR, we can increase the sensitivity of the detector, so we don't need so many X-rays. OR.... we can use mathematics to recover information from a crappy image, thus lowering the dose and "rescuing" the picture later.
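
To put rough numbers on those trade-offs, here is a back-of-the-envelope sketch in Python. The attenuation coefficient, patient thickness, and photon counts are illustrative guesses of mine, not numbers off any scanner's spec sheet; Beer-Lambert tells you what fraction of the beam gets through, and Poisson counting statistics tell you how grainy the measurement gets as the dose comes down:

import numpy as np

mu = 0.2          # assumed attenuation coefficient for soft tissue, per cm
thickness = 30.0  # assumed cm of patient in the beam

# Beer-Lambert: fraction of photons that survive the trip through the patient.
transmitted_fraction = np.exp(-mu * thickness)
print(f"fraction transmitted: {transmitted_fraction:.4f}")

# Photon detection is a counting (Poisson) process, so the relative noise in a
# measurement of N detected photons goes as 1/sqrt(N): fewer X-rays, grainier data.
for photons_in in (100_000, 10_000, 1_000):
    detected = photons_in * transmitted_fraction
    relative_noise = 1.0 / np.sqrt(detected)
    print(f"{photons_in:7d} photons in -> ~{detected:6.0f} detected, "
          f"relative noise ~{relative_noise:.0%}")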

You REALLY don't want the details of the mathematics involved here, but the newest scanners use something called iterative reconstruction to this end. Let me just borrow the definition from the Wikipedia:


The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to exactly solve the inverse problem directly. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution using multiple iteration steps, which allows a better reconstruction to be obtained at the cost of higher computation time.

In computed tomography, this approach was the one first used by Hounsfield. There are a large variety of algorithms, but each starts with an assumed image, computes projections from the image, compares them with the original projection data, and updates the image based upon the difference between the calculated and the actual projections.

There are typically five components to iterative image reconstruction algorithms:[2]
  1. An object model that expresses the unknown continuous-space function f(r) that is to be reconstructed in terms of a finite series with unknown coefficients that must be estimated from the data.
  2. A system model that relates the unknown object to the "ideal" measurements that would be recorded in the absence of measurement noise. Often this is a linear model of the form Ax.
  3. A statistical model that describes how the noisy measurements vary around their ideal values. Often Gaussian noise or Poisson statistics are assumed.
  4. A cost function that is to be minimized to estimate the image coefficient vector. Often this cost function includes some form of regularization.
  5. An algorithm, usually iterative, for minimizing the cost function, including some initial estimate of the image and some stopping criterion for terminating the iterations.
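
If that list is too abstract, here is a toy version of the loop in Python, again with scikit-image's radon and iradon standing in for the scanner's forward and back projectors. This is emphatically not any vendor's algorithm, just the generic recipe from the list: assume an image, simulate its projections, compare them with the measured data, and nudge the guess accordingly:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

true_image = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
measured = radon(true_image, theta=angles)        # the "actual projections"

# Normalization image: back-project the projections of a uniform disc so the
# update is sensibly scaled (a crude stand-in for a proper step-size rule).
size = true_image.shape[0]
yy, xx = np.ogrid[:size, :size]
disc = ((yy - size // 2) ** 2 + (xx - size // 2) ** 2
        <= (size // 2 - 2) ** 2).astype(float)
norm = iradon(radon(disc, theta=angles), theta=angles, filter_name=None)
norm[norm == 0] = 1.0

estimate = np.zeros_like(true_image)              # start with an assumed image
relaxation = 0.5                                  # hand-tuned update strength
for iteration in range(20):
    simulated = radon(estimate, theta=angles)     # compute projections of the guess
    residual = measured - simulated               # compare with the real data
    # Push the mismatch back into image space and update the guess.
    estimate += relaxation * iradon(residual, theta=angles, filter_name=None) / norm
    print(f"pass {iteration + 1:2d}: mean projection mismatch "
          f"{np.abs(residual).mean():.2f}")

Each pass costs a full set of simulated projections plus a back projection, which hints at why the real thing needs serious horsepower.
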
Aren't you glad you asked? Basically, you keep plugging your image back into the computer until it looks good. The major vendors all have iterative recon in one form or another. Now, I must give credit to GE, whose new VEO system (which just won FDA approval) goes this one better, and I'll let my friends from Medgadget tell you how:

For decades, the standard CT image reconstruction algorithm has been filtered back projection, which uses mathematical methods to reconstruct tomographic images from the projections that are obtained by the circling detectors. More recently, a new reconstruction algorithm, adaptive statistical iterative reconstruction (ASIR), has been introduced that performs modeling of the noise distribution, cutting radiation dose by up to 80% for many applications.

Model-based iterative reconstruction (MBIR), employed by Veo, goes a step further by incorporating a physical model of the CT system into the reconstruction process to characterize the data acquisition phase, including noise, beam hardening, and scatter. It has the potential to cut radiation doses even more but is computationally more demanding, leading to longer reconstruction times (which will gradually become less of a problem with ever increasing computing power). It may potentially deliver lower noise, increased resolution, improved low contrast detectability and fewer artifacts. Veo is available on the GE Discovery CT750 HD system, and is suitable for use throughout the body.
This is really, really clever. The resulting image (bottom pane) looks pretty good as compared to standard reconstruction (top pane):


I'm assuming the modeling has to be done for each individual machine, because there are variances in even the most precisely-made product. No doubt there is scanning of some standard phantom followed by back-tracking to form a mathematical version of what the scanner looks like to the average photon. Keep in mind, though, this is all done in software, not in hardware, and software can be reverse-engineered. Thus, I doubt GE is going to have this exclusively for much longer. Still, credit where credit is due. This was a rather brilliant innovation. It does seem to take a LOT of computing power to run these numbers, however, and the reconstruction is far from instant. I'm thinking GE needs to set up something like the old SETI desktop program wherein concerned individuals could donate their computers' idle time to the processing of medical images.
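
For the terminally curious, here is the general flavor of the model-based approach in Python, on a tiny made-up linear system rather than a real scanner model. To be clear, this is not Veo; GE isn't publishing that. The idea is simply to weight each measurement by how many photons actually survived the trip (starved, noisy rays count for less), add a penalty that discourages implausibly rough images, and then grind away at the resulting cost function:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 25))                # made-up system model, "what the scanner looks like"
x_true = 0.1 * rng.random(25)            # the unknown object
counts = 2000.0 * np.exp(-A @ x_true)    # expected photon counts per ray (Beer-Lambert again)
y = A @ x_true + rng.normal(0.0, 1.0 / np.sqrt(counts))   # noisy data (noise grows as counts shrink)
W = np.diag(counts)                      # statistical weights: trust well-fed rays more

D = np.diff(np.eye(25), axis=0)          # roughness penalty: discourage jagged solutions
beta = 0.1                               # hand-picked regularization strength

curvature = A.T @ W @ A + beta * D.T @ D
step = 1.0 / np.linalg.norm(curvature, 2)   # a safe step size for this toy problem

x = np.zeros(25)
for _ in range(5000):
    gradient = A.T @ W @ (A @ x - y) + beta * D.T @ (D @ x)
    x -= step * gradient

error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error vs. truth: {error:.3f}")

Plain gradient descent is the dumbest possible way to do that grinding; the commercial products use far cleverer machinery on far bigger problems, which is where those long reconstruction times come from.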

No one has mentioned applying the above techniques to old scanners, but there are a lot of them out there, and they need some low-dose love, too. I've encountered two vendors who promise to provide that love. For a price, that is, and substantially more than what love goes for on the street corner not far from one of our hospitals.

The good folks from Sapheneia in Sweden are very anxious to sell you this:
The Sapheneia product Clarity Server is software providing image quality enhancement optimized for greater diagnostic confidence. Clarity image processing algorithms enable radiologists to lower radiation dose exposures during image acquisition.

Clarity directly addresses the medical community's continued concerns about increased radiation exposure, primarily for pediatric and female patients, as well as safety concerns for the clinical staff.

Clarity incorporates image-filtering techniques that are configurable based on medical modality and medical observer. Clarity utilizes 3D information for the image optimization and enables both noise reduction and edge enhancement in the same image.

Clarity is applicable to current CT modality technology and supports older generation technology, extending the lifetime of existing instrumentation.
To be characteristically blunt and pugilistic, the Clarity Server is a computer that sits between the CT and the PACS, massaging the data, and prettifying the images that you have deliberately scanned at suboptimal parameters, in hopes of restoring them to robust diagnostic status. I'm not so sure about this approach. (Could you tell?) While I haven't confirmed it, the third paragraph from Sapheneia suggests to me that all they are doing is simple digital filtration of the images, smoothing (Gaussian noise reduction) and then edge-enhancing, as you can do with the free Photoshop clone, Gimp2. This should be something that is built into your PACS viewer, and so it has been, at least partially, in AMICAS PACS since version 3.x. OK, AMICAS just gives you a few steps of edge enhancement, but it proves the concept. One press of the "S" key sharpens every CT slice in your study.

Here's a single slice with progressing grades of edge enhancement:

Original

Mild Enhancement

Moderate Enhancement
Too Much Enhancement!
Now, let's try an experiment with Gimp...We'll take the mildly-enhanced image and smooth it and then sharpen it a bit..
Mild Enhancement
Mild Enhancement with Smoothing and Resharpening
I didn't spend a lot of time on the processed image, but you get the idea...you can, to some extent, decrapify, I mean denoise, a suboptimal image.
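
For the record, here is roughly the same experiment in Python with scikit-image instead of GIMP, run on a made-up noisy phantom rather than a real patient slice, so take the printed numbers as illustration only:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.filters import gaussian, unsharp_mask
from skimage.metrics import peak_signal_noise_ratio

clean = shepp_logan_phantom()
rng = np.random.default_rng(0)
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)   # pretend low-dose slice

smoothed = gaussian(noisy, sigma=1.5)                      # Gaussian noise reduction
sharpened = unsharp_mask(smoothed, radius=2, amount=1.0)   # then edge enhancement

for name, img in (("noisy", noisy), ("smoothed", smoothed),
                  ("smoothed + sharpened", sharpened)):
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    print(f"{name:22s} PSNR: {psnr:.1f} dB")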

The real question we have to ask is whether or not data gets lost in the process. Sapheneia shows charts and so on indicating that various parameters are improved, mainly SNR and CNR (signal-to-noise and contrast-to-noise ratio, respectively), but that doesn't necessarily mean that some details aren't smoothed out in the process. I haven't yet found a paper that proves or disproves my paranoia, but I'm going to keep looking. In the meantime, Sapheneia has apparently jacked up the price on the magic box considerably in light of the dose mania over here in the States.

Another vendor offers similar denoising as part of their advanced imaging suite. Vital Images, now owned by Toshiba, has made it through my door in spite of misgivings from way back, not to mention the fact that Larry D. still works for them, and the new Vitrea looks pretty good. It includes a denoisify function which applies similar filtering to legacy CT images. It can be toggled on and off, something which I'm not sure the Sapheneia box can do. From Vital:
The Noise Reduction menu
Most of the Vitrea protocols contain a new menu button in the bottom right side of the MPR viewer and the 3D viewer to open the menu for the Noise Reduction settings.
The menu contains a list of predefined filters and tools to create, save and modify custom filters.

Applying an existing filter
To apply an existing filter in both the MPR and 3D viewers, open the Noise Reduction menu and select one of the available filters. The filters are sorted from the most conservative at the top of the list (preserving small details: less blurry, but with limited noise reduction) to the most aggressive at the bottom (strongly reducing noise, but possibly blurring thin details in the images).

The Classic filters
Two numbers (Classic) or three numbers (Advanced) follow the custom filter names; they also appear in any filter annotations in the MPR or 3D images. Example: Smooth_25x30.

The first number (here: 25) is the smoothness of the result. Small smoothness values (such as 4 or 8) will preserve the image sharpness but with limited noise reduction. Larger values (12-40) strongly reduce the noise, with an increasing compromise in image sharpness.

The second number (here: 30) is the Contrast. A low contrast value will make the filter more sensitive to the orientation and strength of the image's edges. In other words, it will preserve structures with low contrast, but it will limit the denoising strength.

Higher values for contrast will preserve only stronger structures and will strongly reduce the noise at the expense of blurring low-contrast details.

The Advanced filters
The Advanced filters have 3 values: Smoothness, Contrast and Structure.

The Smoothness and Contrast are discussed earlier in this document (see Classic Filters).

The Structure number indicates how much 3D structure (like small vessels, focal lesions, etc.) you want to preserve at the expense of a reduced noise reduction. Low values for Structure preserve the details but may preserve speckles in the images and reduce the denoising strength. High values for structure will remove speckles but may preserve fewer edges.
Sure sounds like simple image filtering to me. These can be applied to 3D renderings, a nice touch.
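
Vital doesn't publish the algorithm, so what follows is only an analogy on my part: an edge-preserving bilateral filter (here via scikit-image) has essentially the two knobs their manual describes, namely how far to smooth and how strong an edge has to be to survive the smoothing:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_bilateral

clean = shepp_logan_phantom()
noisy = np.clip(clean + np.random.default_rng(0).normal(0.0, 0.1, clean.shape), 0.0, 1.0)

# Roughly analogous to the "Smoothness" and "Contrast" knobs described above:
# turn them up and the noise melts away, along with some of the faint detail.
conservative = denoise_bilateral(noisy, sigma_spatial=1.0, sigma_color=0.05)
aggressive = denoise_bilateral(noisy, sigma_spatial=3.0, sigma_color=0.3)

patch = (slice(20, 60), slice(20, 60))   # a corner of the image that should be uniform
print(f"noise in a flat patch: noisy {noisy[patch].std():.3f}, "
      f"conservative {conservative[patch].std():.3f}, "
      f"aggressive {aggressive[patch].std():.3f}")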

Now, there are far more advanced methods to denoisify than simple filters, and if you really want to punish yourself, have a look at THIS thesis about using wavelets (the same stuff used in teleradiology compression) to do the job. I'm not going there, but you'll be glad to know that there is a GIMP denoisifying wavelet filter app for that.
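
For the curious, scikit-image has a wavelet denoiser built in (by way of the PyWavelets package), so you can play with the idea without wading through the thesis. A toy run on a noisy phantom, nothing more:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_wavelet

clean = shepp_logan_phantom()
noisy = np.clip(clean + np.random.default_rng(0).normal(0.0, 0.1, clean.shape), 0.0, 1.0)

# Transform to wavelet coefficients, shrink the small (mostly noise) ones, and
# transform back; conceptually the same machinery teleradiology compression uses.
denoised = denoise_wavelet(noisy, wavelet='db2', method='BayesShrink', mode='soft')
print(f"flat-patch noise before/after: {noisy[20:60, 20:60].std():.3f} / "
      f"{denoised[20:60, 20:60].std():.3f}")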

In the end, we have a few choices to achieve dose reduction. We can all go out and buy new scanners with the most efficient X-ray tubes and detectors, and the best iterative reconstruction. These babies bring the dose for a cardiac CT, for example, below 1 mSv, which is very, very low indeed. But if you don't have a couple of million lying around, should you invest in one of the after-the-fact denoisers? I'm not yet certain. They can definitely give you a prettier picture, but will they obscure important findings in the process? That is indeed the Million Dollar question.

Of course, the most-overlooked path to dose reduction, like the best but least-respected contraceptive, is abstinence. Think before you order a scan.

Nahhhhhh. Never mind. 

2 comments :

Celticpiping said...

Great, now GIMP is gonna have a dropdown for "Radiological"

:p

great post Dr.

Anonymous said...

but why do I hate ASIR so much?
Sure, we have it. When we got our GE HD750, they installed it, enthused over it, championed it, and were so disappointed, when we said we just don't actually like it.
It makes all images look waxy, plastic, as if you have just woken up, and not yet rubbed the sleep out of your eyes.

For me, I hate ASIR images. Give me a good old FBP image any day. And as for the dose nazis.........well.......no soup for them!