Deep learning in photoacoustic imaging

Deep learning in photoacoustic imaging combines the hybrid imaging modality of photoacoustic imaging (PA) with the rapidly evolving field of deep learning. Photoacoustic imaging is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion.[1] This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality between the optical absorption, the rise in temperature, and the rise in pressure, the ultrasound pressure wave signal can be used to quantify the original optical energy deposition within the tissue.[2]
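The proportionality described above is commonly written p0 = Γ·μa·F, where Γ is the dimensionless Grüneisen parameter. A minimal sketch of this relation follows; the function name and the example values (e.g. Γ ≈ 0.2 for soft tissue) are illustrative assumptions, not measured quantities:

```python
import numpy as np

def initial_pressure(grueneisen, mu_a, fluence):
    """Initial pressure rise p0 = Gamma * mu_a * F.

    grueneisen : dimensionless Grueneisen parameter (~0.2 assumed for soft tissue)
    mu_a       : optical absorption coefficient (1/m)
    fluence    : local optical fluence (J/m^2)
    """
    return grueneisen * mu_a * fluence

# Doubling the absorption doubles the generated pressure, which is
# what lets the detected acoustic signal quantify optical absorption.
p1 = initial_pressure(0.2, 50.0, 100.0)   # 1000.0 Pa
p2 = initial_pressure(0.2, 100.0, 100.0)  # 2000.0 Pa
```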

Depiction of photoacoustic tomography

Photoacoustic imaging has applications of deep learning in both photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM). PACT utilizes wide-field optical excitation and an array of unfocused ultrasound transducers.[1] Similar to other computed tomography methods, the sample is imaged at multiple view angles, which are then used to perform an inverse reconstruction algorithm based on the detection geometry (typically through universal backprojection,[3] modified delay-and-sum,[4] or time reversal [5][6]) to elicit the initial pressure distribution within the tissue. PAM on the other hand uses focused ultrasound detection combined with weakly-focused optical excitation (acoustic resolution PAM or AR-PAM) or tightly-focused optical excitation (optical resolution PAM or OR-PAM).[7] PAM typically captures images point-by-point via a mechanical raster scanning pattern. At each scanned point, the acoustic time-of-flight provides axial resolution while the acoustic focusing yields lateral resolution.[1]
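The delay-and-sum idea mentioned above can be sketched for a linear detector array: each image pixel sums the channel samples at the acoustic time of flight from that pixel to each element. The array geometry, sampling rate, and speed of sound (1500 m/s) are illustrative assumptions, and this is a toy sketch rather than a production beamformer:

```python
import numpy as np

C = 1500.0  # assumed speed of sound in tissue, m/s

def delay_and_sum(channel_data, elem_x, pixels, fs, c=C):
    """Minimal delay-and-sum reconstruction for a linear array at depth z = 0.

    channel_data : (n_elems, n_samples) received PA signals
    elem_x       : (n_elems,) lateral element positions (m)
    pixels       : (n_pix, 2) array of (x, z) pixel coordinates (m)
    fs           : sampling rate (Hz)
    """
    n_elems, n_samples = channel_data.shape
    image = np.zeros(len(pixels))
    for i, (px, pz) in enumerate(pixels):
        # one-way acoustic time of flight from pixel to each element
        dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        image[i] = channel_data[np.arange(n_elems)[valid], idx[valid]].sum()
    return image
```

At the true source position the delayed samples add coherently, which is why the summed value peaks there.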

Applications of deep learning in PACT

One of the first applications of deep learning in PACT was by Reiter et al.,[8] in which a deep neural network was trained to learn spatial impulse responses and locate photoacoustic point sources. The resulting mean axial and lateral point-location errors on 2,412 randomly selected test images were 0.28 mm and 0.37 mm, respectively. Since this initial implementation, applications of deep learning in PACT have branched out primarily into removing artifacts caused by acoustic reflections,[9] sparse sampling,[10][11][12] limited view,[13][14][15] and limited bandwidth.[16][14][17][18] There has also been some recent work in PACT toward using deep learning for wavefront localization.[19] Fusion-based networks have also been developed that combine information from two reconstructions with distinct characteristics to improve the final image.[20]


Using deep learning to locate photoacoustic point sources

Traditional photoacoustic beamforming techniques modeled photoacoustic wave propagation by using the detector array geometry and the time of flight to account for differences in the PA signal arrival time. However, these techniques failed to account for reverberant acoustic signals caused by acoustic reflection, resulting in reflection artifacts that corrupt the true photoacoustic point-source location information. In Reiter et al.,[8] a convolutional neural network (similar to a simple VGG-16[21]-style architecture) took pre-beamformed photoacoustic data as input and output a classification result specifying the 2-D point-source location.
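The geometric time-of-flight model that such beamforming relies on (and that the network must learn to generalize beyond) can be sketched as a brute-force search over candidate source positions; the function names, array geometry, and speed of sound here are illustrative assumptions:

```python
import numpy as np

C = 1500.0  # assumed speed of sound, m/s

def arrival_times(src, elem_x, c=C):
    """One-way PA arrival time at each array element for a point
    source at (x, z); the arrival-time curve is a hyperbola."""
    sx, sz = src
    return np.sqrt((elem_x - sx) ** 2 + sz ** 2) / c

def locate_source(t_obs, elem_x, candidates, c=C):
    """Pick the candidate (x, z) whose modeled arrival-time curve
    best matches the observed one in the least-squares sense."""
    errs = [np.sum((arrival_times(s, elem_x, c) - t_obs) ** 2)
            for s in candidates]
    return candidates[int(np.argmin(errs))]
```

Reflections add extra, spurious arrival-time curves that this simple model cannot explain, which is the failure mode the learned approach targets.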

Removing acoustic reflection artifacts (in the presence of multiple sources and channel noise)

Building on the work of Reiter et al.,[8] Allman et al.[9] utilized a full VGG-16[21] architecture to locate point sources and remove reflection artifacts within raw photoacoustic channel data, even in the presence of multiple sources and channel noise. The network was trained on simulated data produced with the MATLAB k-Wave library, and its results were later confirmed on experimental data.

Ill-posed PACT reconstruction

In PACT, tomographic reconstruction is performed, in which the projections from multiple solid angles are combined to form an image. When reconstruction methods like filtered backprojection or time reversal are applied to data sampled below the Nyquist–Shannon requirement, or acquired with limited bandwidth or a limited view, the inverse problem is ill-posed[22] and the resulting reconstruction contains image artifacts. Traditionally these artifacts were removed with slow iterative methods like total variation minimization, but the advent of deep learning approaches has opened a new avenue that utilizes a priori knowledge from network training to remove artifacts. In the deep learning methods that seek to remove these sparse-sampling, limited-bandwidth, and limited-view artifacts, the typical workflow involves first performing the ill-posed reconstruction technique to transform the pre-beamformed data into a 2-D representation of the initial pressure distribution that contains artifacts. A convolutional neural network (CNN) is then trained to remove the artifacts and produce an artifact-free representation of the ground-truth initial pressure distribution.
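The two-stage workflow can be sketched schematically. One common formulation is residual-style correction, in which the network predicts the artifact component and it is subtracted from the ill-posed reconstruction. The "network" below is a toy stand-in that returns a known artifact pattern, so this only illustrates the data flow, not a trained model:

```python
import numpy as np

def remove_artifacts(recon, predict_artifact):
    """Second stage of the typical pipeline: a trained CNN predicts
    the artifact component of the ill-posed reconstruction, which is
    then subtracted (residual-style correction)."""
    return recon - predict_artifact(recon)

# Toy stand-in for a trained network: here the "artifact" is a known
# streak pattern, so the sketch is checkable end to end.
streaks = np.zeros((8, 8))
streaks[::2, :] = 0.5
predict = lambda img: streaks

recon_with_artifacts = np.ones((8, 8)) + streaks  # stage 1 output
clean = remove_artifacts(recon_with_artifacts, predict)
```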

Using deep learning to remove sparse sampling artifacts

When the density of uniform tomographic view angles is below what is prescribed by the Nyquist–Shannon sampling theorem, the imaging system is said to be performing sparse sampling. Sparse sampling typically occurs as a way of keeping production costs low and improving image acquisition speed.[10] The typical network architectures used to remove these sparse sampling artifacts are U-net[10][12] and Fully Dense (FD) U-net.[11] Both of these architectures contain a compression and a decompression phase. The compression phase learns to compress the image to a latent representation that lacks the imaging artifacts and other details.[23] The decompression phase then combines this with information passed by the skip connections in order to add back image details without reintroducing the details associated with the artifacts.[23] FD U-net modifies the original U-net architecture by including dense blocks that allow layers to utilize information learned by previous layers within the dense block.[11]
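The compression/decompression data flow with a skip connection can be illustrated with a toy, non-trainable sketch (pooling and upsampling stand in for the learned convolutions; a real U-net concatenates and convolves the two paths):

```python
import numpy as np

def down(x):
    """Compression step: 2x2 average pooling discards fine detail."""
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def up(x):
    """Decompression step: nearest-neighbour upsampling."""
    return np.kron(x, np.ones((2, 2)))

def unet_like(x):
    """Skeleton of the U-net data flow: the skip connection re-injects
    the fine detail that the pooled (latent) path threw away."""
    skip = x                 # carried across by the skip connection
    latent = down(x)         # detail-suppressing bottleneck
    upsampled = up(latent)
    # real U-nets concatenate and convolve; here we just stack channels
    return np.stack([upsampled, skip], axis=0)
```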

Removing limited-view artifacts with deep learning

When part of the full solid angle of view is not captured, generally due to geometric limitations, the acquisition is said to have a limited view.[24] As illustrated by the experiments of Davoudi et al.,[12] limited-view corruptions can be directly observed as missing information in the frequency domain of the reconstructed image. Limited view, similar to sparse sampling, makes the initial reconstruction algorithm ill-posed. Prior to deep learning, the limited-view problem was addressed with complex hardware such as acoustic deflectors[25] and full ring-shaped transducer arrays,[12][26] as well as solutions like compressed sensing,[27][28][29][30][31] weight factors,[32] and iterative filtered backprojection.[33][34] The result of this ill-posed reconstruction is imaging artifacts that can be removed by CNNs. The deep learning algorithms used to remove limited-view artifacts include U-net[12][15] and FD U-net,[35] as well as generative adversarial networks (GANs)[14] and volumetric versions of U-net.[13] One GAN implementation of note improved upon U-net by using U-net as a generator and VGG as a discriminator, with the Wasserstein metric and gradient penalty to stabilize training (WGAN-GP).[14]
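The frequency-domain picture of limited-view corruption can be mimicked with a crude mask that keeps only spatial frequencies near one axis; the mask geometry is an illustrative assumption, not the actual point-spread behaviour of any specific detection geometry:

```python
import numpy as np

def limited_view_mask(shape, max_angle_deg):
    """Keep only spatial frequencies whose direction lies within
    +/- max_angle_deg of one axis (a crude limited-view model)."""
    h, w = shape
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    ang = np.degrees(np.abs(np.arctan2(ky, kx)))     # 0..180 degrees
    return np.minimum(ang, 180.0 - ang) <= max_angle_deg

def apply_limited_view(img, max_angle_deg):
    """Zero out the masked-off directional frequency content."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * limited_view_mask(img.shape, max_angle_deg)))
```

With a 90-degree limit nothing is removed; as the angular coverage shrinks, directional frequency content (and thus image energy) is lost, which is the missing-information pattern the CNNs are trained to fill in.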

Limited-bandwidth artifact removal with deep neural networks

The limited-bandwidth problem occurs as a result of the ultrasound transducer array's limited detection frequency bandwidth. The transducer array acts like a band-pass filter in the frequency domain, attenuating both high and low frequencies within the photoacoustic signal.[15] This limited bandwidth can cause artifacts and limit the axial resolution of the imaging system.[14] The primary deep neural network architectures used to remove limited-bandwidth artifacts have been WGAN-GP[14] and modified U-net.[15] Before deep learning, the typical method for removing artifacts and denoising limited-bandwidth reconstructions was Wiener filtering, which helps to expand the PA signal's frequency spectrum.[14] The primary advantage of the deep learning method over Wiener filtering is that Wiener filtering requires a high initial signal-to-noise ratio (SNR), which is not always achievable, while the deep learning model has no such restriction.[14]
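The band-pass behaviour of the transducer, and the limit of a Wiener-style inverse, can be sketched with an idealized 0/1 band-pass response H. Because a hard band-limit destroys out-of-band content entirely, the Wiener filter can only rescale what survives, which is the gap that learned priors aim to fill. All parameter values below are illustrative assumptions:

```python
import numpy as np

def bandpass(signal, fs, f_lo, f_hi):
    """Model the transducer as an ideal band-pass filter."""
    F = np.fft.rfft(signal)
    f = np.fft.rfftfreq(len(signal), 1.0 / fs)
    F[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(F, n=len(signal))

def wiener_recover(measured, fs, f_lo, f_hi, snr=100.0):
    """Classical alternative to the learned approach: apply the
    Wiener inverse H/(H^2 + 1/snr) of the band-pass response H
    (here H is 0/1, so out-of-band bins stay unrecoverable)."""
    F = np.fft.rfft(measured)
    f = np.fft.rfftfreq(len(measured), 1.0 / fs)
    H = ((f >= f_lo) & (f <= f_hi)).astype(float)
    G = H / (H ** 2 + 1.0 / snr)
    return np.fft.irfft(F * G, n=len(measured))
```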

Applications of deep learning in PAM

Depiction of mechanical raster scanning method

Photoacoustic microscopy differs from other forms of photoacoustic tomography in that it uses focused ultrasound detection to acquire images pixel by pixel. PAM images are acquired as time-resolved volumetric data that is typically mapped to a 2-D projection via a Hilbert transform and maximum amplitude projection (MAP).[1] The first application of deep learning to PAM took the form of a motion-correction algorithm.[36] This method was designed to correct the PAM artifacts that occur when an in vivo model moves during scanning; such movement creates the appearance of vessel discontinuities.
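The Hilbert-transform envelope followed by maximum amplitude projection can be sketched in a few lines. The FFT-based analytic-signal construction is standard; the toy volume shape is an assumption for illustration:

```python
import numpy as np

def envelope(volume):
    """Envelope of each A-line via the analytic signal, built by
    zeroing negative frequencies and doubling positive ones (the
    FFT form of the Hilbert transform), along the last (time) axis."""
    n = volume.shape[-1]
    X = np.fft.fft(volume, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h, axis=-1))

def map_projection(volume):
    """Maximum amplitude projection: collapse the time axis of an
    (x, y, t) PAM volume to a 2-D en-face image of envelope maxima."""
    return envelope(volume).max(axis=-1)
```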

Deep learning to remove motion artifacts in PAM

The two primary motion artifact types addressed by deep learning in PAM are displacements in the vertical and tilted directions. Chen et al.[36] used a simple three-layer convolutional neural network, with each layer represented by a weight matrix and a bias vector, to remove the PAM motion artifacts. Two of the convolutional layers contain ReLU activation functions, while the last has no activation function.[36] Using this architecture, kernel sizes of 3 × 3, 4 × 4, and 5 × 5 were tested, with the largest kernel size of 5 × 5 yielding the best results.[36] After training, the motion-correction model performed well on both simulated and in vivo data.[36]
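The reported layer layout (two conv+ReLU layers followed by a linear convolution) can be sketched as a plain forward pass; the weights below are illustrative placeholders, not the trained parameters:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def three_layer_cnn(x, kernels, biases):
    """Forward pass matching the reported layout: conv+ReLU,
    conv+ReLU, then a final conv with no activation function."""
    for k, b in zip(kernels[:-1], biases[:-1]):
        x = np.maximum(conv2d(x, k) + b, 0.0)    # ReLU layers
    return conv2d(x, kernels[-1]) + biases[-1]   # linear output layer
```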

See also

Photoacoustic imaging

Photoacoustic microscopy

Photoacoustic effect

References

  1. Wang, Lihong V. (2009-08-29). "Multiscale photoacoustic microscopy and computed tomography". Nature Photonics. 3 (9): 503–509. Bibcode:2009NaPho...3..503W. doi:10.1038/nphoton.2009.157. ISSN 1749-4885. PMC 2802217. PMID 20161535.
  2. Beard, Paul (2011-08-06). "Biomedical photoacoustic imaging". Interface Focus. 1 (4): 602–631. doi:10.1098/rsfs.2011.0028. ISSN 2042-8898. PMC 3262268. PMID 22866233.
  3. Xu, Minghua; Wang, Lihong V. (2005-01-19). "Universal back-projection algorithm for photoacoustic computed tomography". Physical Review E. 71 (1): 016706. Bibcode:2005PhRvE..71a6706X. doi:10.1103/PhysRevE.71.016706. hdl:1969.1/180492. PMID 15697763.
  4. Kalva, Sandeep Kumar; Pramanik, Manojit (August 2016). "Experimental validation of tangential resolution improvement in photoacoustic tomography using modified delay-and-sum reconstruction algorithm". Journal of Biomedical Optics. 21 (8): 086011. Bibcode:2016JBO....21h6011K. doi:10.1117/1.JBO.21.8.086011. ISSN 1083-3668. PMID 27548773.
  5. Bossy, Emmanuel; Daoudi, Khalid; Boccara, Albert-Claude; Tanter, Mickael; Aubry, Jean-François; Montaldo, Gabriel; Fink, Mathias (2006-10-30). "Time reversal of photoacoustic waves" (PDF). Applied Physics Letters. 89 (18): 184108. Bibcode:2006ApPhL..89r4108B. doi:10.1063/1.2382732. ISSN 0003-6951.
  6. Treeby, Bradley E; Zhang, Edward Z; Cox, B T (2010-09-24). "Photoacoustic tomography in absorbing acoustic media using time reversal". Inverse Problems. 26 (11): 115003. Bibcode:2010InvPr..26k5003T. doi:10.1088/0266-5611/26/11/115003. ISSN 0266-5611.
  7. Wang, Lihong V.; Yao, Junjie (2016-07-28). "A Practical Guide to Photoacoustic Tomography in the Life Sciences". Nature Methods. 13 (8): 627–638. doi:10.1038/nmeth.3925. ISSN 1548-7091. PMC 4980387. PMID 27467726.
  8. Reiter, Austin; Bell, Muyinatu A. Lediju (2017-03-03). Oraevsky, Alexander A; Wang, Lihong V (eds.). "A machine learning approach to identifying point source locations in photoacoustic data". Photons Plus Ultrasound: Imaging and Sensing 2017. International Society for Optics and Photonics. 10064: 100643J. Bibcode:2017SPIE10064E..3JR. doi:10.1117/12.2255098.
  9. Allman, Derek; Reiter, Austin; Bell, Muyinatu A. Lediju (June 2018). "Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning". IEEE Transactions on Medical Imaging. 37 (6): 1464–1477. doi:10.1109/TMI.2018.2829662. ISSN 1558-254X. PMC 6075868. PMID 29870374.
  10. Antholzer, Stephan; Haltmeier, Markus; Schwab, Johannes (2019-07-03). "Deep learning for photoacoustic tomography from sparse data". Inverse Problems in Science and Engineering. 27 (7): 987–1005. doi:10.1080/17415977.2018.1518444. ISSN 1741-5977. PMC 6474723. PMID 31057659.
  11. Guan, Steven; Khan, Amir A.; Sikdar, Siddhartha; Chitnis, Parag V. (February 2020). "Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal". IEEE Journal of Biomedical and Health Informatics. 24 (2): 568–576. arXiv:1808.10848. doi:10.1109/jbhi.2019.2912935. ISSN 2168-2194. PMID 31021809.
  12. Davoudi, Neda; Deán-Ben, Xosé Luís; Razansky, Daniel (2019-09-16). "Deep learning optoacoustic tomography with sparse data". Nature Machine Intelligence. 1 (10): 453–460. doi:10.1038/s42256-019-0095-3. ISSN 2522-5839.
  13. Hauptmann, Andreas; Lucka, Felix; Betcke, Marta; Huynh, Nam; Adler, Jonas; Cox, Ben; Beard, Paul; Ourselin, Sebastien; Arridge, Simon (June 2018). "Model-Based Learning for Accelerated, Limited-View 3-D Photoacoustic Tomography". IEEE Transactions on Medical Imaging. 37 (6): 1382–1393. doi:10.1109/TMI.2018.2820382. ISSN 1558-254X. PMID 29870367.
  14. Vu, Tri; Li, Mucong; Humayun, Hannah; Zhou, Yuan; Yao, Junjie (2020-03-25). "Feature article: A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer". Experimental Biology and Medicine. 245 (7): 597–605. doi:10.1177/1535370220914285. ISSN 1535-3702. PMC 7153213. PMID 32208974.
  15. Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena (2018-02-19). Wang, Lihong V; Oraevsky, Alexander A (eds.). "Reconstruction of initial pressure from limited view photoacoustic images using deep learning". Photons Plus Ultrasound: Imaging and Sensing 2018. International Society for Optics and Photonics. 10494: 104942S. Bibcode:2018SPIE10494E..2SW. doi:10.1117/12.2288353. ISBN 9781510614734.
  16. Awasthi, Navchetan (28 February 2020). "Deep Neural Network Based Sinogram Super-resolution and Bandwidth Enhancement for Limited-data Photoacoustic Tomography". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. doi:10.1109/TUFFC.2020.2977210.
  17. Awasthi, Navchetan. "Sinogram super-resolution and denoising convolutional neural network (SRCN) for limited data photoacoustic tomography". arXiv. doi:10.13140/RG.2.2.21810.76489.
  18. Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K. (2017-11-02). "Deep neural network-based bandwidth enhancement of photoacoustic data". Journal of Biomedical Optics. 22 (11): 116001. Bibcode:2017JBO....22k6001G. doi:10.1117/1.jbo.22.11.116001. ISSN 1083-3668. PMID 29098811.
  19. Johnstonbaugh, Kerrick; Agrawal, Sumit; Durairaj, Deepit Abhishek; Fadden, Christopher; Dangi, Ajay; Karri, Sri Phani Krishna; Kothapalli, Sri-Rajasekhar (2020). "A Deep Learning approach to Photoacoustic Wavefront Localization in Deep-Tissue Medium". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control: 1. doi:10.1109/tuffc.2020.2964698. ISSN 0885-3010. PMID 31944951.
  20. Awasthi, Navchetan (3 April 2019). "PA-Fuse: deep supervised approach for the fusion of photoacoustic images with distinct reconstruction characteristics". Biomedical Optics Express. 10 (5): 2227–2243. doi:10.1364/BOE.10.002227.
  21. Simonyan, Karen; Zisserman, Andrew (2015-04-10). "Very Deep Convolutional Networks for Large-Scale Image Recognition". arXiv:1409.1556 [cs.CV].
  22. Agranovsky, Mark; Kuchment, Peter (2007-08-28). "Uniqueness of reconstruction and an inversion procedure for thermoacoustic and photoacoustic tomography with variable sound speed". Inverse Problems. 23 (5): 2089–2102. arXiv:0706.0598. Bibcode:2007InvPr..23.2089A. doi:10.1088/0266-5611/23/5/016. ISSN 0266-5611.
  23. Ronneberger, Olaf; Fischer, Philipp; Brox, Thomas (2015), "U-Net: Convolutional Networks for Biomedical Image Segmentation", Lecture Notes in Computer Science, Springer International Publishing, pp. 234–241, arXiv:1505.04597, Bibcode:2015arXiv150504597R, doi:10.1007/978-3-319-24574-4_28, ISBN 978-3-319-24573-7
  24. Xu, Yuan; Wang, Lihong V.; Ambartsoumian, Gaik; Kuchment, Peter (2004-03-11). "Reconstructions in limited-view thermoacoustic tomography". Medical Physics. 31 (4): 724–733. Bibcode:2004MedPh..31..724X. doi:10.1118/1.1644531. ISSN 0094-2405. PMID 15124989.
  25. Huang, Bin; Xia, Jun; Maslov, Konstantin; Wang, Lihong V. (2013-11-27). "Improving limited-view photoacoustic tomography with an acoustic reflector". Journal of Biomedical Optics. 18 (11): 110505. Bibcode:2013JBO....18k0505H. doi:10.1117/1.jbo.18.11.110505. ISSN 1083-3668. PMC 3818029. PMID 24285421.
  26. Xia, Jun; Chatni, Muhammad R.; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V. (2012). "Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo". Journal of Biomedical Optics. 17 (5): 050506. Bibcode:2012JBO....17e0506X. doi:10.1117/1.jbo.17.5.050506. ISSN 1083-3668. PMC 3382342. PMID 22612121.
  27. Sandbichler, M.; Krahmer, F.; Berer, T.; Burgholzer, P.; Haltmeier, M. (January 2015). "A Novel Compressed Sensing Scheme for Photoacoustic Tomography". SIAM Journal on Applied Mathematics. 75 (6): 2475–2494. arXiv:1501.04305. Bibcode:2015arXiv150104305S. doi:10.1137/141001408. ISSN 0036-1399.
  28. Provost, J.; Lesage, F. (April 2009). "The Application of Compressed Sensing for Photo-Acoustic Tomography". IEEE Transactions on Medical Imaging. 28 (4): 585–594. doi:10.1109/tmi.2008.2007825. ISSN 0278-0062. PMID 19272991.
  29. Haltmeier, Markus; Sandbichler, Michael; Berer, Thomas; Bauer-Marschallinger, Johannes; Burgholzer, Peter; Nguyen, Linh (June 2018). "A sparsification and reconstruction strategy for compressed sensing photoacoustic tomography". The Journal of the Acoustical Society of America. 143 (6): 3838–3848. arXiv:1801.00117. Bibcode:2018ASAJ..143.3838H. doi:10.1121/1.5042230. ISSN 0001-4966. PMID 29960458.
  30. Liang, Jinyang; Zhou, Yong; Winkler, Amy W.; Wang, Lidai; Maslov, Konstantin I.; Li, Chiye; Wang, Lihong V. (2013-07-22). "Random-access optical-resolution photoacoustic microscopy using a digital micromirror device". Optics Letters. 38 (15): 2683–6. Bibcode:2013OptL...38.2683L. doi:10.1364/ol.38.002683. ISSN 0146-9592. PMC 3784350. PMID 23903111.
  31. Duarte, Marco F.; Davenport, Mark A.; Takhar, Dharmpal; Laska, Jason N.; Sun, Ting; Kelly, Kevin F.; Baraniuk, Richard G. (March 2008). "Single-pixel imaging via compressive sampling". IEEE Signal Processing Magazine. 25 (2): 83–91. Bibcode:2008ISPM...25...83D. doi:10.1109/msp.2007.914730. hdl:1911/21682. ISSN 1053-5888.
  32. Paltauf, G; Nuster, R; Burgholzer, P (2009-05-08). "Weight factors for limited angle photoacoustic tomography". Physics in Medicine and Biology. 54 (11): 3303–3314. Bibcode:2009PMB....54.3303P. doi:10.1088/0031-9155/54/11/002. ISSN 0031-9155. PMC 3166844. PMID 19430108.
  33. Liu, Xueyan; Peng, Dong; Ma, Xibo; Guo, Wei; Liu, Zhenyu; Han, Dong; Yang, Xin; Tian, Jie (2013-05-14). "Limited-view photoacoustic imaging based on an iterative adaptive weighted filtered backprojection approach". Applied Optics. 52 (15): 3477–83. Bibcode:2013ApOpt..52.3477L. doi:10.1364/ao.52.003477. ISSN 1559-128X. PMID 23736232.
  34. Ma, Songbo; Yang, Sihua; Guo, Hua (2009-12-15). "Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction". Journal of Applied Physics. 106 (12): 123104–123104–6. Bibcode:2009JAP...106l3104M. doi:10.1063/1.3273322. ISSN 0021-8979.
  35. Guan, Steven; Khan, Amir A.; Sikdar, Siddhartha; Chitnis, Parag V. (2019-11-11). "Limited View and Sparse Photoacoustic Tomography for Neuroimaging with Deep Learning". arXiv:1911.04357 [eess.IV].
  36. Chen, Xingxing; Qi, Weizhi; Xi, Lei (2019-10-29). "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy". Visual Computing for Industry, Biomedicine, and Art. 2 (1): 12. doi:10.1186/s42492-019-0022-9. ISSN 2524-4442. PMC 7099543. PMID 32240397.
