Show simple item record

dc.contributor.author: Cronin, Neil J.
dc.contributor.author: Finni, Taija
dc.contributor.author: Seynnes, Olivier R.
dc.date.accessioned: 2020-10-14T09:20:09Z
dc.date.available: 2020-10-14T09:20:09Z
dc.date.created: 2020-08-24T12:19:46Z
dc.date.issued: 2020
dc.identifier.citation: Computer Methods and Programs in Biomedicine. 2020, 196, 105583.
dc.identifier.uri: https://hdl.handle.net/11250/2682688
dc.description: This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
dc.description.abstract:

Background and objective: Deep learning approaches are common in image processing, but often rely on supervised learning, which requires a large volume of training images, usually accompanied by hand-crafted labels. As labelled data are often not available, it would be desirable to develop methods that allow such data to be compiled automatically. In this study, we used a Generative Adversarial Network (GAN) to generate realistic B-mode musculoskeletal ultrasound images, and tested the suitability of two automated labelling approaches.

Methods: We used a model comprising two GANs, each trained to transfer an image from one domain to the other. The two inputs were a set of 100 longitudinal images of the gastrocnemius medialis muscle and a set of 100 synthetic segmented masks featuring two aponeuroses and a random number of 'fascicles'. The model output a set of synthetic ultrasound images and an automated segmentation of each real input image. This automated segmentation process was the first of the two approaches we assessed. The second approach involved synthesising ultrasound images and then feeding them into an ImageJ/Fiji-based automated algorithm, to determine whether it could detect the aponeuroses and muscle fascicles.

Results: Histogram distributions were similar between real and synthetic images, but synthetic images displayed less variation between samples and a narrower range. Mean entropy values were statistically similar (real: 6.97, synthetic: 7.03; p = 0.218), but the range was much narrower for synthetic images (6.91–7.11 versus 6.30–7.62). When comparing GAN-derived and manually labelled segmentations, intersection-over-union values, denoting the degree of overlap between aponeurosis labels, varied between 0.0280 and 0.612 (mean ± SD: 0.312 ± 0.159), and pennation angles were higher for the GAN-derived segmentations (25.1° vs. 19.3°; p < 0.001). For the second segmentation approach, the algorithm generally performed equally well on synthetic and real images, yielding pennation angles within the physiological range (13.8–20°).

Conclusions: We used a GAN to generate realistic B-mode ultrasound images and extracted muscle architectural parameters from these images automatically. This approach could enable the generation of large labelled datasets for image segmentation tasks, and may also be useful for data sharing. Automatic generation and labelling of ultrasound images minimises user input and overcomes several limitations associated with manual analysis.
dc.language.iso: eng
dc.subject: ultrasound
dc.subject: muscle
dc.subject: deep learning
dc.subject: medical imaging
dc.subject: generative adversarial network
dc.subject: cycleGAN
dc.subject: synthetic image
dc.title: Using deep learning to generate synthetic B-mode musculoskeletal ultrasound images
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: © 2020 The Author(s)
dc.source.pagenumber: 7
dc.source.volume: 196
dc.source.journal: Computer Methods and Programs in Biomedicine
dc.identifier.doi: 10.1016/j.cmpb.2020.105583
dc.identifier.cristin: 1824776
dc.description.localcode: Institutt for fysisk prestasjonsevne / Department of Physical Performance
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
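The abstract reports intersection-over-union (IoU) values for comparing GAN-derived and manually labelled aponeurosis segmentations. A minimal sketch of how such a metric is typically computed over binary masks is shown below; the function name `iou` and the toy masks are illustrative assumptions, not code from the paper.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) of two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return float(np.logical_and(a, b).sum() / union)

# Toy example: two 4x4 aponeurosis-like bands overlapping in one row.
a = np.zeros((4, 4)); a[0:2, :] = 1
b = np.zeros((4, 4)); b[1:3, :] = 1
print(iou(a, b))  # 4 shared pixels over a union of 12 -> ~0.333
```

Applied pixel-wise to each aponeurosis label pair, values near 1 indicate close agreement between the two segmentations, while values near 0 (such as the 0.0280 lower bound reported) indicate almost no overlap.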


Associated file(s)


This item appears in the following collection(s)
