BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230831T095746Z
LOCATION:Sertig
DTSTART;TZID=Europe/Stockholm:20230627T150000
DTEND;TZID=Europe/Stockholm:20230627T153000
UID:submissions.pasc-conference.org_PASC23_sess182_pap109@linklings.com
SUMMARY:Scaling Resolution of Gigapixel Whole Slide Images Using Spatial D
 ecomposition on Convolutional Neural Networks
DESCRIPTION:Paper\n\nAristeidis Tsaris (Oak Ridge National Laboratory)
 ; Josh Romero and Thorsten Kurth (NVIDIA Inc.); and Jacob Hinkle, Hong
 -Jun Yoon, Feiyi Wang, Sajal Dash, and Georgia Tourassi (Oak Ridge Nat
 ional Laboratory)\n\nGigapixel images are prevalent in scientific doma
 ins ranging from remote sensing and satellite imagery to microscopy. H
 owever, training a deep learning model at the native resolution of suc
 h images is challenging, both in overcoming resource limits (e.g., HB
 M memory constraints) and in scaling to a large number of GPUs. In th
 is paper, we trained Residual Neural Networks (ResNets) on 22,528 x 22
 ,528-pixel images using a distributed spatial decomposition method o
 n 2,304 GPUs on the Summit supercomputer. We applied our method to a W
 hole Slide Imaging (WSI) dataset from The Cancer Genome Atlas (TCGA) d
 atabase. WSI images can be 100,000 x 100,000 pixels or larger, and i
 n this work we studied the effect of image resolution on a classificat
 ion task while achieving state-of-the-art AUC scores. Moreover, our ap
 proach does not need pixel-level labels, since we avoid patching the W
 SI images entirely, while adding the capability to train on arbitraril
 y large images. This is achieved through a distributed spatial decompo
 sition method that leverages the non-blocking fat-tree interconnect o
 f the Summit architecture, which enables direct GPU-to-GPU communicati
 on. Finally, we present detailed performance analysis results, as wel
 l as a comparison with a data-parallel approach where possible.\n\nDom
 ain: Chemistry and Materials, Climate, Weather and Earth Sciences, Com
 puter Science, Machine Learning, and Applied Mathematics\n\nSession Ch
 air: Mauro Bianco (ETH Zurich / CSCS)
END:VEVENT
END:VCALENDAR
