BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20181221T160904Z
LOCATION:C2/3/4 Ballroom
DTSTART;TZID=America/Chicago:20181113T083000
DTEND;TZID=America/Chicago:20181113T170000
UID:submissions.supercomputing.org_SC18_sess325_spost127@linklings.com
SUMMARY:Precomputing Outputs of Hidden Layers to Speed Up Deep Neural Netw
 ork Training
DESCRIPTION:ACM Student Research Competition, Poster\nTech Program Reg Pas
 s, Exhibits Reg Pass\n\nPrecomputing Outputs of Hidden Layers to Speed Up 
 Deep Neural Network Training\n\nShrestha\n\nDeep learning has recently eme
 rged as a powerful technique for many tasks including image classification
 . A key bottleneck of deep learning is that the training phase takes a lot
  of time, since state-of-the-art deep neural networks have millions of par
 ameters and hundreds of hidden layers. The early layers of these deep neur
 al networks have the fewest parameters but account for the most computatio
 n.\n
 \nIn this work, we reduce training time by progressively freezing hidden l
 ayers, pre-computing their output and excluding them from training in both
  forward and backward paths in subsequent iterations. We compare this tech
 nique to the most closely related approach for speeding up the training pr
 ocess of neural networks.\n\nThrough experiments on two widely used dataset
 s for image classification, we empirically demonstrate that our approach c
 an yield savings of up to 25% wall-clock time during training with no loss
  in accuracy.
URL:https://sc18.supercomputing.org/presentation/?id=spost127&sess=sess325
END:VEVENT
END:VCALENDAR

