An empirical study on improving the speed and generalization of neural networks using a parallel circuit approach

dc.contributor.author: Phan, Kien Tuong
dc.contributor.author: Maul, Tomas Henrique
dc.contributor.author: Vu, Tuong Thuy
dc.date.issued: 2016
dc.description.abstract: One of the common problems of neural networks, especially deep ones, is their lengthy training time. We attempt to address this problem at the algorithmic level, proposing a simple parallel design inspired by the parallel circuits found in the human retina. To avoid large matrix calculations, we split the original network vertically into parallel circuits and let the backpropagation algorithm flow through each subnetwork independently. Experimental results demonstrate the speed advantage of the proposed approach, but also show that this advantage depends on multiple factors. The results further suggest that parallel circuits improve the generalization ability of neural networks, presumably due to automatic problem decomposition. By studying network sparsity, we partially support this theory and propose a potential method for improving the design.
dc.format: Pp. 780–796
dc.identifier.uri: https://thuvienso.hoasen.edu.vn/handle/123456789/10895
dc.language.iso: en
dc.source: International Journal of Parallel Programming. Volume 45
dc.subject: Neural networks
dc.subject: Parallel circuits
dc.subject: Problem decomposition
dc.subject: Backpropagation
dc.title: An empirical study on improving the speed and generalization of neural networks using a parallel circuit approach
dc.type: Article
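The vertical split described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the paper's code: we pick two circuits, a one-hidden-layer subnetwork per circuit, averaged outputs, and a toy XOR task; the class and function names (`Circuit`, `predict`, `mse`) are hypothetical. The point it shows is that each circuit owns small weight matrices and backpropagation stays entirely inside its circuit, avoiding one large matrix multiplication per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Circuit:
    """One parallel circuit: an independent one-hidden-layer subnetwork.

    Splitting the network vertically means each circuit owns small weight
    matrices, so backpropagation inside a circuit never touches the large
    matrices a monolithic network of the same total width would need.
    """
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)  # circuit-local hidden activations
        return self.h @ self.W2

    def backward(self, X, grad_out, lr):
        # Backprop flows through this circuit only; no cross-circuit terms.
        dW2 = self.h.T @ grad_out
        dh = (grad_out @ self.W2.T) * self.h * (1.0 - self.h)
        dW1 = X.T @ dh
        self.W2 -= lr * dW2
        self.W1 -= lr * dW1

def predict(circuits, X):
    # Merge the circuits' outputs by averaging (one simple choice).
    return sum(c.forward(X) for c in circuits) / len(circuits)

def mse(circuits, X, y):
    return float(np.mean((predict(circuits, X) - y) ** 2))

# Toy task: XOR, decomposed across two parallel circuits.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

circuits = [Circuit(n_in=2, n_hidden=4) for _ in range(2)]
initial_mse = mse(circuits, X, y)

for _ in range(3000):
    pred = predict(circuits, X)
    # Each circuit receives the same averaged error signal, then updates
    # its own weights independently of the other circuits.
    grad = (pred - y) / (len(circuits) * len(X))
    for c in circuits:
        c.backward(X, grad, lr=0.5)

final_mse = mse(circuits, X, y)
```

Because the circuits never exchange gradients, each subnetwork can in principle be trained on a separate core or device, which is where the speed advantage reported in the abstract would come from.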
