A Hybrid Optimization Algorithm for Learning Deep Models
Journal of Advances in Computer Research
Article 4, Volume 9, Issue 4 - Serial Number 34, February 2018, Pages 59-71 — Full Text (PDF, 622.08 K)
Article Type: Original Manuscript
Authors
Farnaz Hoseini 1; Asadollah Shahbahrami* 2; Peyman Bayat 1
1 Department of Computer Engineering, Rasht Branch, Islamic Azad University, Rasht, Iran
2 Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
Abstract
Deep learning is a subset of machine learning that is widely used in Artificial Intelligence (AI) fields such as natural language processing and machine vision. Learning algorithms require optimization in several respects; in general, model-based inference amounts to solving an optimization problem. In deep learning, the most important problem solved by optimization is neural network training, which can occupy thousands of machines for months. In the present study, basic optimization algorithms for deep learning were evaluated. First, a performance criterion was defined on a training dataset; together with a regularization term, it forms the objective function. The optimization process then seeks the parameters that minimize this objective. Finally, to evaluate the performance of different optimization algorithms, recent algorithms for training neural networks were compared on the segmentation of brain images. The results showed that the proposed hybrid optimization algorithm outperformed the other tested methods because of its hierarchical, deeper feature extraction.
Keywords
Deep Learning; Optimization Algorithms; Stochastic Gradient Descent; Momentum; Nesterov; Adam
Statistics — Article views: 672; Full-text downloads: 224