Annotation in English
Neural networks can be trained to perform well on particular tasks, but we rarely know why they work so well. Because of their complicated architectures and enormous numbers of parameters, we usually end up with well-performing black boxes, and it is hard, if not impossible, to make targeted changes to a trained model. In this thesis we focus on network optimization: we make networks small and simple by removing unimportant synapses while preserving the classification accuracy of the original fully-connected networks. In our experience, at least 90% of the synapses in fully-connected networks are redundant. A pruned network consists of important parts only, so we can extract input-output rules and make statements about individual parts of the network. To identify which synapses are unimportant, we introduce a new measure. The methods are demonstrated on six examples,
where we show the ability of our pruning algorithm 1) to find a minimal network structure; 2) to select features; 3) to detect patterns among samples; 4) to partially demystify a complicated network; 5) to drastically reduce training and prediction time. The pruning algorithm is general and applicable to any classification problem.
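To make the pruning idea concrete, the sketch below shows one pruning step in Python. The annotation does not specify the thesis's importance measure, so as an illustrative stand-in only, the sketch scores each synapse by its weight magnitude |w|; the prune_weights function, its parameters, and the 90% fraction (matching the redundancy level reported above) are assumptions for this example, not the thesis's actual method.

    # Minimal sketch of one synapse-pruning step, assuming a magnitude-based
    # importance score as a stand-in for the thesis's own (unspecified) measure.
    import numpy as np

    def prune_weights(weights: np.ndarray, fraction: float) -> np.ndarray:
        """Zero out the given fraction of synapses with the lowest importance."""
        importance = np.abs(weights)              # assumed score: |w|
        threshold = np.quantile(importance, fraction)
        mask = importance > threshold             # keep only synapses above the cutoff
        return weights * mask

    # Example: prune 90% of the synapses of one fully-connected layer.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 32))
    w_pruned = prune_weights(w, 0.9)
    print(f"kept {np.count_nonzero(w_pruned) / w.size:.0%} of synapses")

In practice such a step would be interleaved with retraining, and the network's classification accuracy would be checked after each round of pruning, as the thesis's evaluation against reference models suggests.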
Research Plan
1. Study the training of neural networks for classification and the possibilities for optimizing/pruning the number of network parameters.
2. Design an algorithm for analyzing and pruning the parameters of a neural network.
3. Prepare suitable test tasks.
4. Evaluate the proposed algorithm and compare the results with reference models.
Recommended resources
[1] Mozer, M. C., Smolensky, P. Skeletonization: A Technique for Trimming the Fat from a Network via Relevance Assessment. CU-CS-421-89, Computer Science Technical Reports, 1989.
[2] Karnin, E. D. A Simple Procedure for Pruning Back-Propagation Trained Neural Networks. IEEE Transactions on Neural Networks, 1990.
[3] LeCun, Y., Denker, J. S., Solla, S. A. Optimal Brain Damage. Advances in Neural Information Processing Systems, 1990.
[4] Psutka, J., Müller, L., Matoušek, J., Radová, V. Mluvíme s počítačem česky. Academia, Praha, 2006.