Sungho Shin
Rebellions Inc.
Verified email at rebellions.ai
Title
Cited by
Year
Resiliency of deep neural networks under quantization
W Sung, S Shin, K Hwang
arXiv preprint arXiv:1511.06488, 2015
Cited by 178 · 2015
FPGA-based low-power speech recognition with recurrent neural networks
M Lee, K Hwang, J Park, S Choi, S Shin, W Sung
2016 IEEE International Workshop on Signal Processing Systems (SiPS), 230-235, 2016
Cited by 98 · 2016
Fixed-point performance analysis of recurrent neural networks
S Shin, K Hwang, W Sung
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 89 · 2016
Dynamic hand gesture recognition for wearable devices with low complexity recurrent neural networks
S Shin, W Sung
2016 IEEE International Symposium on Circuits and Systems (ISCAS), 2274-2277, 2016
Cited by 73 · 2016
Fully neural network based speech recognition on mobile and embedded devices
J Park, Y Boo, I Choi, S Shin, W Sung
Advances in neural information processing systems 31, 2018
Cited by 48 · 2018
Fixed-point optimization of deep neural networks with adaptive step size retraining
S Shin, Y Boo, W Sung
2017 IEEE International conference on acoustics, speech and signal …, 2017
Cited by 45 · 2017
Stochastic precision ensemble: self-knowledge distillation for quantized deep neural networks
Y Boo, S Shin, J Choi, W Sung
Proceedings of the AAAI Conference on Artificial Intelligence 35 (8), 6794-6802, 2021
Cited by 23 · 2021
Knowledge distillation for optimization of quantized deep neural networks
S Shin, Y Boo, W Sung
arXiv preprint, 2019
Cited by 19* · 2019
Quantized neural networks: Characterization and holistic optimization
Y Boo, S Shin, W Sung
2020 IEEE Workshop on Signal Processing Systems (SiPS), 1-6, 2020
Cited by 12 · 2020
Generative Knowledge Transfer for Neural Language Models
S Shin, K Hwang, W Sung
arXiv preprint arXiv:1608.04077, 2016
Cited by 11 · 2016
Memorization capacity of deep neural networks under parameter quantization
Y Boo, S Shin, W Sung
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 9 · 2019
HLHLp: Quantized neural networks training for reaching flat minima in loss surface
S Shin, J Park, Y Boo, W Sung
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 5784-5791, 2020
Cited by 6 · 2020
SQWA: Stochastic quantized weight averaging for improving the generalization capability of low-precision deep neural networks
S Shin, Y Boo, W Sung
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 4 · 2021
S-SGD: Symmetrical stochastic gradient descent with weight noise injection for reaching flat minima
W Sung, I Choi, J Park, S Choi, S Shin
arXiv preprint arXiv:2009.02479, 2020
Cited by 4 · 2020
Quantized neural network design under weight capacity constraint
S Shin, K Hwang, W Sung
arXiv preprint arXiv:1611.06342, 2016
Cited by 4 · 2016
Workload-aware automatic parallelization for multi-GPU DNN training
S Shin, Y Jo, J Choi, S Venkataramani, V Srinivasan, W Sung
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 3 · 2019
LightTrader: World’s first AI-enabled High-Frequency Trading Solution with 16 TFLOPS/64 TOPS Deep Learning Inference Accelerators
H Kim, S Yoo, J Bae, K Bong, Y Boo, K Charfi, HE Kim, HS Kim, J Kim, ...
2022 IEEE Hot Chips 34 Symposium (HCS), 1-10, 2022
Cited by 1 · 2022
2.4 ATOMUS: A 5nm 32TFLOPS/128TOPS ML System-on-Chip for Latency Critical Applications
CH Yu, HE Kim, S Shin, K Bong, H Kim, Y Boo, J Bae, M Kwon, K Charfi, ...
2024 IEEE International Solid-State Circuits Conference (ISSCC) 67, 42-44, 2024
2024
Neural network training method and apparatus
S Shin, W Sung, Y Boo
US Patent App. 17/526,221, 2022
2022
Quantization of Deep Neural Networks for Improving the Generalization Capability
S Shin (신성호)
Graduate School, Seoul National University, 2020
2020