Agostinelli F, Hoffman M, Sadowski P, Baldi P (2015) Learning activation functions to improve deep neural networks. https://doi.org/10.48550/arXiv.1412.6830
Amato G, Amelio A, Caroprese L et al (2024) AI for sustainability: research at Ud’A node. In: CEUR workshop proceedings, vol 3762, pp 494–498
Angileri F, Lombardi G, Fois A (2025) A systematization of the Wagner framework: graph theory conjectures and reinforcement learning. Lect Notes Comput Sci 15243:325–338
Ballester-Ripoll R, Paredes EG, Pajarola R (2019) Sobol tensor trains for global sensitivity analysis. Reliab Eng Syst Saf 183:311–322
Chen T, Chen H, Liu R, Page C, LePage R (1992) A constructive proof and an extension of Cybenko’s approximation theorem. Comput Sci Stat, pp 163–168
Chollet F (2017) Xception: deep learning with depthwise separable convolutions. In: IEEE CVPR 2017, pp 1800–1807. https://doi.org/10.1109/CVPR.2017.195
Clevert D-A, Unterthiner T, Hochreiter S (2016) Fast and accurate deep network learning by exponential linear units (ELUs). https://doi.org/10.48550/arXiv.1511.07289
CuriosAI (2023) Increasing biases can be more efficient than increasing weights. https://github.com/CuriosAI/dac-dev
Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2(4):303–314
DeVore RA, Hanin B, Petrova G (2021) Neural network approximation. Acta Numer 30:327–444
Di Cecco A, Metta C, Fantozzi M (2024) GloNets: globally connected neural networks. In: IDA 2024. Lect Notes Comput Sci, vol 14641, pp 53–64
Di Cecco A, Papini A, Metta C (2025) SwitchPath: enhancing exploration in neural networks learning dynamics. Lect Notes Comput Sci, vol 15243, pp 275–291
Ding X, Zhang X, Ma N, Han J, Ding G, Sun J (2021) RepVGG: making VGG-style ConvNets great again. In: IEEE/CVF CVPR, pp 13733–13742
Dosovitskiy A, Beyer L, Kolesnikov A et al (2021) An image is worth 16x16 words: transformers for image recognition at scale. In: ICLR. https://openreview.net/forum?id=YicbFdNTTy
Fang L, Liu G, Li S, Ghamisi P, Benediktsson JA (2019) Hyperspectral image classification with squeeze multibias network. IEEE Trans Geosci Remote Sens 57(3):1291–1301
Goodfellow IJ, Warde-Farley D, Mirza M, Courville AC, Bengio Y (2013) Maxout networks. In: ICML 2013. JMLR workshop and conference proceedings, vol 28, pp 1319–1327
Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville AC, Bengio Y (2020) Generative adversarial networks. Commun ACM 63(11):139–144
He K, Zhang X, Ren S, Sun J (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: ICCV 2015, pp 1026–1034. https://doi.org/10.1109/ICCV.2015.123
He K, Zhang X, Ren S, Sun J (2016a) Identity mappings in deep residual networks. In: ECCV 2016. Lecture notes in computer science, vol 9908, pp 630–645
He K, Zhang X, Ren S, Sun J (2016b) Deep residual learning for image recognition. In: IEEE CVPR, pp 770–778
Howard J (2019) Imagenette and Imagewoof datasets. https://github.com/fastai/imagenette
ISIC (2019) ISIC 2019: skin lesion analysis towards melanoma detection. https://challenge.isic-archive.com
Kingma DP, Welling M (2014) Auto-encoding variational Bayes. In: 2nd International conference on learning representations, ICLR 2014. arXiv:1312.6114
Klabjan D, Harmon M (2019) Activation ensembles for deep neural networks. In: 2019 IEEE international conference on big data (big data), pp 206–214. https://doi.org/10.1109/BigData47090.2019.9006069
Krizhevsky A, Nair V, Hinton G (2009) CIFAR-10 and CIFAR-100 datasets. https://www.cs.toronto.edu/~kriz/cifar.html
Larkum ME (2022) Are dendrites conceptually useful? Neuroscience 489:4–14
Li H, Ouyang W, Wang X (2016) Multi-bias non-linear activation in deep neural networks. In: ICML 2016. JMLR workshop and conference proceedings, vol 48, pp 221–229
Liu Z, Mao H, Wu C-Y, Feichtenhofer C, Darrell T, Xie S (2022) A ConvNet for the 2020s. In: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp 11966–11976. https://doi.org/10.1109/CVPR52688.2022.01167
Maas AL, Hannun AY, Ng AY (2013) Rectifier nonlinearities improve neural network acoustic models. In: Proceedings of the 30th international conference on machine learning, ICML 2013, vol 30, p 3
Magee JC (2000) Dendritic integration of excitatory synaptic input. Nat Rev Neurosci 1(3):181–190
Miani M, Parton M, Romito M (2024) Curious explorer: a provable exploration strategy in policy learning. IEEE Trans Pattern Anal Mach Intell 46(12):11422–11431
Morandin F, Amato G, Fantozzi M, Gini R, Metta C, Parton M (2020) SAI: a sensible artificial intelligence that plays with handicap and targets high scores in 9×9 Go. In: ECAI 2020, vol 325, pp 403–410. https://doi.org/10.3233/FAIA200119
Morandin F, Amato G, Fantozzi M, Gini R, Metta C, Parton M (2021) SAI: a sensible artificial intelligence that plays with handicap and targets high scores in 9×9 Go (extended version). In: AAAI21-RLG workshop. arXiv:1905.10863
Morandin F, Amato G, Gini R, Metta C, Parton M, Pascutto G (2019) SAI: a sensible artificial intelligence that plays Go. In: IJCNN, pp 1–8
Park S, Yun C, Lee J, Shin J (2021) Minimum width for universal approximation. In: International conference on learning representations. https://openreview.net/forum?id=O-XJwyoIF-k
Pasqualini L, Parton M, Morandin F, Amato G, Gini R, Metta C, Fantozzi M, Marchetti A (2022) Score vs. winrate in score-based games: which reward for reinforcement learning? In: ICMLA, pp 573–578
Poirazi P, Papoutsi A (2020) Illuminating dendritic function with computational models. Nat Rev Neurosci 21(6):303–321
Sandler M, Howard AG, Zhu M, Zhmoginov A, Chen L (2018) Mobilenetv2: inverted residuals and linear bottlenecks. In: 2018 IEEE conference on computer vision and pattern recognition, CVPR 2018, pp 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
Schwartz R, Dodge J, Smith NA, Etzioni O (2020) Green AI. Commun ACM 63(12):54–63
SGEMM (2018) SGEMM GPU kernel performance. https://archive.ics.uci.edu/ml/datasets/SGEMM+GPU+kernel+performance. Accessed 03 May 2023
Shang W, Sohn K, Almeida D, Lee H (2016) Understanding and improving convolutional neural networks via concatenated rectified linear units. In: ICML 2016. Proceedings of machine learning research, vol 48, pp 2217–2225
Silver D, Huang A, Maddison CJ et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489
Sinha M, Narayanan R (2022) Active dendrites and local field potentials: biophysical mechanisms and computational explorations. Neuroscience 489:111–142
Tan M, Le QV (2019) EfficientNet: rethinking model scaling for convolutional neural networks. In: ICML 2019. Proceedings of machine learning research, vol 97, pp 6105–6114
Vaswani A, Shazeer N, Parmar N et al (2017) Attention is all you need. In: Proceedings of the 31st international conference on neural information processing systems. NIPS’17, pp 6000–6010
Wu DJ (2020) Accelerating self-play learning in go. In: AAAI20-RLG workshop. arXiv:1902.10565