High performance "non-local" generic face reconstruction model using the lightweight Speckle-Transformer (SpT) UNet
Yangyundou Wang 王杨云逗 ¹ ², Hao Wang 王昊 ³, Min Gu 顾敏 ¹ ²
¹ Institute of Photonic Chips, University of Shanghai for Science and Technology, Shanghai 200093, China
² Centre for Artificial-Intelligence Nanophotonics, School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
³ School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
Opto-Electronic Advances, 2023-02-25
Abstract
Significant progress has been made in computational imaging (CI), where deep convolutional neural networks (CNNs) have demonstrated that sparse speckle patterns can be reconstructed. However, because the convolutional operator has a limited "local" kernel size, the performance of CNNs is limited for spatially dense patterns such as generic face images.
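As a rough illustration of the "local" versus "non-local" contrast described above (this is a minimal sketch, not the authors' SpT UNet), the PyTorch snippet below compares a 3×3 convolution, whose output pixels see only a small neighbourhood, with a self-attention layer, in which every spatial position attends to every other position. The channel count and feature-map size are arbitrary placeholders.

```python
# Minimal sketch (not the authors' architecture): contrasting the "local"
# receptive field of a convolution with the "non-local" mixing of
# self-attention. Feature-map dimensions are illustrative placeholders.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)          # (batch, channels, height, width)

# A 3x3 convolution: each output pixel only sees its 3x3 neighbourhood.
local = nn.Conv2d(64, 64, kernel_size=3, padding=1)
y_local = local(x)

# Self-attention over all spatial positions: each token attends to every
# other token, giving a global ("non-local") receptive field.
tokens = x.flatten(2).transpose(1, 2)   # (batch, 32*32 tokens, 64 channels)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
y_nonlocal, _ = attn(tokens, tokens, tokens)

print(y_local.shape, y_nonlocal.shape)  # (1, 64, 32, 32) and (1, 1024, 64)
```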
Here, we propose a "non-local" model, termed the Speckle-Transformer (SpT) UNet, for speckle feature extraction of generic face images. Notably, the lightweight SpT UNet achieves high efficiency and strong comparative performance, with the Pearson correlation coefficient (PCC) and the structural similarity measure (SSIM) exceeding 0.989 and 0.950, respectively.
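The reported figures of merit are standard image-comparison metrics. As a hedged sketch (not the paper's evaluation pipeline), the snippet below shows one plausible way to compute PCC and SSIM between a ground-truth face image and its reconstruction using SciPy and scikit-image; the arrays are random placeholders.

```python
# Illustrative computation of the two reported metrics, assuming the
# ground-truth and reconstructed images are 2-D NumPy arrays in [0, 1].
# Variable names and data are placeholders, not the authors' pipeline.
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))                    # stand-in image
reconstruction = np.clip(
    ground_truth + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0
)

# Pearson correlation coefficient (PCC) over flattened pixel values.
pcc, _ = pearsonr(ground_truth.ravel(), reconstruction.ravel())

# Structural similarity (SSIM) computed on the 2-D images.
ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)

print(f"PCC = {pcc:.3f}, SSIM = {ssim:.3f}")
```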
Multiplexed stimulated emission depletion nanoscopy (mSTED) for 5-color live-cell long-term imaging of organelle interactome
Yuran Huang, Zhimin Zhang, Wenli Tao, Yunfei Wei, Liang Xu, Wenwen Gong, Jiaqiang Zhou, Liangcai Cao, Yong Liu, Yubing Han, Cuifang Kuang, Xu Liu
Opto-Electronic Advances, 2024-07-05