Real-time scattering imaging technology is of great importance for transportation in extreme weather, scientific research in the deep ocean, emergency rescue in heavy smoke, and so on. Among existing volumetric scattering imaging methods, time-of-flight (ToF) methods based on a confocal imaging architecture, which isolate or model scattered photons, offer the best visual reconstruction quality. However, these methods rely on long acquisition times to capture more effective photons and thus cannot handle real-time imaging tasks. In this article, aiming to provide both high-speed and high-quality scattering reconstruction, a dual optical coupling scattering transmission model is proposed to accurately describe the spatial and temporal propagation of scattered photons in a non-confocal imaging architecture. Then, a non-confocal boundary migration model (NBMM) is designed to establish the mapping between the scattered measurement and the object information. In addition, exploiting a special characteristic of the temporal transfer matrix, a depth reconstruction method based on re-focusing is developed to recover the 3D structure of the object. Finally, a non-confocal imaging system is built to capture photons for all pixels simultaneously and to verify the effectiveness of the proposed method. The experimental results show that the proposed method can recover 3D objects located at a one-way scattering length of 22.2 transport mean free paths (TMFPs), corresponding to a round-trip scattering length of 44.4 TMFPs, which is 6.9 times more than typical non-confocal methods. It operates 600 times faster than confocal methods and requires only 100 ms of exposure time, which is very beneficial to a variety of real-time scattering applications.
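The re-focusing based depth recovery mentioned above can be illustrated with a minimal sketch: once a 3D intensity volume has been reconstructed, a per-pixel depth map can be read off by locating the axial position of peak response for each lateral pixel. The function name `depth_from_volume` and its arguments are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def depth_from_volume(vol, z_axis):
    """Hypothetical re-focusing style depth extraction.

    vol    : (nx, ny, nz) reconstructed intensity volume
    z_axis : (nz,) axial sample positions [m]

    For each lateral pixel, pick the axial position where the
    reconstructed intensity peaks.
    """
    idx = np.argmax(vol, axis=2)   # (nx, ny) index of the peak response
    return z_axis[idx]             # (nx, ny) depth map in meters
```

Pixels with no significant response would in practice be masked by an intensity threshold before the argmax; that step is omitted here for brevity.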
Non-Confocal Boundary Migration Model
Schematic diagram of non-confocal light propagation and pipelines of forward image formation and inverse reconstruction. (a) A pulsed laser emits light into the scattering scenario. The photons are forward-scattered to the object \(O\) and backscattered to the detector. The detector (SPAD) captures the scattered photons to generate the measurement \(Y\). The forward and backward scattered fields are denoted as \(\phi_i(x,y,z)\) and \(\phi_e(x,y,z)\), respectively. The function \(g(x,y,z)\) describes the object’s surface characteristics. (b) The non-confocal forward image formation pipeline from object, through \(\Phi_e'(k_x,k_y,k_z)\) and \(\bar\Phi_e(k_x,k_y,f)\), to measurement. (c) The pipeline of inverse reconstruction from measurement, through \(\bar\Phi_e(k_x,k_y,f)\) and \(\Phi_e'(k_x,k_y,k_z)\), to object.
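The inverse pipeline in (c), mapping the measured spectrum \(\bar\Phi_e(k_x,k_y,f)\) onto the object spectrum \(\Phi_e'(k_x,k_y,k_z)\), is structurally similar to frequency-wavenumber (Stolt) migration. The following is a minimal sketch of that generic f-k migration step, not the paper's NBMM itself: the function name `fk_migration`, the round-trip dispersion relation \(f = (c/2)\sqrt{k_x^2 + k_y^2 + k_z^2}\), and all parameters are assumptions for illustration.

```python
import numpy as np

def fk_migration(y, dx, dt, c=3e8):
    """Sketch of a Stolt-style f-k migration (assumed, generic form).

    y  : (nx, ny, nt) time-resolved measurement
    dx : lateral sampling pitch [m]
    dt : temporal bin width [s]
    c  : propagation speed [m/s]
    """
    nx, ny, nt = y.shape
    # Transform (x, y, t) -> (kx, ky, f)
    Y = np.fft.fftn(y, axes=(0, 1))
    Y = np.fft.rfft(Y, axis=2)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    f = 2 * np.pi * np.fft.rfftfreq(nt, d=dt)   # non-negative, increasing
    kz = f / (c / 2.0)                           # target axial wavenumbers

    vol_f = np.zeros_like(Y)
    for i in range(nx):
        for j in range(ny):
            # Assumed round-trip dispersion: f = (c/2)*sqrt(kx^2+ky^2+kz^2)
            f_map = (c / 2.0) * np.sqrt(kx[i] ** 2 + ky[j] ** 2 + kz ** 2)
            # Stolt resampling of the temporal spectrum onto the kz grid
            vol_f[i, j, :] = np.interp(f_map, f, Y[i, j, :], left=0, right=0)

    # Inverse transform (kx, ky, kz) -> (x, y, z) gives the volume
    vol = np.fft.irfft(vol_f, n=nt, axis=2)
    vol = np.fft.ifftn(vol, axes=(0, 1))
    return np.abs(vol)
```

The per-pixel `np.interp` loop keeps the mapping explicit; a production implementation would vectorize the Stolt resampling and include the spectral weighting terms that the model derives.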
Results
3D visualization and x-z slices of reconstructed results in a fog environment. (a) Reference of the 3D Lambertian objects used in this experiment. The red and blue lines mark the positions of the x-z slices. The scale bar at the bottom right of each image indicates 4.5 cm. The total depth of the fog chamber and the object depths are marked on the scale below each image. (b) Scattered measurements directly captured by the SPAD array. (c-e) Reconstruction results and x-z slices by time-gating, cross-correlation, and the proposed method, respectively. PSNR is calculated as the quantitative indicator to compare the three algorithms; its values (in dB) are marked below each image in (c-e).
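The PSNR indicator used in the comparison follows the standard definition; a minimal sketch, assuming images normalized to a known peak value:

```python
import numpy as np

def psnr(recon, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reconstruction
    and the reference, assuming both lie in [0, peak]."""
    recon = np.asarray(recon, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mse = np.mean((recon - ref) ** 2)     # mean squared error
    if mse == 0.0:
        return float("inf")               # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a reconstruction offset from the reference by a uniform 0.1 on a unit-peak scale has an MSE of 0.01 and thus a PSNR of 20 dB.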