Journal contents: FMCW Radar and Signal Processing

    A fully-integrated Doppler-assisted FMCW radar for indoor localization and vital sign sensing
    Yuqin ZHANG, Zitong ZHANG, Runxi ZHANG
    J* E* C* N* U* N* S*    2026, 2026 (2): 128-138.   DOI: 10.3969/j.issn.1000-5641.2026.02.012

    This paper presents a Doppler-assisted frequency-modulated continuous-wave (FMCW) radar that combines the precise range resolution capability of FMCW with the high sensitivity of Doppler radar, enabling versatile performance for indoor applications. A comprehensive analysis of low-frequency noise contributions from key receiver blocks, including the low-noise amplifier (LNA), mixer, local oscillator (LO) buffer, and analog baseband (ABB) circuits, is conducted. An “RF+LO+BB” joint noise figure (NF) improvement method is proposed to effectively suppress the low-frequency noise. To reduce frequency modulation (FM) error in a charge-pump-based fractional-N phase-locked loop (PLL), a nested-PLL architecture with a co-optimized loop parameter selection method is employed, resulting in significantly improved chirp linearity. Fabricated in a 55 nm CMOS technology, the proposed Doppler-assisted FMCW radar achieves NFs of 32 dB and 12 dB at 10 Hz and 1 kHz, respectively, and a chirp linearity of 0.0039% over a 3.52 GHz chirp bandwidth (BW), resulting in a maximum detection range of 19.41 m and a range resolution of 4.7 cm. The radar occupies a die area of 12.7 mm2 and consumes 594 mW in FMCW mode and 432 mW in Doppler mode under a 3.3 V supply.
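    As a sanity check on the reported figures, the ideal FMCW range resolution follows directly from the chirp bandwidth via ΔR = c/(2B). A minimal Python sketch using the abstract's 3.52 GHz bandwidth; the measured 4.7 cm is slightly coarser than this ideal value, as expected once windowing and residual chirp nonlinearity are accounted for:

```python
# Ideal FMCW range resolution from chirp bandwidth: dR = c / (2 * B).
c = 3e8          # speed of light, m/s
B = 3.52e9       # chirp bandwidth reported in the abstract, Hz

range_resolution = c / (2 * B)   # ~0.0426 m
print(f"ideal range resolution: {range_resolution * 100:.2f} cm")
```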

    Analog-baseband circuit with large dynamic range and fine-grained gain control for 77-GHz millimeter-wave radar systems
    Zihao REN, Cong ZHANG, Yang YANG, Chunqi SHI
    J* E* C* N* U* N* S*    2026, 2026 (2): 139-150.   DOI: 10.3969/j.issn.1000-5641.2026.02.013

    In this study, an analog-baseband (ABB) circuit for 77 GHz automotive radar transceivers, which features a large dynamic range and fine-grained gain control, is proposed and implemented. To accommodate detection ranges from short (15 m) to long (250 m) distances, the circuit employs a Butterworth filter supporting a signal bandwidth of 400 kHz to 20 MHz with a continuously tunable gain of 10 to 57 dB. A precision gain control strategy is proposed, which enables 2.5 dB/step gain adjustments to precisely maintain the output amplitude within 460–740 mV. A feedback-based direct current (DC) offset cancellation loop effectively suppresses residual DC offsets. The hybrid digital-analog feedback automatic gain control ensures a settling time of less than 160 μs during rapid signal amplitude variations. At the output stage, a highly linear source-follower structure is implemented to minimize signal distortion while enhancing the ABB’s driving capability. Measured results indicate that under a 2.5 V power supply, the analog baseband affords a wide range of automatic gain tuning from 10 to 57 dB across the full operating bandwidth (400 kHz–20 MHz). At the maximum gain, the circuit exhibits noise figures of 42.4 and 33.1 dB at 400 kHz and 20 MHz, respectively.
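    The 2.5 dB/step amplitude-holding behaviour described above can be sketched as a simple control rule. The function and constant names below are hypothetical, not taken from the paper; this only illustrates the policy of nudging the gain one step whenever the output amplitude leaves the 460–740 mV window, clamped to the 10–57 dB gain range:

```python
# Hypothetical sketch of the fine-grained AGC policy described above.
GAIN_MIN, GAIN_MAX, STEP = 10.0, 57.0, 2.5   # dB
V_LOW, V_HIGH = 0.460, 0.740                 # target output window, volts

def agc_step(gain_db: float, v_out: float) -> float:
    """Return the next gain setting given the measured output amplitude."""
    if v_out < V_LOW:
        gain_db += STEP      # signal too small: boost by one step
    elif v_out > V_HIGH:
        gain_db -= STEP      # signal too large: back off by one step
    return min(max(gain_db, GAIN_MIN), GAIN_MAX)

# Example: a weak signal drives the gain up one step.
print(agc_step(30.0, 0.3))   # 32.5
```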

    Lanczos compression-based FMCW radar target detection scheme
    Ying LIU, Beibei XING, Zhixin YIN, Leilei HUANG, Chunqi SHI
    J* E* C* N* U* N* S*    2026, 2026 (2): 151-163.   DOI: 10.3969/j.issn.1000-5641.2026.02.014

    Frequency-Modulated Continuous-Wave (FMCW) radar has been widely adopted in fields such as autonomous driving and security surveillance due to its high resolution, strong penetration capability, and low power consumption. However, traditional target detection methods based on two-dimensional Fast Fourier Transform (2D-FFT) suffer from high computational complexity and significant latency when processing large-scale intermediate frequency (IF) data, making it challenging to meet millisecond-level real-time requirements. To address these issues, this paper proposes a Lanczos compression-based accelerated Constant False Alarm Rate (CFAR) detection method deployed on an FPGA platform. By employing Krylov subspace projection to approximate principal component vectors, the computational complexity is reduced from O(NM log(NM)) to O(NM). The algorithm first applies one-dimensional CFAR after principal component projection to obtain candidate points, followed by a refined two-dimensional CFAR window for target confirmation, effectively reducing computational load while maintaining detection accuracy. Experimental results on the Xilinx XC7Z020CLG400 FPGA platform demonstrate that the proposed scheme achieves a latency of only 0.36 ms when processing a 256×512 matrix, with an average power consumption of 0.76 W. In both public datasets and simulated multi-target scenarios, the system achieves an average false alarm rate of 0.48% and a miss rate of 2.93%, validating the high precision, low latency, and energy efficiency of the proposed method. This research provides a high-performance, low-power hardware solution for real-time target detection in millimeter-wave radar applications, with broad prospects for engineering applications.
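    For readers unfamiliar with CFAR, a generic one-dimensional cell-averaging CFAR (not the paper's Lanczos-compressed pipeline) can be sketched in a few lines of NumPy; the paper applies a detector of this family to candidate points after principal-component projection:

```python
import numpy as np

def ca_cfar_1d(x, num_train=8, num_guard=2, scale=3.0):
    """Generic 1D cell-averaging CFAR: flag cells exceeding scale times the
    local noise estimate (mean of training cells, excluding guard cells)."""
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = x[i - num_guard - num_train : i - num_guard]
        lag  = x[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = (lead.sum() + lag.sum()) / (2 * num_train)
        hits[i] = x[i] > scale * noise
    return hits

# Toy spectrum: flat noise floor with one strong target bin.
x = np.ones(64)
x[30] = 20.0
print(np.flatnonzero(ca_cfar_1d(x)))   # [30]
```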

    Data-compression method for full-dimension data in FMCW radar systems and its hardware implementation
    Zhixin YIN, Ying LIU, Beibei XING, Leilei HUANG, Runxi ZHANG
    J* E* C* N* U* N* S*    2026, 2026 (2): 164-175.   DOI: 10.3969/j.issn.1000-5641.2026.02.015

    This study proposes a full-dimension data-compression method and its corresponding hardware-implementation scheme for frequency-modulated continuous wave (FMCW) radar processing systems. To accommodate the continually increasing demand for higher measurement accuracy and resolution in FMCW radar, the processing system must handle a rapidly increasing data volume, which places a significant burden on the data-transmission bandwidth and storage resources. Hence, a compression algorithm based on k-th-order exponential Golomb encoding (EGE) is introduced. The algorithm first performs data pre-compression using exponential Golomb encoding to reduce statistical redundancy and then optimizes the compressed output through adaptive significant-bit truncation, thus achieving efficient bit-width alignment. Suitable for full-dimension data formats in FMCW radar systems, the method demonstrates strong adaptability, high compression efficiency, and the ability to support real-time data processing with a hardware-friendly architecture. Experimental results indicate that the proposed algorithm reduces storage consumption by more than 50% while preserving data quality, thus enhancing both processing efficiency and data-transmission capacity while demonstrating considerable potential for practical engineering applications.
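    The exponential Golomb stage of the algorithm is a standard code. A minimal order-k encoder in Python (the paper's hardware version additionally applies adaptive significant-bit truncation, which is not reproduced here):

```python
def exp_golomb_encode(n: int, k: int = 0) -> str:
    """k-th order exponential-Golomb code for a non-negative integer,
    returned as a bit string: add 2**k, then prefix the binary form of
    the sum with enough zeros to make the code self-delimiting."""
    m = n + (1 << k)
    bits = m.bit_length()
    return "0" * (bits - k - 1) + format(m, "b")

# Order-0 codes for the first few integers:
for n in range(4):
    print(n, exp_golomb_encode(n))
# 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100
```

Small values map to short codewords, which is why the scheme pays off on residual-like radar data whose magnitudes cluster near zero.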

    A high-speed adaptive γ filtering hardware implementation scheme for back projection imaging
    Yukun CHENG, Yingjian HAO, Jingqian WANG, Leilei HUANG
    J* E* C* N* U* N* S*    2026, 2026 (2): 176-186.   DOI: 10.3969/j.issn.1000-5641.2026.02.016

    With the increasing complexity of public security, indoor security inspection demands higher imaging precision and real-time performance. Traditional X-ray and millimeter-wave imaging systems exhibit limitations in safety, resolution, and anti-interference capability. Near-field synthetic aperture radar, with its high resolution and non-contact advantages, has emerged as a promising alternative. However, although the back projection algorithm achieves precise focusing, speckle noise significantly degrades image quality, limiting its practical application. To address this issue, this paper proposes and implements a fast filtering strategy and hardware-oriented solution for back projection imaging. At the algorithm level, local statistics are rapidly computed using integral images, combined with adaptive γ modeling to achieve efficient speckle suppression while preserving edge details. At the hardware level, image blocking, multi-unit parallel reuse, and pipelined architecture are employed to accelerate filtering and reduce latency, while neighborhood extension is used to mitigate edge distortion caused by blocking. Experimental results demonstrate that, for a 300×300 synthetic aperture dataset, the filtering time is reduced to approximately 6.67 ms, the equivalent number of looks increases from 5.19 to 11.47, edge structure deviation decreases from 0.19 to 0.13, and peak signal-to-noise ratio reaches 39.27 dB, significantly outperforming traditional Lee or Kuan filters. Hardware implementation results indicate that the proposed architecture achieves advantages in resource utilization and real-time performance, confirming its practicality in efficient and scalable indoor synthetic aperture radar image filtering and providing reliable technical support for next-generation indoor security inspection systems.
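    The integral-image trick that makes the local statistics cheap can be sketched as follows: two summed-area tables, one of the pixel values and one of their squares, yield the mean and variance of any window in O(1) per pixel regardless of window size. This is a generic NumPy illustration, not the paper's hardware datapath:

```python
import numpy as np

def local_mean_var(img, win=3):
    """Local mean and variance in a win x win window via integral images
    (summed-area tables); edge pixels use replicated-edge padding."""
    pad = win // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    # Integral images of the values and of the squared values,
    # each with a leading row/column of zeros.
    s1 = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    s2 = np.pad(p * p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape

    def window_sum(s):
        # Four-corner lookup: sum over each win x win window.
        return (s[win:win + h, win:win + w] - s[:h, win:win + w]
                - s[win:win + h, :w] + s[:h, :w])

    n = win * win
    mean = window_sum(s1) / n
    var = window_sum(s2) / n - mean * mean
    return mean, var

img = np.arange(25.0).reshape(5, 5)
mean, var = local_mean_var(img)
print(mean[2, 2])   # 12.0
```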

    SAR imaging algorithm and hardware implementation based on FFT dimensionality reduction and sub-bin compensation
    Yingjian HAO, Yukun CHENG, Jingqian WANG, Leilei HUANG
    J* E* C* N* U* N* S*    2026, 2026 (2): 187-198.   DOI: 10.3969/j.issn.1000-5641.2026.02.017

    Synthetic aperture radar (SAR) has shown significant promise for high-resolution near-field imaging owing to its unique advantages in applications such as autonomous driving, industrial non-destructive testing, and security screening. However, conventional high-resolution SAR imaging relies on large two-dimensional fast Fourier transforms (FFTs) (e.g., 1024×1024), which results in high computational complexity and substantial memory bandwidth requirements that hinder real-time processing on resource-constrained embedded platforms such as field-programmable gate arrays (FPGA) or systems-on-chip (SoC). To address this challenge, we propose a co-optimized low-complexity millimeter-wave SAR imaging scheme based on an algorithm with designated hardware. First, the size of the FFT in both the range and azimuth dimensions is reduced from 1024 to 512, which reduces the computational load significantly. Subsequently, zero-padding with center alignment is applied to the results of matched filtering in the frequency domain to achieve multiplication-free upsampling. Finally, a three-point parabolic sub-bin interpolation technique is introduced to compensate for grid-mismatch errors caused by dimensionality reduction. To validate the effectiveness of the proposed method, a complete near-field SAR data acquisition system based on the TI AWR1843 millimeter-wave radar chip was developed. This system consists of radar control, mechanical scanning, data acquisition, and transmission modules. The imaging experiments were conducted on real metallic targets. 
The experimental results demonstrate that when implemented on an FPGA, the proposed approach reduces DSP48 resource utilization by 55.9% and cuts the required double data rate (DDR) bandwidth by 50% compared with the baseline full-resolution method using 1024×1024 FFT, while maintaining highly consistent visual quality and structural similarity on the same pixel grid, with peak sidelobe ratio (PSLR) degradation of less than 0.25 dB and a structural similarity index (SSIM) of 0.96. This work provides a practical engineering solution for high-performance millimeter-wave SAR imaging on resource-constrained platforms.
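    The three-point parabolic sub-bin interpolation mentioned above is a standard peak-refinement step: fitting a parabola through the peak bin and its two neighbours gives a closed-form sub-bin offset, recovering precision lost to the coarser FFT grid. A minimal sketch:

```python
def parabolic_peak(y_m1, y_0, y_p1):
    """Three-point parabolic interpolation: given the peak bin value and
    its two neighbours, return the sub-bin offset in (-0.5, 0.5) of the
    true peak relative to the centre bin."""
    return 0.5 * (y_m1 - y_p1) / (y_m1 - 2.0 * y_0 + y_p1)

# Toy check: sample a parabola whose apex sits 0.3 bins right of centre;
# the formula recovers the 0.3-bin offset.
apex = 0.3
y = [-(x - apex) ** 2 for x in (-1, 0, 1)]
print(parabolic_peak(*y))
```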

    Cross-layer design and multi-scenario optimization methods for CMOS high-energy-efficiency edge computing chips
    Xu WANG, Ke CHEN, Chenghua WANG, Weiqiang LIU
    J* E* C* N* U* N* S*    2026, 2026 (2): 199-213.   DOI: 10.3969/j.issn.1000-5641.2026.02.018

    As integrated circuit technologies progressively enter the post-Moore era, performance and energy-efficiency improvements achieved solely through transistor scaling have become increasingly difficult to sustain. Energy efficiency has thus emerged as a critical factor limiting further performance scaling and application deployment. In particular, for emerging application scenarios represented by edge computing, chip design is subject to more stringent and diverse constraints due to tight power budgets, area costs, and real-time requirements. Under these conditions, the optimization focus of advanced Complementary Metal-Oxide-Semiconductor (CMOS) chip design is shifting from process-centric scaling toward a design-driven paradigm that emphasizes cross-layer coordinated optimization across the circuit, architecture, and system levels. This paper presents a systematic and application-oriented review of key techniques and methodological frameworks for CMOS high-energy-efficiency chip design targeting edge computing. First, from the perspective of the post-Moore technological context, the intrinsic causes of energy-efficiency bottlenecks and their manifestations under low-power constraints are analyzed. Subsequently, following a cross-layer organizational hierarchy spanning the circuit level, the architectural level, and emerging computing paradigms, representative design methodologies enabling high-energy-efficiency edge computing are comprehensively reviewed, with emphasis on their underlying energy-efficiency mechanisms and design trade-offs. Building on this foundation, multiple representative edge application scenarios—including artificial intelligence inference, edge intelligence and the Internet of Things, as well as communication and signal processing—are examined to elucidate how application-specific constraints give rise to distinct energy-efficiency bottlenecks and corresponding optimization strategies. 
Finally, the major challenges facing CMOS high-energy-efficiency chip design for edge computing are summarized, and future research trends are discussed, with the aim of providing cross-layer design insights and systematic methodological references for high-energy-efficiency edge computing chip design under complex post-Moore application constraints.
