Kehtarnavaz, Nasser


Nasser Kehtarnavaz is a professor in the Department of Electrical Engineering and was made an Erik Jonsson Distinguished Professor in 2017. He also serves as head of the Signal and Image Processing (SIP) Lab. His research interests include:

  • Signal and image processing
  • Real-time signal and image processing
  • Real-time embedded processing
  • Biomedical image analysis
  • Pattern recognition
Dr. Kehtarnavaz is a Fellow of IEEE and SPIE and is a licensed professional engineer.



Recent Submissions

  • Item
    A Real-Time Smartphone App for Unsupervised Noise Classification In Realistic Audio Environments
    (IEEE, 2019-03-07) Alamdari, Nasim; Kehtarnavaz, Nasser
    This paper presents a real-time unsupervised noise classifier smartphone app which is designed to operate in realistic audio environments. This app addresses the two limitations of a previously developed smartphone app for unsupervised noise classification. A voice activity detector is added to separate speech frames from noise frames and thus to lower misclassifications when operating in realistic audio environments. In addition, buffers are added to allow a stable operation of the noise classifier in the field. The unsupervised noise classification is achieved by fusing the decisions of two adaptive resonance theory unsupervised classifiers running in parallel. One classifier operates on subband features and the other operates on mel-frequency spectral coefficients. The results of field testing indicate the effectiveness of this unsupervised noise classifier app when used in realistic audio environments.
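The gating-and-fusion structure described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the energy-based VAD, the confidence-based fusion rule, and all names here are illustrative assumptions standing in for the ART classifiers and the actual app logic.

```python
# Illustrative sketch: a VAD gates which frames reach the classifiers,
# and two classifier decisions are fused into one label.

def energy_vad(frame, threshold=0.01):
    """Return True if the frame likely contains speech (high energy).
    A stand-in for the app's actual voice activity detector."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

def fuse_decisions(label_a, conf_a, label_b, conf_b):
    """Fuse two classifier decisions: if they agree, keep the label;
    otherwise take the more confident one (a simple fusion assumption)."""
    if label_a == label_b:
        return label_a
    return label_a if conf_a >= conf_b else label_b

# Only noise-only frames reach the classifiers; speech frames are skipped.
frames = [[0.001] * 160, [0.5] * 160]          # quiet frame, loud frame
noise_frames = [f for f in frames if not energy_vad(f)]
fused = fuse_decisions("babble", 0.7, "traffic", 0.6)
```

The buffering mentioned in the abstract would sit between the VAD and the classifiers, smoothing decisions over several frames before a label is reported.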
  • Item
    Convolutional Autoencoder-Based Multispectral Image Fusion
    (Institute of Electrical and Electronics Engineers Inc.) Azarang, Arian; Manoochehri, Hafez E.; Kehtarnavaz, Nasser
    This paper presents a deep learning-based pansharpening method for fusion of panchromatic and multispectral images in remote sensing applications. This method can be categorized as a component substitution method in which a convolutional autoencoder network is trained to generate original panchromatic images from their spatially degraded versions. Low resolution multispectral images are then fed into the trained convolutional autoencoder network to generate estimated high resolution multispectral images. The fusion is achieved by injecting the detail map of each spectral band into the corresponding estimated high resolution multispectral bands. Full reference and no-reference metrics are computed for the images of three satellite datasets. These measures are compared with the existing fusion methods whose codes are publicly available. The results obtained indicate the effectiveness of the developed deep learning-based method for multispectral image fusion. © 2019 IEEE.
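The detail-injection step at the core of this fusion method can be sketched in a few lines of numpy. The gain and the stand-in for the autoencoder output below are illustrative assumptions, not the paper's trained network or its actual injection weights.

```python
# Minimal sketch of detail injection for pansharpening: the detail map
# (PAN minus its estimate) is added to each low-resolution MS band.
import numpy as np

def inject_details(ms_bands, pan, pan_estimate, gain=1.0):
    """Add the PAN detail map to each multispectral band.
    `pan_estimate` stands in for the convolutional autoencoder output."""
    detail = pan - pan_estimate          # high-frequency spatial detail
    return [band + gain * detail for band in ms_bands]

pan = np.ones((4, 4))
pan_est = np.zeros((4, 4))               # illustrative autoencoder stand-in
ms = [np.full((4, 4), 0.5), np.full((4, 4), 0.2)]
fused = inject_details(ms, pan, pan_est)
```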
  • Item
    A Computationally Efficient Pipeline for 3D Point Cloud Reconstruction from Video Sequences
    (SPIE) Chang, Chih-Hsiang; Kehtarnavaz, Nasser
    This paper presents a computationally efficient pipeline to achieve 3D point cloud reconstruction from video sequences. This pipeline involves a key frame selection step to improve the computational efficiency by generating reliable depth information from pair-wise frames. An outlier removal step is then applied in order to further improve the computational efficiency. The reconstruction is achieved based on a new absolute camera pose recovery approach in a computationally efficient manner. This pipeline is devised for both sparse and dense 3D reconstruction. The results obtained from video sequences show that the introduced pipeline achieves higher computational efficiency and lower re-projection errors than the existing pipelines.
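The key-frame selection idea can be sketched abstractly: a new key frame is declared only when feature overlap with the most recent key frame drops below a threshold, so depth is computed from sufficiently separated frame pairs. The overlap values, threshold, and selection rule below are hypothetical simplifications, not the paper's criterion.

```python
# Hypothetical key-frame selection sketch for a reconstruction pipeline.

def select_key_frames(overlaps, threshold=0.6):
    """overlaps[i]: fraction of tracked features frame i shares with the
    most recent key frame (assumed given). Frame 0 is always a key frame."""
    keys = [0]
    for i, ov in enumerate(overlaps[1:], start=1):
        if ov < threshold:               # baseline is wide enough: keep frame
            keys.append(i)
    return keys

keys = select_key_frames([1.0, 0.9, 0.7, 0.5, 0.8, 0.4])
```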
  • Item
    A Convolutional Neural Network-Based Sensor Fusion System for Monitoring Transition Movements in Healthcare Applications
    (IEEE Computer Society) Dawar, Neha; Kehtarnavaz, Nasser
    This paper presents a convolutional neural network-based sensor fusion system to monitor six transition movements as well as falls in healthcare applications by simultaneously using a depth camera and a wearable inertial sensor. Weighted depth motion map images and inertial signal images are fed as inputs into two convolutional neural networks running in parallel, one for each sensing modality. Detection and thus monitoring of the transition movements and falls are achieved by fusing the movement scores generated by the two convolutional neural networks. The results obtained for both subject-generic and subject-specific testing indicate the effectiveness of this sensor fusion system for monitoring these transition movements and falls. © 2018 IEEE.
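The fusion of movement scores from the two networks can be illustrated with a small numpy sketch. Weighted averaging is one common score-fusion choice used here for illustration; the abstract does not specify the exact fusion rule, so this should be read as an assumption.

```python
# Sketch of score-level fusion of two CNN outputs (one per sensing modality).
import numpy as np

def fuse_scores(depth_scores, inertial_scores, w=0.5):
    """Weighted average of per-class scores from the two networks,
    returning the winning class index and the fused score vector."""
    fused = w * np.asarray(depth_scores) + (1 - w) * np.asarray(inertial_scores)
    return int(np.argmax(fused)), fused

# Example: the depth CNN favors class 2, the inertial CNN favors class 1.
label, fused = fuse_scores([0.1, 0.3, 0.6], [0.2, 0.5, 0.3])
```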
  • Item
    A Convolutional Neural Network Smartphone App for Real-Time Voice Activity Detection
    (IEEE - Institute of Electrical and Electronics Engineers Inc.) Sehgal, Abhishek; Kehtarnavaz, Nasser
    This paper presents a smartphone app that performs real-time voice activity detection based on a convolutional neural network. Real-time implementation issues are discussed, showing how the slow inference time associated with convolutional neural networks is addressed. The developed smartphone app is meant to act as a switch for noise reduction in the signal processing pipelines of hearing devices, enabling noise estimation or classification to be conducted in noise-only parts of noisy speech signals. The developed smartphone app is compared with a previously developed voice activity detection app as well as with two highly cited voice activity detection algorithms. The experimental results indicate that the developed app using a convolutional neural network outperforms the previously developed smartphone app.
  • Item
    Multi-Temporal Depth Motion Maps-Based Local Binary Patterns for 3-D Human Action Recognition
    (IEEE - Institute of Electrical and Electronics Engineers Inc.) Chen, Chen; Liu, Mengyuan; Liu, Hong; Zhang, Baochang; Han, Jungong; Kehtarnavaz, Nasser
    This paper presents a local spatio-temporal descriptor for action recognition from depth video sequences, which is capable of distinguishing similar actions as well as coping with different speeds of actions. This descriptor is based on three processing stages. In the first stage, the shape and motion cues are captured from a weighted depth sequence by temporally overlapped depth segments, leading to three improved depth motion maps (DMMs) compared with the previously introduced DMMs. In the second stage, the improved DMMs are partitioned into dense patches, from which the local binary patterns histogram features are extracted to characterize local rotation invariant texture information. In the final stage, a Fisher kernel is used for generating a compact feature representation, which is then combined with a kernel-based extreme learning machine classifier. The developed solution is applied to five public domain data sets and is extensively evaluated. The results obtained demonstrate the effectiveness of this solution as compared with the existing approaches.
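A basic depth motion map, the starting point the paper improves upon, can be sketched as accumulated thresholded frame differences. This toy version omits the temporal weighting and overlapped segments that distinguish the paper's improved DMMs; the threshold and data here are illustrative.

```python
# Sketch of a basic depth motion map (DMM): absolute frame-to-frame
# differences of a projected depth sequence are thresholded and summed.
import numpy as np

def depth_motion_map(frames, threshold=0.1):
    """Accumulate thresholded absolute differences between consecutive
    depth frames (all frames assumed to share one shape)."""
    dmm = np.zeros_like(frames[0], dtype=float)
    for prev, cur in zip(frames[:-1], frames[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float))
        dmm += (diff > threshold) * diff     # keep only significant motion
    return dmm

frames = [np.zeros((2, 2)), np.eye(2), np.eye(2)]  # motion, then stillness
dmm = depth_motion_map(frames)
```

In the paper's pipeline, local binary pattern histograms would then be extracted from dense patches of maps like this one.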
  • Item
    Optimization Method to Reduce Blocking Artifacts in JPEG Images
    (SPIE) Pourreza-Shahri, Reza; Yousefi, S.; Kehtarnavaz, Nasser
    This paper presents an optimization method to reduce blocking artifacts in JPEG images by utilizing the image gradient information. A closed-form solution is derived for the optimization method. To address the computational feasibility aspect of the large matrices involved in the closed-form solution, a sliding window approach is devised. The performance of the developed method is compared with several blocking artifacts reduction methods in the literature and also with the deblocking filter deployed in high efficiency video coding by examining the three measures of peak signal-to-noise ratio, generalized block-edge impairment metric (MGBIM), and structural similarity. The comparison results indicate the effectiveness of the introduced method in particular for low bit-rate JPEG images.
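The sliding-window device for keeping the closed-form solution tractable can be illustrated generically: process overlapping windows independently and average the contributions where windows overlap. The window operator below is a trivial stand-in, not the paper's closed-form deblocking solution.

```python
# Hypothetical sliding-window sketch: apply an operator per window and
# average results in overlapping regions, avoiding one huge global solve.
import numpy as np

def sliding_window_process(image, win=8, step=4, op=lambda w: w.mean()):
    """Apply `op` to each win x win window; average where windows overlap."""
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros_like(image, dtype=float)
    h, w_ = image.shape
    for r in range(0, h - win + 1, step):
        for c in range(0, w_ - win + 1, step):
            block = image[r:r + win, c:c + win]
            out[r:r + win, c:c + win] += op(block)
            weight[r:r + win, c:c + win] += 1.0
    return out / np.maximum(weight, 1.0)   # normalize overlapped pixels

img = np.ones((16, 16))
res = sliding_window_process(img)
```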
  • Item
    Comparison of Two Real-Time Hand Gesture Recognition Systems Involving Stereo Cameras, Depth Camera, and Inertial Sensor
    Liu, Kui; Kehtarnavaz, Nasser; Carlsohn, Matthias
    This paper presents a comparison of two real-time hand gesture recognition systems. One system utilizes a binocular stereo camera set-up while the other system utilizes a combination of a depth camera and an inertial sensor. The latter system is a dual-modality system as it utilizes two different types of sensors. These systems have been previously developed in the Signal and Image Processing Laboratory at the University of Texas at Dallas and the details of the algorithms deployed in these systems are reported in previous papers. In this paper, a comparison is carried out between these two real-time systems in order to examine which system performs better for the same set of hand gestures under realistic conditions.

Works in Treasures @ UT Dallas are made available exclusively for educational purposes such as research or instruction. Literary rights, including copyright for published works held by the creator(s) or their heirs, or other third parties may apply. All rights are reserved unless otherwise indicated by the copyright owner(s).