The University of Georgia (UGA) Sensor Data Science and AI Seminars are monthly online seminars covering interdisciplinary research topics in data science (DS), artificial intelligence (AI), statistics, engineering, biomedical informatics, and public health. We aim to bring together researchers from these fields to discuss exciting DS/AI topics with interdisciplinary applications. If you are interested in speaking at our forum, please contact Prof. Song (wsong@uga.edu) and then sign up via the Speakers signup form.

Upcoming Talks
  • Speaker: Geert Leus, Delft University of Technology, The Netherlands
  • Title: Graph Signal Processing: Distributed Graph Filters
  • Date/Time: 11:00am-12:00pm EST, Friday Jan. 8, 2025 
  • Zoom Link:  https://zoom.us/j/7135472400
  • Abstract: Graph filters, the direct analogues of time-domain filters for signals defined on graphs, are one of the cornerstones of the field of graph signal processing. In this talk, we give an overview of the graph filtering problem. More specifically, we look at the families of finite impulse response (FIR) and infinite impulse response (IIR) graph filters and show how they can be implemented in a distributed manner. To further limit the communication and computational complexity, we also generalize the state-of-the-art distributed graph filters to filters whose weights depend on the nodes sharing information. These so-called edge-variant graph filters yield significant benefits in terms of filter order reduction, thereby leading to considerable communication and complexity savings. The analytical and numerical results presented in this talk illustrate the potential and benefits of this general family of edge-variant graph filters. (A toy sketch of a distributed FIR graph filter follows this entry.)
  • Bio: Geert Leus received the M.Sc. and Ph.D. degrees in Electrical Engineering from the KU Leuven, Belgium, in June 1996 and May 2000, respectively. He is now a Full Professor at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology, The Netherlands. His research interests are in the broad area of signal processing, with a specific focus on wireless communications, array processing, sensor networks, and graph signal processing. Geert Leus received the 2021 EURASIP Individual Technical Achievement Award, a 2025 IEEE Signal Processing Society Best Paper Award, a 2005 IEEE Signal Processing Society Best Paper Award, and a 2002 IEEE Signal Processing Society Young Author Best Paper Award. He is a Fellow of the IEEE and a Fellow of EURASIP. Geert Leus was a Member-at-Large of the Board of Governors of the IEEE Signal Processing Society, the Chair of the IEEE Signal Processing for Communications and Networking Technical Committee, the Chair of the EURASIP Technical Area Committee on Signal Processing for Multisensor Systems, a Member of the IEEE Sensor Array and Multichannel Technical Committee, a Member of the IEEE Big Data Special Interest Group, a Member of the EURASIP Signal Processing for Communications and Networking Special Area Team, the Editor in Chief of the EURASIP Journal on Advances in Signal Processing, and the Editor in Chief of EURASIP Signal Processing. He was also on the Editorial Boards of the IEEE Transactions on Signal Processing, the IEEE Transactions on Wireless Communications, the IEEE Signal Processing Letters, and the EURASIP Journal on Advances in Signal Processing. Currently, he is a Member of the IEEE Signal Processing Theory and Methods Technical Committee and an Associate Editor of Foundations and Trends in Signal Processing.
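To make the distributed implementation concrete, here is a minimal NumPy sketch of an FIR graph filter; the graph, taps, and signal are illustrative choices, not material from the talk. Each application of the shift operator S mixes only neighboring nodes' values, which is what allows the recursion to run distributedly; the edge-variant filters of the talk would, roughly speaking, replace the scalar taps h_k with edge-dependent weights.

```python
import numpy as np

def fir_graph_filter(S, x, h):
    """Apply the FIR graph filter y = sum_k h[k] * (S^k x).

    Multiplying by the shift operator S (e.g., an adjacency or Laplacian
    matrix) only mixes values of neighboring nodes, so each step of the
    recursion is one round of neighbor-to-neighbor exchange; that locality
    is what enables a distributed implementation.
    """
    y = h[0] * x
    z = x.copy()
    for hk in h[1:]:
        z = S @ z              # one local exchange with neighbors
        y = y + hk * z
    return y

# Toy example: a 4-node path graph and a 3-tap filter.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(fir_graph_filter(A, x, h=[0.5, 0.3, 0.2]))
```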

Past Talks
  • Speaker: Pengfei Zhou, Assistant Professor, Department of Informatics and Networked Systems, School of Computing and Information, University of Pittsburgh.
  • Title: Beyond Resolution PPG Sensing for Continuous Blood Pressure Monitoring
  • Date/Time: 11:00am-12:00pm EST, Friday Dec. 13, 2024
  • Zoom Link:   https://zoom.us/j/7135472400?omn=91036124258
  • Abstract: Cuffless blood pressure (BP) monitoring is a critical task in the cardiovascular diseases (CVDs) domain, commonly based on Photoplethysmography (PPG) and Electrocardiogram (ECG) signals, providing insights into cardiac health. While ECG often delivers better BP monitoring performance, the acquisition via straps and patches leads to a poor user experience. In contrast, PPG enables continuous and convenient monitoring but offers less informative references. A potential approach is to convert PPG signals into ECG signals, ensuring both high convenience and optimal accuracy. However, converting PPG into ECG involves a substantial reduction in inherent entropy, necessitating a thorough understanding of the process and specific techniques to guide ECG generation. In this talk, I will introduce a blood pressure monitoring framework that achieves ECG-level performance using the PPG signal alone. A diffusion model is introduced to conduct a selective ECG-targeted generative process conditioned on PPG. Based on our experimental observations, we developed a set of techniques to significantly enhance the model’s ability to generate high-quality ECG signals. Specifically, in the forward process, we employ an adaptive search module to adapt the QRS segment within the ECG waveform. In the reverse process, we propose the scale alignment and frequency alignment modules to better guide the generative process. Extensive experiments conducted on two public datasets and one self-collected dataset demonstrate the superior performance of our proposed framework, offering a groundbreaking perspective for PPG-based continuous blood pressure monitoring. (A toy sketch of conditional diffusion training follows this entry.)
  • Bio: Pengfei Zhou is an Assistant Professor in the Department of Informatics and Networked Systems, School of Computing and Information at the University of Pittsburgh. He leads the Mobile Intelligence and Networking Technology (MINT) research group. Prior to joining Pitt, he was a Research Scientist at ADSC, a Research Fellow at Alibaba-NTU Joint Research Institute (JRI) and a Technical Advisor for Alibaba Group. He founded fayfans Co., Ltd developing AIoT products in 2016. He received the B.E. degree in the Department of Automation from Tsinghua University, and the Ph.D. degree in the School of Computer Science and Engineering from Nanyang Technological University. He is interested in technologies and ideas that help human beings better sense and interpret our physical world so as to develop human-centered applications. Current research spans mobile and networked sensing, mobile intelligence for healthcare, and mobile networks. He won the ACM SenSys Test of Time Award in 2022.
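As background for the generative framework in the abstract, here is a minimal PyTorch sketch of how a conditional denoising diffusion model is trained: noise the target signal with the closed-form forward process, then regress the noise from the noisy signal plus the conditioning signal. The network, dimensions, and schedule are illustrative stand-ins, not the adaptive-search or alignment modules of the talk.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # illustrative noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, eps):
    """Closed-form forward process: x_t = sqrt(abar_t) x0 + sqrt(1-abar_t) eps."""
    ab = alpha_bar[t].view(-1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

# Toy conditional denoiser: predicts eps from (x_t, condition, t). In a
# PPG-to-ECG setting the condition would be the paired PPG segment.
net = torch.nn.Sequential(torch.nn.Linear(2 * 128 + 1, 256),
                          torch.nn.ReLU(),
                          torch.nn.Linear(256, 128))

def training_step(ecg, ppg):
    t = torch.randint(0, T, (ecg.shape[0],))
    eps = torch.randn_like(ecg)
    x_t = q_sample(ecg, t, eps)
    inp = torch.cat([x_t, ppg, t.view(-1, 1).float() / T], dim=1)
    return F.mse_loss(net(inp), eps)               # standard eps-prediction loss

loss = training_step(torch.randn(8, 128), torch.randn(8, 128))
loss.backward()
```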
  • Speaker: Petar M. Djurić, Dept. of ECE, Stony Brook University
  • Title: Advanced Machine Learning for Enhanced Monitoring of Fetal Well-Being During Delivery
  • Date/Time: 11:00am – 12:00pm, Friday Oct. 18, 2024
  • Zoom Link:   https://zoom.us/j/7135472400?omn=91036124258
  • Abstract: Fetal data analysis, largely centered on electronic fetal monitoring, has seen little progress since its introduction in the late 1950s. This stagnation has hindered advancements in perinatal outcomes and contributed to a rise in unnecessary operative deliveries. Meanwhile, the field of machine learning has made significant strides in data-driven inference. However, these advancements have yet to produce a game changer in obstetrics, a field ripe for innovation. The primary objective of our work is to make a breakthrough in computer-based fetal monitoring during active labor by empowering obstetricians with advanced decision-making tools, even when full ‘ground truth’ data is unavailable. We aim to develop an innovative, machine learning-driven real-time index that provides more accurate and timely assessments of fetal well-being, which will give clinicians the critical insights they need to make informed decisions. Our approach is based on Gaussian processes, where we exploit the concept of open set recognition and implement a transparency mechanism to clearly communicate the decision-making process of the ML method to clinical personnel in an easily understandable way. This methodology gives clinicians insight into how different features contributed to the generated estimates. (A toy Gaussian-process sketch follows this entry.)
  • Bio: Petar M. Djurić obtained his B.S. and M.S. degrees in Electrical Engineering from the University of Belgrade and his Ph.D. degree in Electrical Engineering from the University of Rhode Island. Following the completion of his Ph.D., he joined Stony Brook University, where he currently holds the position of SUNY Distinguished Professor and serves as the Savitri Devi Bangaru Professor in Artificial Intelligence. Djurić also held the role of Chair of the Department of Electrical and Computer Engineering from 2016 to 2023. His research has predominantly focused on machine learning and signal and information processing. In 2012, Djurić received the EURASIP Technical Achievement Award, and in 2008 he was appointed Chair of Excellence of Universidad Carlos III de Madrid-Banco de Santander. He has actively participated in various committees of the IEEE Signal Processing Society and served on committees for numerous professional conferences and workshops. He was the founding Editor-in-Chief of the IEEE Transactions on Signal and Information Processing Over Networks. In 2022, he was elected as a foreign member of the Serbian Academy of Engineering Sciences. Furthermore, Djurić holds the distinction of being a Fellow of IEEE, EURASIP, AAIA (Asia-Pacific Artificial Intelligence Association), and AIIA (the Industry Academy of the International Artificial Intelligence Industry Alliance).
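For readers unfamiliar with the Gaussian-process machinery the abstract builds on, here is a minimal NumPy sketch of exact GP regression; the kernel, data, and noise level are illustrative. The point is only that the posterior variance grows on unfamiliar inputs, which is the kind of signal an open-set-aware monitor can surface instead of forcing a confident estimate.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel for 1-D inputs."""
    return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(X, y, Xs, noise=0.1):
    """Exact GP regression posterior mean and variance at test inputs Xs."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs) - v.T @ v)
    return mu, var

X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.1 * np.random.randn(20)
mu, var = gp_posterior(X, y, np.linspace(0, 7, 50))
# Posterior variance grows sharply outside the training range [0, 5]:
# a "this input looks unfamiliar" signal rather than a confident label.
```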
  • Speaker: Hamid Krim, Dept. of ECE, North Carolina State University
  • Title: Deep Structure in Data: A Way to Robust Inference
  • Date/Time: 2:00 – 3:00 pm, Wed. Oct 16, 2024
  • Zoom Link:   https://zoom.us/j/7135472400?omn=97075842705
  • Abstract: Successfully exploiting data for many real-world problems depends on the quality of the extracted information and on its close correlation with a task-characteristic structure. We argue here for an almost unifying and systematic approach to representing data, with theme variations matching the problem at hand, thus adapting the extraction of relevant information to the task. Capturing the structure of information at different scales, by techniques adapted to creatively distill the data, yields models suited to unveiling a solution. Building on classical statistical and non-parametric PCA as well as on canonical basis representations, non-linear properties are invoked to construct decomposition criteria of data, resulting in increasingly complex and informative structures. Since information typically enjoys a limited number of degrees of freedom relative to the ambient space, we propose a lower-rank structure for the information space relative to its embedding space. The resulting self-representativity under a union-of-subspaces (UoS) structure is natural and may be viewed as a piecewise-linear (PWL) approximation of the generally non-linear manifold structure of an information space. We show a sufficient condition for using an L1 optimization to reveal the underlying UoS structure, and further propose a bi-sparsity model (RoSure) as an effective strategy. This structural characterization, albeit powerful for many applications, can be shown to be limited for large-scale data (images) with commonly shared features. We make a case for further refinement by invoking a joint and principled scale-structure atomic characterization, which is demonstrated to improve performance. The resulting Deep Dictionary Learning approach is based on symbiotically formulating a classification problem regularized by a reconstruction problem. A theoretical rationale is also provided to contrast this work with Convolutional Neural Networks, with demonstrably competitive performance. Substantiating examples are provided, and the application and performance of these approaches are shown for a wide range of problems such as video segmentation and object classification. (A toy sparse self-representation sketch follows this entry.)
  • Bio: Hamid Krim received his B.Sc., M.Sc., and Ph.D. degrees in Electrical Engineering from the University of Southern California, the University of Washington, and Northeastern University, respectively. He was a Member of Technical Staff at AT&T Bell Labs, where he conducted R&D in the areas of telephony and digital communication systems/subsystems. Following an NSF postdoctoral fellowship at Foreign Centers of Excellence, LSS/University of Orsay, Paris, France, he joined the Laboratory for Information and Decision Systems, MIT, Cambridge, MA, as a Research Scientist, performing and supervising research. He is presently Professor of Electrical Engineering in the ECE Department, North Carolina State University, Raleigh, leading the Vision, Information and Statistical Signal Theories and Applications Laboratory. His research interests are in statistical signal and image analysis and mathematical modeling, with a keen emphasis on applied problems in classification and recognition using geometric and topological tools. His research has been funded by many federal and industrial agencies, including through an NSF CAREER award. He has served on the IEEE SP editorial board and on the SPTM and Big Data Initiative TCs, as well as an Associate Editor of the IEEE Transactions on Signal and Information Processing over Networks and of the IEEE SP Magazine. He was also one of the 2015-2016 Distinguished Lecturers of the IEEE SP Society.
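As a concrete entry point to the union-of-subspaces idea, here is a minimal sketch of sparse self-representation (the classical sparse-subspace-clustering view); it is a simplification chosen for illustration, not the bi-sparsity (RoSure) model of the talk, and all values are toy data.

```python
import numpy as np
from sklearn.linear_model import Lasso

def self_representation(X, lam=0.05):
    """Sparse self-representation X ~ X C with zero diagonal.

    Under a union-of-subspaces model, each column of X is (approximately) a
    sparse combination of other columns from the *same* subspace, so the
    support of C reveals the block structure; the L1 penalty enforces the
    sparsity that makes this recovery possible.
    """
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        reg = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        reg.fit(X[:, idx], X[:, j])
        C[idx, j] = reg.coef_
    return C

rng = np.random.default_rng(0)
B1, B2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))   # two 2-D subspaces
X = np.hstack([B1 @ rng.normal(size=(2, 6)), B2 @ rng.normal(size=(2, 6))])
C = self_representation(X)
# Nonzeros of C tend to concentrate in two diagonal blocks
# (columns from the same subspace represent each other).
```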
  • Speaker: Gonzalo Mateos, Dept. of ECE, University of Rochester
  • Title: Learning with Graphs
  • Date/Time: 11:00 – 12:00 pm (EST), Oct. 25, 2024
  • Zoom Link:  TBD.
  • Abstract: This talk is broadly about learning from network data, which arises for instance in applications involving online social media, recommendation systems, transportation, and network neuroscience. By fruitfully exploiting the inductive biases in relational data, graph neural networks (GNNs) have attained unprecedented performance in various machine learning tasks, including node/graph classification, link prediction, and graph generation. To provide additional motivation, I will start with a user-friendly and didactic introduction to graph signal processing. The goal is to establish the foundations and basic concepts that will be useful to introduce GNNs in an intuitive way. After discussing architectures and key properties that make GNNs the model of choice when it comes to learning from relational data, I will highlight several success stories of GNN-based learning for Amazon’s recommendation system, Google Maps navigation, antibiotic discovery, and our own work on explainable brain age prediction. (A toy graph-convolution sketch follows this entry.)
  • Bio: Gonzalo Mateos earned the B.Sc. degree from Universidad de la Republica, Uruguay, in 2005, and the M.Sc. and Ph.D. degrees from the University of Minnesota, Twin Cities, in 2009 and 2011, all in electrical engineering. He joined the University of Rochester, Rochester, NY, in 2014, where he is currently an Associate Professor with the Department of Electrical and Computer Engineering, the Department of Computer Science (secondary appointment), as well as the Associate Director for Research at the University of Rochester’s Goergen Institute for Data Science. He also was the Asaro Biggar Family Fellow in Data Science (2020-23). During the 2013 academic year, he was a visiting scholar with the Computer Science Department at Carnegie Mellon University. From 2004 to 2006, he worked as a Systems Engineer at Asea Brown Boveri (ABB), Uruguay. His research interests lie in the areas of statistical learning from complex data, network science, decentralized optimization, and graph signal processing, with applications in brain connectivity, causal discovery, wireless network monitoring, power grid analytics, and information diffusion.
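To make the message-passing intuition concrete, here is a minimal NumPy sketch of one graph-convolution (GCN-style) layer; the graph, features, and weights are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style layer: H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)     # neighborhood mixing + ReLU

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = np.random.randn(3, 4)                      # node features
W = np.random.randn(4, 2)                      # learnable weights
print(gcn_layer(A, H, W))                      # each node mixes neighbors' features
```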
  • Speaker: Tianyi Chen (Assistant Professor, Computer Science, Rensselaer Polytechnic Institute)
  • Title: Learning with Multiple Objectives – A Tale of Two Methods
  • Date/Time: 10:00 – 11:00 am (EST), Apr. 17, 2024
  • Zoom Link:  https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: Large AI models such as ChatGPT-4 and Gemini have recently achieved breakthroughs. But at the same time, they have raised societal concerns (e.g., fairness, safety, responsibility). These stories reveal that to unlock the full potential of today’s AI models, a key step is to empower them to proficiently perform multiple tasks, handle multiple data modalities, and satisfy multiple metrics, which we generally term multi-objective learning (MOL). However, further advancement of these AI models is hindered by the lack of interpretability and theoretical foundations of MOL. In this talk, I will first discuss current computational and statistical challenges in MOL, including hypergradient computations, conflicts among objectives, and generalization abilities to new instances, tasks, and metrics. Next, I will introduce our approaches to tackling these challenges through the lens of a unified first-order optimization framework including bi-level and multi-objective optimization methods, combined with statistical learning theory. If time permits, I will highlight how we can tailor these algorithms to solve specific vision, speech and wireless tasks, and outline future directions. (A toy gradient-aggregation sketch follows this entry.)
  • Bio: Tianyi Chen (https://sites.ecse.rpi.edu/~chent18/) is an Assistant Professor in the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute (RPI), where he is jointly supported by the RPI – IBM Artificial Intelligence Research Partnership. Dr. Chen received his B. Eng. degree in Electrical Engineering from Fudan University in 2014, and the Ph.D. degree in Electrical and Computer Engineering from the University of Minnesota in 2019. Dr. Chen’s research focuses on theoretical and algorithmic foundations of optimization, machine learning, and statistical signal processing. Dr. Chen is the inaugural recipient of the IEEE Signal Processing Society Best PhD Dissertation Award in 2020, a recipient of the NSF CAREER Award in 2021, a recipient of an Amazon Research Award in 2022, and a recipient of a Cisco Research Gift in 2023. He is also the co-author of several best paper awards, including the Best Student Paper Award at the NeurIPS Federated Learning Workshop in 2020 and at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) in 2021.
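As one textbook building block behind multi-objective training, here is a minimal sketch of the two-objective min-norm (MGDA-style) gradient aggregation; it is a standard construction chosen for illustration, not the specific bi-level algorithms of the talk.

```python
import numpy as np

def mgda_two_task_direction(g1, g2):
    """Min-norm point in the convex hull of two task gradients.

    Closed form for the two-objective case of MGDA: pick gamma in [0, 1]
    minimizing ||gamma*g1 + (1-gamma)*g2||; the result is a common descent
    direction for both objectives whenever one exists.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    gamma = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return gamma * g1 + (1.0 - gamma) * g2

g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(mgda_two_task_direction(g1, g2))   # [0.5 0.5]: descends on both losses
```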
  • Speaker: Sitan Chen (Assistant Professor, Computer Science, Harvard University)
  • Title: Theoretical Foundations for Diffusion Models
  • Date/Time: 10:00 – 11:00 am (EST), Apr. 24, 2024
  • Zoom Link:  https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: I will describe recent progress on providing rigorous convergence guarantees for score-based generative models (SGMs) such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of large-scale real-world generative models such as DALL⋅E 3 and Sora. In the first part of the talk, I will show that such SGMs can efficiently sample from essentially any realistic data distribution, even ones which are highly non-log-concave. In the second part of the talk, I will show how to extend these guarantees to deterministic samplers (e.g. DDIMs) based on discretizing the so-called probability flow ODE, which ultimately leads to faster convergence. All of these results assume access to an oracle for score estimation; time permitting, at the end I will briefly touch upon how to provably implement this oracle for interesting classes of distributions like Gaussian mixtures. (A toy probability-flow sampler sketch follows this entry.)
  • Bio: Sitan Chen is an Assistant Professor of Computer Science at Harvard University. Previously, he completed an NSF Mathematical Sciences Postdoctoral Research Fellowship at UC Berkeley, hosted by Prasad Raghavendra. He received his PhD in EECS from MIT in 2021 under the supervision of Ankur Moitra. He has been the recipient of a Paul and Daisy Soros Fellowship, an Akamai Presidential Fellowship, and the Captain Jonathan Fay Prize. His research focuses on designing algorithms with provable guarantees for fundamental problems in data science, especially in the context of generative modeling, deep learning, and quantum information.
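To illustrate the deterministic-sampler idea, here is a minimal NumPy sketch of an Euler discretization of the probability flow ODE for a toy one-dimensional target (a two-point mixture at ±2) whose score is known in closed form and stands in for the score oracle; the noise schedule and step count are illustrative.

```python
import numpy as np

def beta(t):                     # illustrative VP schedule: beta_min=0.1, beta_max=20
    return 0.1 + 19.9 * t

def a(t):                        # a(t) = exp(-0.5 * int_0^t beta(s) ds)
    return np.exp(-0.5 * (0.1 * t + 9.95 * t**2))

def score(x, t):                 # d/dx log p_t(x) for the two-point mixture target
    m, v = 2.0 * a(t), 1.0 - a(t) ** 2
    e1, e2 = -(x - m) ** 2 / (2 * v), -(x + m) ** 2 / (2 * v)
    c = np.maximum(e1, e2)       # log-sum-exp stabilization
    w1, w2 = np.exp(e1 - c), np.exp(e2 - c)
    return (w1 * (m - x) - w2 * (m + x)) / (v * (w1 + w2))

# Probability flow ODE for a VP diffusion: dx/dt = -0.5*beta(t)*(x + score(x, t)),
# integrated deterministically from t=1 (pure noise) down to t~0 (data).
x = np.random.randn(10000)       # x(1) ~ N(0, 1)
ts = np.linspace(1.0, 1e-3, 500)
for t0, t1 in zip(ts[:-1], ts[1:]):
    x += -0.5 * beta(t0) * (x + score(x, t0)) * (t1 - t0)
print(np.mean(np.abs(x) > 1.0))  # ~1.0: the deterministic flow lands near +/-2
```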
  • Speaker: Ninghao Liu (Assistant Professor, Computer Science, UGA)
  • Title: Harmonizing Data Augmentation with Graph Self-supervised Learning
  • Date/Time: 10:00 – 11:00 am (EST), TBD, 2024
  • Zoom Link:  https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: Graph self-supervised learning (SSL) is an emerging technique for modeling graph data, due to its effectiveness in learning representations and weak dependence on labels. Typical methods of graph SSL include contrastive learning and masked autoencoders. A key step in graph SSL is data augmentation. In contrastive learning, data samples are perturbed to generate positive and negative pairs; in masked autoencoders, certain features are masked and then recovered by the model. Traditional SSL methods mainly rely on random augmentation, which could accidentally break graph structures. Also, the data augmentation step is largely independent of the model training process, which could lead to suboptimal performance. In this talk, I will introduce our recent work on data augmentation for graph SSL. Specifically, we propose an explanation-guided graph augmentation method for contrastive learning to preserve the structural integrity of graph data. Then, we propose a novel graph masked autoencoder, where the model training is coordinated with graph masking, toward learning generalizable representations. (A toy augmentation sketch follows this entry.)
  • Bio: Dr. Ninghao Liu is an assistant professor in the School of Computing at the University of Georgia. He received his Ph.D. in Computer Science from Texas A&M University in 2021. His research interests are Graph Mining and Trustworthy AI (including Explainable AI, Model Fairness, and Machine Learning Security). He has published refereed papers at recognized venues such as KDD, WWW, ICML, ICLR, NeurIPS, and CIKM. His work won the Outstanding Paper Award at ICML 2022, was shortlisted for the Best Paper Award at WWW 2019, and was a Best Paper Award candidate at ICDM 2019.
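To make the augmentation discussion concrete, here is a minimal sketch of edge dropping for contrastive graph views; passing per-edge importance scores and dropping the least important edges first is a crude stand-in for the explanation-guided idea, not the method of the talk.

```python
import numpy as np

def drop_edges(edges, p, importance=None, rng=None):
    """Edge dropping to create a contrastive view of a graph.

    With importance=None this is the purely *random* augmentation that can
    accidentally break graph structure; passing per-edge importance scores
    (e.g., from an explanation method) and dropping the least important
    edges first is a crude form of structure-preserving augmentation.
    """
    rng = rng or np.random.default_rng()
    edges = list(edges)
    if importance is None:
        keep = rng.random(len(edges)) > p
        return [e for e, k in zip(edges, keep) if k]
    order = np.argsort(importance)              # least important first
    dropped = set(order[: int(p * len(edges))].tolist())
    return [e for i, e in enumerate(edges) if i not in dropped]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
view1 = drop_edges(edges, 0.25)                                   # random view
view2 = drop_edges(edges, 0.25, importance=[0.9, 0.1, 0.8, 0.7])  # guided view
print(view1, view2)
```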
  • Speaker:  Wei Jin (Assistant Professor, Computer Science, Emory University)
  • Title: Deep Learning on Graphs: A Data-Centric Exploration
  • Date/Time: 10:00 – 11:00 am (EST), Apr. 3, 2024
  • Zoom Link:  https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: Many learning tasks in Artificial Intelligence require dealing with graph data, ranging from biology and chemistry to finance and education. Graph neural networks (GNNs), as deep learning models, have shown exceptional capabilities in learning from graph data. Despite their successes, GNNs often grapple with challenges stemming from data size and quality. This talk emphasizes a data-centric approach to enhance GNN performance. First, I will demonstrate methods to significantly reduce graph dataset sizes while retaining essential information for model training. Next, I will introduce a model-agnostic framework that enhances the quality of imperfect input graphs, thereby boosting prediction performance. These data-centric strategies not only enhance data efficiency and quality but also complement existing models. Finally, I will introduce recent advances in graph data valuation and graph generation. Join us to explore innovative approaches for overcoming data-related challenges in graph data mining.
  • Bio: Wei Jin is an Assistant Professor of Computer Science at Emory University. He obtained his Ph.D. from Michigan State University in 2023. His research focuses on graph machine learning and data-centric AI, with notable accomplishments such as AAAI New Faculty Highlights, KAUST Rising Star in AI, Snap Research Fellowship, Most Influential Papers in KDD and WWW by Paper Digest, and top finishes in three NeurIPS competitions. He has organized tutorials and workshops at top conferences, and published in top-tier venues such as ICLR, KDD, ICML, and NeurIPS. He has served as (senior) program committee members at these conferences and received the WSDM Outstanding Program Committee Member award. 
  • Speaker:  Jie Ding (Associate Professor, School of Statistics, University of Minnesota)
  • Title: Advancing Scalable AI: From Core Principles to Modern Applications
  • Date/Time: 10:00 – 11:00 am (EST), Mar. 13, 2024
  • Zoom Link:  https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: In the rapidly evolving realms of data and learning ecosystems, the utility of artificial intelligence (AI) hinges on its ability to scale and adapt. This talk will delve into our recent research aimed at overcoming the challenges through an adaptive continual learning framework. Distinct from traditional learning approaches, our continual learning approach emphasizes growing capability that enables learners to maintain performance on old tasks while integrating new information, rapidly adapt to new environments through the recollection of past knowledge, and actively solicit side information to accelerate learning. I will present the new theoretical underpinnings and practical learning algorithms, structured around three foundational aspects: evolving memory for capability growth, soft supervision for accelerated learning, and cross-domain assisted learning for enhanced accuracy. Applications to bioengineering and large model training will also be discussed.
  • Bio: Jie Ding is an Associate Professor at the School of Statistics, University of Minnesota. He received a Ph.D. in Engineering Sciences from Harvard University in 2017. He received his B.S. degree from Tsinghua University, where he was selected in the Math & Physics Academic Talent Program and also enrolled in the Electrical Engineering program. Jie’s research is at the intersection of artificial intelligence, statistics, and scientific computing, with current focuses on the scalability and safety of large models. He is a recipient of the NSF CAREER Award, ARO Young Investigator Award, Cisco Research Award, and Meta Research Award, and is a visiting scholar at Amazon.
  • Speaker:  Xiang Zhang (Assistant Professor, Department of Computer Science, University of North Carolina at Charlotte)
  • Title: Towards Pervasive Healthcare: Accessible Deep Learning for Medical Time Series Analysis
  • Date/Time: 10:00 – 11:00 am (EST), Mar. 20, 2024
  • Zoom Link:  https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: Globally, 3.6 billion people lack access to essential health services, underscoring the need for pervasive healthcare that can deliver medical services universally, even in resource-limited settings. Central to this vision is the role of advanced deep learning in analyzing medical time series (MedTS) data, despite facing challenges like label scarcity, limited generalizability, and poor interpretability. This talk introduces Accessible Deep Learning for pervasive healthcare, emphasizing models that are robust and practical for real-world applications. Particularly, we address label scarcity through a novel self-supervised method that exploits the hierarchical nature of medical time series, enhancing efficiency and effectiveness. A case study on brain signal analysis for Alzheimer’s Disease-Related Dementia (ADRD) showcases the potential of our approach in making significant advancements in healthcare accessibility and affordability.
  • Bio: Dr. Xiang Zhang is an Assistant Professor in the Department of Computer Science at the University of North Carolina (UNC) at Charlotte. Before joining UNC Charlotte, he was a postdoctoral fellow at Harvard University. Xiang received his Ph.D. degree (in 2020) in Computer Science from the University of New South Wales (UNSW). His research interests lie in data mining and machine learning with applications in pervasive healthcare, medical time series, and Brain-Computer Interfaces (BCIs). Xiang’s work has been published in prestigious conferences (such as ICLR, NeurIPS, and KDD) and journals (like Nature Computational Science), and has received over 2000 citations.
  • Speaker: Ting Zhang (Department of Statistics, University of Georgia)
  • Title: High-Quantile Regression for Tail-Dependent Time Series
  • Date/Time: 10:00 – 11:00 am, Feb. 21, 2024, at Boyd #711
  • Zoom Link: https://zoom.us/j/96659452284?pwd=SGVuQ3ZIeXJyckt6RWx3d3J3Vi9NZz09
  • Abstract: Quantile regression is a popular and powerful method for studying the effect of regressors on quantiles of a response distribution. However, existing results on quantile regression were mainly developed for cases in which the quantile level is fixed, and the data are often assumed to be independent. Motivated by recent applications, we consider the situation where (i) the quantile level is not fixed and can grow with the sample size to capture the tail phenomena, and (ii) the data are no longer independent, but collected as a time series that can exhibit serial dependence in both tail and non-tail regions. To study the asymptotic theory for high-quantile regression estimators in the time series setting, we introduce a tail adversarial stability condition, which had not previously been described, and show that it leads to an interpretable and convenient framework for obtaining limit theorems for time series that exhibit serial dependence in the tail region, but are not necessarily strongly mixing. Numerical experiments are conducted to illustrate the effect of tail dependence on high-quantile regression estimators, for which simply ignoring the tail dependence may yield misleading p-values. (A toy quantile-regression sketch follows this entry.)
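For readers new to quantile regression, here is a minimal NumPy sketch of linear quantile regression via the check (pinball) loss at a high quantile level; the data and optimizer are illustrative, and the talk's contribution concerns the asymptotics in the tail-dependent time series regime rather than the estimator itself.

```python
import numpy as np

def check_loss(u, tau):
    """Pinball / check loss rho_tau(u); its minimizer is the tau-quantile."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def quantile_regression(X, y, tau, lr=0.05, iters=5000):
    """Linear quantile regression via subgradient descent on the check loss."""
    Xb = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        u = y - Xb @ beta
        # subgradient of the check loss is tau (u >= 0) or tau - 1 (u < 0)
        beta += lr * Xb.T @ np.where(u >= 0, tau, tau - 1.0) / len(y)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=500)
y = 1.0 + 2.0 * X + rng.standard_t(df=3, size=500)   # heavy-tailed noise
beta = quantile_regression(X, y, tau=0.95)
print(beta, check_loss(y - beta[0] - beta[1] * X, 0.95).mean())
```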
  • Speaker: Rishi Kamaleswaran (Associate Professor, Emory University)
  • Title: Optimizing adaptive and time-constrained clinical decision making in acute and critical care medicine
  • Date/Time: Nov 17, Friday 10-11AM ET, 2023
  • Zoom Link: https://zoom.us/j/3460258911
  • Abstract: This talk will cover key developments in the area of temporal machine learning and decision making, with a focus on work that has been developed to predict time-sensitive events earlier utilizing multi-modal data streams. We will review key applications of real-time systems in acute and critical care as the domain of interest.
  • Bio: Rishikesan (Rishi) Kamaleswaran is an Associate Professor at Emory University, Department of Biomedical Informatics, with secondary appointments in Pediatrics and Emergency Medicine. He earned his Ph.D. in Computer Science from the University of Ontario Institute of Technology in Canada. He is the Director of Translational Informatics within the School of Medicine and the Georgia CTSA. He is the Co-Director of the NIH P30 Georgia Cystic Fibrosis Research and Translation Core Center, where he oversees a number of clinical informatics research projects. He also oversees the Emory Real-Time Data Science and Decision Support (RADS2) center that advances AI/ML applications at the bedside. His research is funded by a number of federal and regional agencies, including the National Institutes of Health, Veterans Affairs, the Department of Defense, the Michael J. Fox Foundation, and the CF Foundation.
  • Speaker: Amirtahà Taebi (Assistant Professor, Mississippi State University)
  • Title: Contactless Monitoring of Cardiovascular Activity Using a Smartphone
  • Date/Time: Oct. 27, Friday 10-11 AM ET, 2023
  • Zoom Link: https://zoom.us/j/6334507957
  • Abstract: Seismocardiography (SCG) is a non-invasive method to monitor the mechanical activity of the cardiovascular system based on vibrations of the chest surface. This method has shown promise in providing clinically relevant information for cardiovascular diseases such as heart failure and atrial fibrillation. In this presentation, I will present our research progress in developing novel contactless methods to acquire and analyze SCG signals. In the first part of the presentation, I will describe our efforts in developing a vision-based technique to extract chest vibrations associated with SCG from chest videos recorded by a smartphone. SCG signals are conventionally measured using accelerometers attached to the chest. Vision-based techniques can improve the accessibility of this novel cardiovascular monitoring method and make it available to the public given the wide access to smartphones. In the second part of the talk, I will discuss our progress in estimating cardiac time intervals from the SCG signals. I will discuss why it is important to carefully select the SCG measurement location on the chest. Overall, our research goal in this project is to improve patient outcomes and quality of life by developing accurate and robust cardiovascular monitoring systems based on SCG.
  • Bio: Dr. Amirtahà Taebi is an Assistant Professor of Biomedical Engineering at Mississippi State University. He is an Associate Editor of BMC Research Notes, an Editorial Board Member of Scientific Reports, and a Guest Editor for several journals including Bioengineering. Before joining MSU, he was a postdoctoral fellow in Biomedical Engineering at the University of California, Davis from 2018 to 2021. In the industrial setting, he led the Advanced Signal Processing team at Infrasonix, Inc., a biomedical engineering start-up based in GA, from 2020 to 2021. Amirtahà received his Ph.D. in Mechanical Engineering from the University of Central Florida, Orlando, in 2018 after completing his M.Sc. in Biomedical Engineering at Politecnico di Milano, Italy, in 2013. Before joining Polimi, he earned his B.Sc. in Mechanical Engineering from the Sharif University of Technology, Iran, in 2010. His current work focuses on developing non-invasive diagnosis methods and personalized treatments for cardiovascular diseases.

  • Speaker: Mehmet Kurum (Associate Professor, UGA Engineering)
  • Title: Recycling the Radio Spectrum for Science: A New Paradigm for UAS-based Precision Agriculture
  • Date/Time: Sept. 29, Friday 10-11 AM ET, 2023
  • Zoom Link: https://zoom.us/j/6334507957
  • Abstract: Demand for radio spectrum space is growing rapidly, spurred by the explosion of emerging technologies such as the Internet of Things (IoT), Unmanned Aircraft Systems (UASs), and 5G networks. Unfortunately, the growth of active wireless systems often increases radio frequency (RF) interference (RFI) in science observations. As it stands, very little of the RF spectrum is dedicated to science, and the small amount of spectrum available can fall victim to neighboring RFI or re-allocation for commercial use in the wake of the growing demand for bandwidth in commercial applications. In this talk, I will focus on how we can change the paradigm of remote sensing methods and develop next-generation technologies and ideas that are more spectrum-efficient and more effective, and that meet the challenges of present and future spectrum congestion. Namely, I will introduce how we can recycle existing RF communication and navigation signals to enable new remote sensing methodologies in these commercially protected bands for scientific use in a myriad of practical solutions for precision agriculture, forestry, and water conservation. First, I will provide ongoing research examples on the application of our open-sourced EM scattering model and its simulator to (1) L-band Global Navigation Satellite System (GNSS) signals, as they are an excellent example of a widely available system, and (2) military geostationary communication satellite signals at P-band, as they can penetrate deeper into vegetation and ground. Second, I will demonstrate how we repurposed NASA’s ocean surface wind mission (CYGNSS) into a tool for sensing land hydrology globally. Finally, I will highlight our efforts to bring satellite technology into the hands of ordinary people. These will include ongoing experimental studies demonstrating how smartphones can be turned into a “bistatic passive radar” through the reception of ambient reflected GNSS signals by their internal antennas and GNSS chipsets, to perform microwave remote sensing of water in the soil. The developed technology can usher in a host of precision irrigation applications nationwide and worldwide, emphasizing economically distressed areas and developing countries.
  • Bio: Dr. Mehmet Kurum received his B.S. degree in Electrical and Electronics Engineering from Bogazici University, Istanbul, Turkey, in 2003, followed by his M.S. and Ph.D. degrees in Electrical Engineering from George Washington University, Washington, DC, USA, in 2005 and 2009, respectively. He held Postdoctoral and Research Associate positions with the Hydrological Sciences Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA. From 2016 to 2022, Dr. Kurum served as an Assistant Professor at Mississippi State University (MSU), and subsequently, he held the position of Associate Professor and the Paul B. Jacob endowed chair until 2023. Currently, he is an Associate Professor in Electrical and Computer Engineering at the University of Georgia, while also serving as an Adjunct Professor at MSU. Dr. Kurum is a senior member of the IEEE Geoscience and Remote Sensing Society (GRSS) and a member of the U.S. National Committee for the International Union of Radio Science (USNC-URSI). He has been an associate editor for IEEE Transactions on Geoscience and Remote Sensing and IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing since 2021. His current research focuses on recycling the radio spectrum to address the challenges of decreasing radio spectrum space for science while exploring entirely new microwave regions for land remote sensing. Dr. Kurum was a recipient of the Leopold B. Felsen Award for excellence in electromagnetics in 2013, the International Union of Radio Science (URSI) Young Scientist Award in 2014, and the NSF CAREER award in 2022. He served as an Early Career Representative for the International URSI Commission F (Wave Propagation and Remote Sensing) from 2014 to 2021.
  • Speaker: Cassie Cao (Research Scientist at Carnegie Learning)
  • Title: Leveraging Generative AI and Multimodal Analogical Reasoning to Enhance STEM Education
  • Date/Time: July 31, Monday 9:00-10:00am EDT
  • Bio: Dr. Cassie Cao, a Research Scientist at Carnegie Learning, holds a Ph.D. in Educational Technology from the University of Sheffield. Dr. Cao specializes in the intersection of Artificial Intelligence (AI) in Education, Gamification, and Human-Computer Interaction. She is deeply passionate about exploiting the capabilities of AI, with an emphasis on sophisticated large language models, to develop advanced intelligent tutoring systems. Her methodology integrates the generation of engaging multimodal content, the construction of imaginative analogies, and the delivery of adaptive feedback. She is also proficient in the design of multi-role AI chatbots, the use of detailed learning analytics, and the implementation of gamification principles. All these elements converge to facilitate her objective: enhancing the skills of educators and optimizing students’ learning experiences within the STEM discipline. Dr. Cao’s contributions have earned recognition at prominent conferences such as IJCAI, AIED, LAK, IUI, and SIGCSE, among others. Emphasizing collaboration, she actively partners with educators and fellow researchers from diverse institutions to design and implement intelligent tutoring systems that elevate the learning journey in STEM subjects.
  • Speaker: Yiling Chen (Gordon McKay Professor of Computer Science at Harvard University)
  • Title: AI-facilitated Human Decision Making
  • Date/Time: June 9th, 9:00-10:00am EDT
  • Zoom Link: Register at: https://zoom.us/meeting/register/tJwtdOigpz8qGNOSXPreu-lhaIgaAFx5hN5Y#/registration
  • Abstract: This talk will discuss results from a set of human-subject experiments where human decision makers are provided with algorithmic advice. We observe that human decision makers exhibit bias in their interactions with the algorithm, and that the algorithm can alter their decision-making process. We then demonstrate that a responsive advising approach, which learns when to provide advice and only provides advice at times of need, can improve human decision making.
  • Bio: Yiling Chen is a Gordon McKay Professor of Computer Science at Harvard University. She received her Ph.D. in Information Sciences and Technology from the Pennsylvania State University in December 2005. Prior to working at Harvard, she spent two years at Yahoo! Research in New York City. Her current research focuses on topics in the intersection of computer science and social science. She was a recipient of an NSF CAREER Award, The Penn State Alumni Association Early Career Award, and paper awards at conferences including ACM EC, AAMAS, FAT*, and CSCW, and was selected by IEEE Intelligent Systems as one of “AI’s 10 to Watch” in 2011.
  • Speaker: Dajiang Zhu (University of Texas at Arlington)
  • Title: Brain and AI
  • Date/Time: May 12, Friday, 10:00am-11:00am EST
  • Zoom Link: https://zoom.us/j/3684840924 (Also in-person at UGA Boyd GSRC 306)
  • Bio: Dr. Dajiang Zhu is an Assistant Professor in the Department of Computer Science & Engineering at the University of Texas at Arlington (UTA). Dr. Zhu received his Ph.D. in Computer Science from the University of Georgia in 2014. Before he joined UTA, Dr. Zhu was a Post-Doctoral Scholar in the Imaging Genetics Center at the University of Southern California. His research focuses on Neuroimaging Computing and Brain-inspired AI, and he has published 110+ papers at top-tier conferences and journals. He is a recipient of the “Rising STARs award” of the University of Texas, and his research is supported by multiple NIH R01s.
  • Speaker: Tom Griffiths (Henry R. Luce Professor of Information Technology, Consciousness and Culture Departments of Psychology and Computer Science, Princeton University)
  • Title: Understanding human intelligence through human limitations
  • Date/Time: Monday, March 13, 2023, 12 pm – 1:30 pm.
  • Zoom Link: https://zoom.us/j/6334507957 (Also in-person at UGA Driftmier 1240)
  • Abstract: As machines continue to exceed human performance in a range of tasks, it is natural to ask how we might think about human intelligence in a future populated by super intelligent machines. One way to do this is to think about the unique computational problems posed by human lives, and in particular by our finite computational resources and finite lifespan. Thinking in these terms highlights two problems: making efficient use of our cognitive resources, and being able to learn from limited amounts of data. It also sets up a third problem: solving computational problems beyond the scale of any one individual. I will argue that these three problems pick out the key characteristics of human intelligence, and highlight some recent progress in understanding how human minds solve them.
  • Bio: Dr. Griffiths is the Henry R. Luce Professor of Information Technology, Consciousness, and Culture, in the Department of Computer Science and Department of Psychology at Princeton University. His research centers on developing mathematical models of higher level cognition and understanding the formal principles that underlie people’s ability to solve the computational problems they face in everyday life. At the core of his work is mining “big data” for insights that lead to better decisions. A native of the United Kingdom, Dr. Griffiths earned his B.A. from the University of Western Australia and his Ph.D. from Stanford University.
  • Speaker: Diane Cook (Regents Professor and Huie-Rogers Chair Professor, Washington State University)
  • Title: Data Mining a Human Digital Twin
  • Date/Time: Friday, February 10, 2023, 10AM – 11AM ET
  • Zoom Link: https://zoom.us/j/3460258911
  • Abstract: Digital Twins are a disruptive technology that can automate human health assessment and intervention by creating a holistic, virtual replica of a physical human. The increasing availability of sensing platforms and the maturing of data mining methods support building such a replica from longitudinal, passively-sensed data. By creating such a quantified self, we can more precisely understand current and future health status. We can also anticipate the outcomes of behavior-driven interventions. In this talk, I will discuss the challenges that accompany creating human digital twins in the wild, survey emerging data mining methods that tackle these challenges, and describe some of the current and future impacts that technologies have for supporting our aging population.
  • Bio: Diane Cook is a Regents Professor in the School of Electrical Engineering and Computer Science at Washington State University, founding director of the WSU Center for Advanced Studies in Adaptive Systems (CASAS), and co-director of the WSU AI Laboratory. She is a Fellow of the IEEE and the National Academy of Inventors. Diane’s work is featured in BBC, IEEE The Institute, IEEE Spectrum, Smithsonian, The White House Fact Sheet, Scientific American, the Wall Street Journal, AARP Magazine, HGTV, and ABC News. Her research aims to create technologies that automate health monitoring and intervention. Her research group is developing machine learning methods that map a human behaviorome as a foundation for constructing a digital twin. She also conducts multidisciplinary research to leverage digital twin technologies for automatically assessing, extending, and enhancing a person’s functional independence.
  • Speaker: Rosa I. Arriaga, Ph.D. (Associate Professor, Interactive Computing, Georgia Institute of Technology)
  • Title: Designing Theory-driven Technologies and AI for Improved Health and Wellness
  • Date/Time: 1/27/2023 10-11AM
  • Zoom Link: https://zoom.us/j/3460258911
  • Abstract: Computing holds the promise of alleviating the negative impact of both chronic disease and developmental disorders by scaling human effort over time and space. Four in ten adults in the US have two or more chronic illnesses, and one in six children has one or more developmental disabilities. The urgent need to manage chronic illness calls for robust and reliable technology that is readily available to integrate into care-ecologies. In this talk, I will demonstrate how human-centered computing can leverage the generalizability of theoretical frameworks to build systems for asthma, autism, PTSD, and diabetes. I will discuss the unique challenges in their context of care: for patients with asthma and diabetes, this includes poor patient engagement and lack of continuity of care; PTSD therapy is limited by over-reliance on patient self-report and clinician intuition. I will present theory-driven technology interventions that address these issues, describe how AI will transform patient care and discuss how they can lead to improved health and wellness in diverse populations. 
  • Bio: Dr. Arriaga is a Human Computer Interaction (HCI) researcher in the School of Interactive Computing. Her emphasis is on using psychosocial theories and methods to address fundamental topics of HCI and Social Computing. Her current research is in mental health and chronic care management, where she designs technology to increase patient engagement, support continuity of care, enhance clinical decision making, and mediate patient-provider communication. She has received funding to develop computational systems to improve PTSD treatment and recovery, and diabetic foot ulcer care, from NSF Smart & Connected Health and the American Diabetes Association, respectively. She earned a Ph.D. in Psychology from Harvard University. She has been at GT since 2007; from 2019 to 2022 she was the Associate Chair of Graduate Studies in Interactive Computing. She advises undergraduate, Master’s, and Ph.D. students and has graduated six doctoral students with dissertations on the role of technology for Autism, PTSD, and Health Informatics. She teaches courses in HCI in the College of Computing. Her User Experience Design MOOC for the Georgia Tech-Coursera partnership has been taken by over 350k learners (with a 4.5/5 star rating) and was cited as one of the top 250 online courses of all time.
  • Speaker: Jiebo Luo (Professor of Computer Science, University of Rochester)
  • Title: What Social Media and Machine Learning Can Inform Us at Scale and in Real time
  • Date/Time: 1/20/2023 11AM-12PM ET
  • Zoom Link: https://zoom.us/j/6334507957
  • Abstract: The COVID-19 pandemic has severely affected people’s daily lives and caused tremendous economic losses worldwide. AI technologies have been employed to fight the disease and predict the disease spread. However, the pandemic’s impact on human behaviors, e.g., public opinion and mental health, has not received as much attention in informing policy and decision makers. Traditionally, the studies in these fields have primarily relied on interviews or surveys, largely limited to small-scale and not-up-to-date observations. In contrast, the rise of social media provides an opportunity to study many aspects of a pandemic at scale and in real time. Meanwhile, the recent advances in machine learning and data mining allow us to perform automated data processing and analysis. We will introduce several case studies, including 1) nuanced opinions on COVID-19 vaccines, 2) depression trends, 3) attitudes towards work from home, 4) consumer hoarding behaviors, 5) personal face mask usage, and 6) what to blame for inflation.
  • Bio: Jiebo Luo is the Albert Arendt Hopeman Professor of Engineering and Professor of Computer Science at the University of Rochester, which he joined in 2011 after a prolific career of fifteen years at Kodak Research Laboratories. He has authored well over 500 technical papers and holds over 90 U.S. patents. His research interests include computer vision, NLP, machine learning, data mining, social media, computational social science, and digital health. He has been involved in numerous technical conferences, including serving as program co-chair of ACM Multimedia 2010, IEEE CVPR 2012, ACM ICMR 2016, and IEEE ICIP 2017, and general co-chair of ACM Multimedia 2018. He has served on the editorial boards of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE Transactions on Multimedia (TMM), IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), IEEE Transactions on Big Data (TBD), ACM Transactions on Intelligent Systems and Technology (TIST), Pattern Recognition, Knowledge and Information Systems (KAIS), Machine Vision and Applications, and Intelligent Medicine. He served as the Editor-in-Chief of the IEEE Transactions on Multimedia from 2020 to 2022. Professor Luo is a Fellow of ACM, AAAI, IEEE, SPIE, and IAPR, as well as a Member of Academia Europaea and the US National Academy of Inventors.
  • Speaker: Kartik Ahuja (Research Scientist, Facebook AI Research (FAIR), Meta AI)
  • Title: Out-of-Distribution Generalization: Invariance Principle and Beyond
  • Date/Time: 1/20/2023 10-11AM ET
  • Zoom Link: https://zoom.us/j/6334507957
  • Abstract: Our current deep learning systems are far from safe when deployed in the wild. These systems can cheat and exploit shortcuts to perform well. In this talk, we start with why we need to move from statistical models to causal models to address these generalization failures. In recent years, there has been a surge in methods that take inspiration from causality to help machine learning models generalize under different distribution shifts. The invariance principle from causal inference is at the heart of many of these methods. We show that invariance alone cannot capture some key failure modes. However, we show that invariance along with an information bottleneck can overcome several key failure modes of out-of-distribution generalization. In the second part of the talk, we discuss out-of-distribution generalization in the context of time-series data. We describe some new challenging datasets for time series and how existing methods perform on them. We conclude by describing some important future directions. (A toy invariance-penalty sketch follows this entry.)
  • Bio: Kartik Ahuja is a research scientist at FAIR (Meta AI). His research focuses on the theory and methods for out-of-distribution generalization and causal representation learning. Before joining FAIR, he was a postdoctoral fellow at the Montreal Institute for Learning Algorithms (Mila), working with Yoshua Bengio, Irina Rish, and Ioannis Mitliagkas. Kartik received his Ph.D. from UCLA’s Department of Electrical and Computer Engineering and his B.Tech. and M.Tech. dual degree in Electrical Engineering from the Indian Institute of Technology, Kanpur. He also spent a year at the IBM T.J. Watson Research Center, where he worked in the Foundations of Trustworthy AI department as an AI resident. He was the recipient of the IVADO postdoctoral fellowship, the Guru Krupa Fellowship at UCLA, and the Dissertation Year Fellowship at UCLA.
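For concreteness, here is a minimal PyTorch sketch of the IRMv1 invariance penalty (Arjovsky et al.), which penalizes the gradient of each environment's risk with respect to a fixed scalar "dummy" classifier; the model and environments are toy stand-ins, and the information-bottleneck extension discussed in the talk is not shown.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """IRMv1 penalty: squared gradient of the per-environment risk with
    respect to a scalar 'dummy' classifier multiplier fixed at 1.0."""
    w = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * w, y)
    (g,) = torch.autograd.grad(risk, w, create_graph=True)
    return g ** 2

def irm_objective(model, envs, lam=1.0):
    """Average risk across environments plus the invariance penalty."""
    risks, pens = [], []
    for x, y in envs:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        pens.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(pens).mean()

# Toy model and two toy 'environments' (features, binary labels).
model = torch.nn.Linear(4, 1)
envs = [(torch.randn(32, 4), torch.randint(0, 2, (32,)).float()) for _ in range(2)]
irm_objective(model, envs).backward()
```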

  • Speaker: Hao Wang (Assistant Professor of Computer Science, Rutgers University)
  • Title: Bayesian Deep Learning: A Probabilistic Framework to Unify Deep Learning and Graphical Models
  • Date/Time: 11/18/2022 11AM-12PM ET
  • Zoom Link: https://zoom.us/j/6334507957
  • Abstract: While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning, and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. In terms of higher-level inference, however, probabilistic graphical models, with their ability to expressively describe properties of variables and various probabilistic relations among variables, are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, we have been exploring along a research direction, which we call Bayesian deep learning, to tightly integrate deep learning and Bayesian models within a principled probabilistic framework. In this talk, I will present the proposed unified framework and some of our recent work on Bayesian deep learning with various applications including recommendation, social network analysis, interpretable healthcare, domain adaptation, and representation learning.
  • Bio: Hao Wang is currently an Assistant Professor in the Department of Computer Science at Rutgers University. Previously he was a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab (CSAIL) of MIT, working with Dina Katabi and Tommi Jaakkola. He received his PhD degree from the Hong Kong University of Science and Technology, as the sole recipient of the School of Engineering PhD Research Excellence Award in 2017. He has been a visiting researcher in the Machine Learning Department of Carnegie Mellon University. His research focuses on statistical machine learning, deep learning, and data mining, with broad applications on recommender systems, healthcare, user profiling, social network analysis, text mining, etc. In 2015, he was awarded the Microsoft Fellowship in Asia and the Baidu Research Fellowship for his innovation on Bayesian deep learning and its applications on data mining and social network analysis.
  • Speaker: Dongjin Song (Assistant Professor of Computer Science, University of Connecticut)
  • Title: Harnessing Deep Neural Networks for Multivariate Time Series Analysis
  • Date/Time: 10/28/2022 10-11AM ET
  • Zoom Link: https://zoom.us/j/3460258911
  • Abstract: Multivariate time series data are ubiquitous in real-world applications such as healthcare, the Internet of Things (IoT), and the environmental sciences. However, due to the complex temporal dynamics within the data, there are immense barriers to representing and distilling useful information from multivariate time series in order to facilitate the underlying applications. In this talk, I will first introduce several representative works on time series representation. Next, I will discuss our recent work on time series anomaly detection. Finally, I will outline the remaining challenges and ongoing work. (A schematic sketch of reconstruction-based anomaly scoring, a common building block in this area, follows this entry.)
  • Bio: Dongjin Song is an assistant professor in the Department of Computer Science and Engineering at the University of Connecticut (UConn). His research interests include machine learning, deep learning, data mining, and related applications for time series data and graph representation learning. Papers describing his research have been published at top-tier data science and artificial intelligence conferences such as NeurIPS, ICML, ICLR, KDD, ICDM, SDM, AAAI, IJCAI, CVPR, and ICCV. He has served as PC chair for the AI for Time Series (AI4TS) workshop at IJCAI 2022 and the Mining and Learning from Time Series workshop at KDD 2022, and as a senior PC member for AAAI, IJCAI, and CIKM. He won the UConn Research Excellence Program (REP) Award in 2021.
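A minimal sketch of reconstruction-based anomaly scoring for multivariate time series, a common baseline in this area: an autoencoder is trained on windows of normal data, and windows it reconstructs poorly are flagged. The window shape, architecture, and thresholding are our own illustrative assumptions:

```python
import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    def __init__(self, n_vars, window, hidden=32):
        super().__init__()
        d = n_vars * window                        # flattened window dimension
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, d)

    def forward(self, x):                          # x: (batch, n_vars * window)
        return self.decoder(self.encoder(x))

def anomaly_scores(model, windows):
    # Mean squared reconstruction error, one score per window; windows whose
    # score exceeds a threshold calibrated on normal data are flagged.
    with torch.no_grad():
        recon = model(windows)
    return ((windows - recon) ** 2).mean(dim=1)
```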
  • Speaker: Thomas Ploetz (Associate Professor, Georgia Institute of Technology)
  • Title: If only we had more data! New Ways of Deriving Sensor-Based Human Activity Recognition Systems for Challenging Scenarios
  • Date/Time: Friday, October 14, 2022, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/3460258911
  • Abstract: With the proliferation of miniaturized movement sensing capabilities, human activity recognition (HAR) using wearables and other forms of pervasive sensing has seen an enormous boost in both research and practical deployments. Prominent application domains are health assessments, wellbeing monitoring, and novel interaction paradigms. Arguably, and especially when compared to video-based activity recognition, the use of movement sensors such as inertial measurement units has many practical advantages, including mobility, more direct movement capture, and less privacy-invasive sensing. However, those advantages come at a cost when it comes to the design of automated sensor data analysis pipelines, which are dominated by machine learning methods for the analysis of multivariate time series data, i.e., continuous, noisy, multi-modal sensor data streams. Probably the biggest challenge lies in the absence of (very) large, labeled datasets that can readily be used for developing and training complex analysis models. In this talk I will explore the challenges of deriving machine-learning-based systems for sensor-based human activity recognition. Based on work done in my research group, I will focus on: i) cross-modality transfer; ii) the role of representation learning in modern HAR systems, which overcomes the need for explicit feature engineering; iii) self-supervised methods for the most effective use of small, labeled datasets (a schematic sketch of one such pretext task follows this entry); and iv) strategies for bootstrapping HAR systems for new scenarios from scratch. I will conclude with an overview and outlook of next steps in the field that promise to lead to increased robustness and more flexible applicability of contemporary, machine-learning-based HAR systems.
  • Bio: Thomas Ploetz is a Computer Scientist with expertise and almost two decades of experience in Pattern Recognition and Machine Learning research (PhD from Bielefeld University, Germany). His research agenda focuses on applied machine learning, that is, developing systems and innovative sensor data analysis methods for real-world applications. The primary application domain for his work is computational behavior analysis, where he develops methods for automated and objective behavior assessments in naturalistic environments, making opportunistic use of ubiquitous and wearable sensing methods. The main driving functions for his work are "in the wild" deployments and, as such, the development of systems and methods that have a real impact on people's lives. In 2017 Thomas joined the School of Interactive Computing at the Georgia Institute of Technology in Atlanta, USA, where he works as an Associate Professor of Computing. Prior to this he was an academic at the School of Computing Science at Newcastle University in Newcastle upon Tyne, UK, where he was a Reader (Assoc. Prof.) for "Computational Behaviour Analysis" affiliated with Open Lab, Newcastle's interdisciplinary research centre for cross-disciplinary work in digital technologies. Thomas has been very active in the mobile, ubiquitous, and wearable computing community. For example, he is an editor of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), has twice been co-chair of the technical program committee of the International Symposium on Wearable Computers (ISWC), and is general co-chair of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).
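A minimal sketch of a self-supervised pretext task for inertial data in the spirit of point iii) above: the network learns to tell whether a window was transformed (here, sign-flipped), so human labels are only needed later for fine-tuning. The window size, step, and transformation are illustrative assumptions:

```python
import numpy as np

def sliding_windows(signal, size=128, step=64):
    # signal: (T, channels) array of IMU samples -> (n_windows, size, channels).
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[i:i + size] for i in starts])

def pretext_batch(windows, rng):
    # Randomly sign-flip half of the windows; the self-supervised label is
    # simply "was this window flipped?", which requires no annotation effort.
    labels = rng.integers(0, 2, size=len(windows))
    transformed = np.where(labels[:, None, None] == 1, -windows, windows)
    return transformed.astype(np.float32), labels
```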
  • Speaker: Ying Guo (Professor, Department of Biostatistics and Bioinformatics, Emory University)
  • Title: Statistical Learning with Neuroimaging for Reliable and Reproducible Brain Network Analysis
  • Date/Time: Friday, May 13, 2022, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/92028563034?pwd=ZDFyTWE1NVFxRWFJREwwZS8yWlBYQT09
  • Abstract: In recent years, brain network-oriented analyses have become increasingly popular in neuroimaging studies to advance understanding of neural circuits and their association with neurodevelopment, mental illness, and aging. These analyses often encounter challenges such as the low signal-to-noise ratio in neuroimaging data, the high dimensionality of brain networks, and the large number of brain connections leading to spurious findings. In this talk, we present two new statistical methods that tackle these challenges to improve reliability and reproducibility in brain network analysis. The first method is the Structurally Informed Bayesian Gaussian Graphical Model (siGGM). siGGM provides multimodal integrative network modeling that incorporates anatomical structure to improve reliability in investigating brain functional networks. (A schematic sketch of the classical sparse Gaussian graphical model that siGGM builds on follows this entry.) The second method is a novel blind source separation method with low-rank structure and uniform sparsity (LOCUS). LOCUS conducts fully data-driven decomposition of multi-subject connectivity data to extract population-level latent connectivity traits, which could help uncover neural circuits associated with clinical and behavioral outcomes. By using a low-rank factorization structure and a novel sparsity regularization method, LOCUS achieves more efficient and reproducible source separation of brain connectivity data and reduces spurious findings in connectome analysis. We discuss the theoretical properties of the methods and demonstrate their performance through simulation studies and real-world neuroimaging data examples.
  • Bio: Ying Guo is a Professor in the Department of Biostatistics and Bioinformatics at Emory University and an appointed Graduate Faculty of the Emory Neuroscience Program. She is a Founding Member and current Director of the Center for Biomedical Imaging Statistics (CBIS) at Emory University. Dr. Guo's research focuses on developing analytical methods for neuroimaging and mental health studies. Her main research areas include statistical methods for agreement and reproducibility studies, brain network analysis, multimodal neuroimaging, and imaging-based prediction methods. Dr. Guo is a Fellow of the American Statistical Association (ASA) and Chair-Elect (2022) of the ASA Statistics in Imaging Section. She is a Standing Member of the NIH Emerging Imaging Technologies in Neuroscience (EITN) Study Section and has served on the editorial boards of scientific journals in statistics and psychiatry.
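A minimal sketch of the classical sparse Gaussian graphical model for functional connectivity that siGGM extends with anatomical information, using scikit-learn's graphical lasso; the data here are random placeholders standing in for region-by-time fMRI series:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))          # 200 time points, 10 brain regions

model = GraphicalLasso(alpha=0.05).fit(ts)   # sparse inverse covariance estimate
theta = model.precision_

# Convert the precision matrix to partial correlations; exact zeros off the
# diagonal correspond to absent edges in the estimated functional network.
d = np.sqrt(np.diag(theta))
partial_corr = -theta / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```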
  • Speaker: Jianqing Fan (Professor, Department of Operations Research and Financial Engineering, Princeton University)
  • Title: Stability and Approximability of Deep ReLU Networks in Statistical Learning
  • Date/Time: Friday, April 29, 2022, 4:30PM – 5:30PM
  • Location: Botanical Garden, Garden Club Terrace Room
  • Abstract: This talk is on the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite p-th moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness, and the intrinsic dimension in a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be obtained by ordinary least squares but can be achieved by the Huber loss with a properly chosen parameter that adapts to the sample size, smoothness, and moment parameters. (A schematic sketch of such a sample-size-adaptive Huber loss follows this entry.) A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a different strategy from the traditional route: we construct a deep ReLU network estimator that has a better empirical loss than the true function, and the difference between these two functions furnishes a lower bound. This step is related to the Huberization bias, yet more critically to the approximability of deep ReLU networks. As a result, we also contribute some new results on the approximation theory of deep ReLU neural networks. (Joint work with Yihong Gu and Wenxin Zhou)
  • Bio: Dr. Jianqing Fan is a statistician, financial econometrician, and data scientist. He is the Frederick L. Moore '18 Professor of Finance, Professor of Statistics, and Professor of Operations Research and Financial Engineering at Princeton University, where he chaired the department from 2012 to 2015. He is the winner of the 2000 COPSS Presidents' Award, the Morningside Gold Medal for Applied Mathematics (2007), a Guggenheim Fellowship (2009), the Pao-Lu Hsu Prize (2013), the Guy Medal in Silver (2014), and the Noether Senior Scholar Award (2018). He was elected an Academician of Academia Sinica in 2012. Dr. Fan is a joint editor of the Journal of Business & Economic Statistics and an associate editor of Management Science (2018-), among others. He was the co-editor(-in-chief) of the Annals of Statistics (2004-2006) and an editor of Probability Theory and Related Fields (2003-2005), the Econometrics Journal (2007-2012), and the Journal of Econometrics (2012-2018; managing editor 2014-18), and served on the editorial boards of a number of other journals, including the Journal of the American Statistical Association (1996-2017), Econometrica (2010-2013), the Annals of Statistics (1998-2003), Statistica Sinica (1996-2002), and the Journal of Financial Econometrics (2009-2012). He is a past president of the Institute of Mathematical Statistics (2006-2009) and of the International Chinese Statistical Association (2008-2010).
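A minimal sketch of nonparametric regression with a deep ReLU network and a Huber loss whose truncation level grows with the sample size, the key device in the talk; the data, the architecture, and the specific exponent for tau are schematic assumptions rather than the paper's calibrated choices:

```python
import math
import torch
import torch.nn as nn

def huber_loss(residual, tau):
    # Quadratic for small residuals, linear beyond tau: robust to heavy tails.
    absr = residual.abs()
    return torch.where(absr <= tau,
                       0.5 * residual ** 2,
                       tau * (absr - 0.5 * tau)).mean()

n, d = 512, 8
x = torch.randn(n, d)
noise = torch.distributions.StudentT(2.5).sample((n,))  # only low moments exist
y = torch.sin(x[:, 0]) + 0.5 * noise

net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
tau = (n / math.log(n)) ** 0.4      # truncation level adapting to sample size
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    huber_loss(net(x).squeeze(-1) - y, tau).backward()
    opt.step()
```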
  • Speaker: Vikas Singh (Vilas Distinguished Achievement Professor, University of Wisconsin-Madison)
  • Title: Why Fairness Matters in Deep Learning, Computer Vision and Medical Image Analysis
  • Date/Time: Friday, April 8, 2022, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/96172874272?pwd=WnhRZXo0UTUrMlR5VjBsWnFNRCs0Zz09
  • Abstract: Machine learning algorithms underlie a broad range of modern systems that we depend on every day, from trying to connect to a customer service agent, to deciding what to watch on Netflix, to how a healthcare professional makes sense of our test results and history to inform the course of treatment. While algorithmic decision making continues to integrate with and (many would argue) benefit our lives, a number of recent high-profile news stories have shown troubling blind spots, ranging from potentially discriminatory behavior in sentencing/parole recommendations made by automated systems to surveillance systems that show biases against specific races and skin colors. Ongoing research on the design of fair algorithms seeks to address or minimize some of these problems. In this presentation, I will describe our recent efforts focused on enforcing fairness criteria in modern deep learning systems in computer vision and brain imaging. Through some simple examples, we will discuss how the formulations, derived from mature techniques in numerical optimization, provide alternatives to so-called adversarial training, which is computationally intensive, often lacks statistical interpretation, and is difficult to implement in various settings. (A schematic sketch of a simple fairness-regularized training loss follows this entry.) Then, we will cover some interesting but less well-studied use cases in scientific/biomedical research that will be direct beneficiaries of results emerging from fairness research.
  • Bio: Vikas Singh is a Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison. His research group focuses on the design and analysis of algorithms for problems in computer vision, machine learning, and statistical image analysis, covering a range of applications including brain and cancer imaging. This work is generously supported by various federal agencies and industrial collaborators. He is a recipient of the NSF CAREER award. His teaching and collaborative activities include teaching classes in computer vision, image analysis, and artificial intelligence, as well as collaborating with a number of industrial partners to enable real-world deployments of AI/machine learning technologies.
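A minimal sketch of a fairness-regularized training loss: a differentiable penalty on the demographic-parity gap is added to the usual objective, one simple instance of the constraint-style alternatives to adversarial training mentioned above. The particular criterion and the weighting `lam` are our own illustrative choices:

```python
import torch
import torch.nn.functional as F

def parity_gap(logits, group):
    # Difference between the mean predicted score of two protected groups.
    p = torch.sigmoid(logits)
    return p[group == 0].mean() - p[group == 1].mean()

def fair_loss(logits, targets, group, lam=1.0):
    # Standard classification loss plus a squared fairness penalty; lam trades
    # predictive accuracy against the parity constraint.
    return (F.binary_cross_entropy_with_logits(logits, targets)
            + lam * parity_gap(logits, group) ** 2)
```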
  • Speaker: Ji Zhu (Susan A. Murphy Collegiate Professor of Statistics, University of Michigan, Ann Arbor)
  • Title: Fast Network Community Detection with Profile-Pseudo Likelihood Methods
  • Date/Time: Thursday, March 17, 2022, 3:50PM – 4:50PM
  • Zoom Link: https://zoom.us/j/94844263091
  • Abstract: The stochastic block model is one of the most studied network models for community detection. It is known that most algorithms proposed for fitting the stochastic block model likelihood cannot scale to large networks. One prominent work that overcomes this computational challenge is Amini et al. (2013), which proposed a fast pseudo-likelihood approach for fitting stochastic block models to large sparse networks. However, this approach does not have a convergence guarantee. In this talk, we present a novel likelihood-based approach that decouples row and column labels in the likelihood function, enabling fast alternating maximization; the new method is computationally efficient and has a provable convergence guarantee. We also show that the proposed method provides strongly consistent estimates of the communities in a stochastic block model. As demonstrated in simulation studies, the proposed method outperforms the pseudo-likelihood approach in terms of both estimation accuracy and computational efficiency, especially for large sparse networks. We further consider extensions of the proposed method to handle networks with degree heterogeneity and bipartite properties. (A schematic toy sketch of alternating label updates for a stochastic block model follows this entry.) This is joint work with Jiangzhou Wang, Jingfei Zhang, Binghui Liu, and Jianhua Guo.
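A toy sketch of the kind of alternating label update that block-model likelihood methods rely on: estimate Bernoulli block probabilities given the current labels, then reassign each node to its best community. This simplified version is an assumption-laden illustration; it omits the row/column decoupling and the convergence and consistency machinery that distinguish the profile-pseudo-likelihood method:

```python
import numpy as np

def fit_sbm(A, K, n_iter=20, seed=0):
    """A: symmetric 0/1 adjacency matrix; returns labels in {0, ..., K-1}."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    z = rng.integers(0, K, size=n)                  # random initial labels
    for _ in range(n_iter):
        # counts[i, k] = number of edges from node i into community k.
        counts = np.stack([A[:, z == k].sum(axis=1) for k in range(K)], axis=1)
        sizes = np.array([(z == k).sum() for k in range(K)])
        # Estimated block connection probabilities (self-pairs ignored for brevity).
        B = np.stack([counts[z == k].sum(axis=0) for k in range(K)])
        B = np.clip(B / np.maximum(np.outer(sizes, sizes), 1), 1e-6, 1 - 1e-6)
        # Bernoulli log-likelihood of each node under each candidate label.
        ll = counts @ np.log(B).T + (sizes - counts) @ np.log(1 - B).T
        z = ll.argmax(axis=1)
    return z
```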
  • Speaker: Gari Clifford (Professor of Biomedical Informatics and Biomedical Engineering, Emory University and Georgia Institute of Technology)
  • Title: Boosting the performance of a deep learner using realistic models – an application to cardiology and post-traumatic stress disorder
  • Date/Time: Friday, December 3, 2021, 12:00PM – 1:00PM
  • Zoom Link: https://zoom.us/j/98339758184?pwd=QlZFQUpzZFhHclBtaGJyS1N0cE5QZz09
  • Abstract: In this talk I will discuss the concept of leveraging large databases to improve training on smaller databases, with a particular focus on using realistic models rather than synthetic data. Notably, as databases increase in size, the quality of data labels drops. Often, the data become noisier, with rising levels of non-random missingness. Increasingly, transfer learning is being leveraged to mitigate these problems, allowing algorithms to be tuned on smaller (or rarer) populations while leveraging information from much larger datasets. I'll present an emerging paradigm in which we insert an extensive model-generated database into the transfer learning process to help a deep learner explore a much larger and denser data distribution. Since a model allows the generation of realistic data beyond the boundaries of the real data, the model can help train the deep learner to extrapolate beyond the observable collection of samples. Using cardiac time series data, I'll demonstrate that this technique provides a significant performance boost, and I'll discuss some possible extensions and consequences. (A schematic pretrain-then-fine-tune sketch follows this entry.)
  • Bio: Gari Clifford is a tenured Professor of Biomedical Informatics and Biomedical Engineering at Emory University and the Georgia Institute of Technology, and the Chair of the Department of Biomedical Informatics at Emory. His research team applies signal processing and machine learning to medicine to classify, track, and predict health and illness. His focus research areas include critical care, digital psychiatry, global health, mHealth, neuroinformatics, and perinatal health, particularly in low- and middle-income country (LMIC) settings. After training in theoretical physics, he transitioned to machine learning and engineering for his doctoral work at the University of Oxford in the 1990s. He subsequently joined MIT as a postdoctoral fellow and later a Principal Research Scientist, where he managed the creation of the MIMIC II database, the largest open-access critical care database in the world. He later returned to Oxford as an Associate Professor of Biomedical Engineering, where he helped found its Sleep & Circadian Neuroscience Institute and served as Director of the Centre for Doctoral Training in Healthcare Innovation at the Oxford Institute of Biomedical Engineering. Gari is a strong supporter of commercial translation, working closely with industry as an advisor to multiple companies, co-founding and serving as CTO of an MIT spin-out (MindChild Medical) since 2009, and co-founding and serving as CSO of Lifebell AI since 2020. Gari is a champion of open-access data and open-source software in medicine, particularly through his leadership of the PhysioNet/CinC Challenges and contributions to the PhysioNet Resource. He is committed to developing sustainable solutions to healthcare problems in resource-poor locations, with much of his work focused in Guatemala.
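A minimal sketch of the pretrain-on-model-generated-data, fine-tune-on-real-data pattern the talk describes; `simulate_windows` is a hypothetical stand-in for a realistic physiological simulator, and the architecture and hyperparameters are likewise our own assumptions:

```python
import torch
import torch.nn as nn

def make_net(n_in=256, n_classes=2):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_classes))

def train(net, x, y, epochs, lr):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(net(x), y).backward()
        opt.step()

net = make_net()
# 1) Pretrain on a large database generated by a realistic model
#    (simulate_windows is hypothetical, not an existing library call):
# x_sim, y_sim = simulate_windows(n=100_000)
# train(net, x_sim, y_sim, epochs=50, lr=1e-3)
# 2) Fine-tune on the small real dataset with a gentler learning rate:
# train(net, x_real, y_real, epochs=10, lr=1e-4)
```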
  • Speaker: Yiran Chen (Professor of Electrical and Computer Engineering, Duke University)
  • Title: Scalable, Heterogeneity-Aware and Privacy-Enhancing Federated Learning
  • Date/Time: Friday, November 12, 2021, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/98339758184?pwd=QlZFQUpzZFhHclBtaGJyS1N0cE5QZz09
  • Abstract: Federated learning has become a popular distributed machine learning paradigm for developing on-device AI applications. However, the data residing across devices are intrinsically statistically heterogeneous (i.e., they follow non-IID distributions), and mobile devices usually have limited communication bandwidth to transfer local updates. Such statistical heterogeneity and communication limitations are two major bottlenecks that hinder the application of federated learning. In addition, recent works have demonstrated that sharing model updates makes federated learning vulnerable to inference attacks. In this talk, we will present our recent work on federated learning frameworks that address the scalability and heterogeneity issues simultaneously. We will also reveal the root cause of privacy leakage in federated learning and provide a privacy-enhancing defense mechanism accordingly. (A schematic sketch of the baseline FedAvg aggregation step follows this entry.)
  • Bio: Yiran Chen received his B.S. (1998) and M.S. (2001) from Tsinghua University and his Ph.D. (2005) from Purdue University. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor and was promoted to Associate Professor with tenure in 2014, holding the Bicentennial Alumni Faculty Fellowship. He is now a Professor in the Department of Electrical and Computer Engineering at Duke University, serving as director of the NSF AI Institute for Edge Computing Leveraging Next-generation Networks (Athena) and the NSF Industry-University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC), and as co-director of the Duke Center for Computational Evolutionary Intelligence (CEI). His group focuses on research in new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. Dr. Chen has published one book and about 500 technical publications and has been granted 96 US patents. He has served as an associate editor of a dozen international academic transactions/journals and on the technical and organization committees of more than 60 international conferences. He is now serving as the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He has received seven best paper awards, one best poster award, and fifteen best paper nominations from international conferences and workshops, along with many professional awards, and was a Distinguished Lecturer of IEEE CEDA (2018-2021). He is a Fellow of the ACM and IEEE and serves as chair of ACM SIGDA.
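A minimal sketch of the FedAvg aggregation step that scalable and heterogeneity-aware federated schemes build on: each client takes a few local SGD steps on its own data, and the server averages the resulting weights, weighted by client data size. The model and client datasets are placeholders:

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, x, y, lr=0.01, steps=5):
    # Each client trains a copy of the global model on its own (non-IID) data.
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(local(x), y).backward()
        opt.step()
    return local.state_dict(), len(x)

def fed_avg_round(global_model, clients):
    # clients: list of (features, labels) pairs, one per participating device.
    updates = [local_update(global_model, x, y) for x, y in clients]
    total = sum(n for _, n in updates)
    averaged = {k: sum(sd[k] * (n / total) for sd, n in updates)
                for k in updates[0][0]}
    global_model.load_state_dict(averaged)
```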
  • Speaker: Jun Liu (Professor of Statistics, Harvard University)
  • Title: Data Splitting for Graphical Model Selection With FDR Control
  • Date/Time: Thursday, October 21, 2021, 3:50PM – 4:50PM
  • Zoom Link: https://zoom.us/j/99986325350?pwd=QUVqdldrMm1OMVNaNzJEai9jZkVTUT09
  • Abstract: Simultaneously finding multiple influential variables and controlling the false discovery rate (FDR) for statistical and machine learning models is a problem of renewed interest recently. A classical statistical idea is to introduce perturbations and examine their impacts on a statistical procedure. We here explore the use of data splitting (DS) for controlling FDR in learning linear, generalized linear, and graphical models. Our proposed DS procedure simply splits the data into two halves at random and computes a statistic reflecting the consistency of the two sets of parameter estimates (e.g., regression coefficients). FDR control can be achieved by taking advantage of such a statistic, which possesses the property that, for any null feature, its sampling distribution is symmetric about 0. (A schematic sketch of this procedure follows this entry.) Furthermore, by repeated sample splitting, we propose Multiple Data Splitting (MDS) to stabilize the selection result and boost the power. Interestingly, MDS not only helps overcome the power loss caused by DS while keeping the FDR under control, but also results in a lower variance for the estimated FDR compared with all other considered methods. DS and MDS are straightforward conceptually, easy to implement algorithmically, and efficient computationally. Simulation results as well as a real data application show that both DS and MDS control the FDR well, and MDS is often the most powerful method among all in consideration, especially when the signals are weak and correlations or partial correlations are high among the features. Our preliminary tests on nonlinear models such as generalized linear models and neural networks also show promise. The presentation is based on joint work with Chenguang Dai, Buyu Lin, and Xin Xing.
  • Bio: Dr. Jun Liu is a Professor of Statistics at Harvard University, with a joint appointment in the Harvard School of Public Health. Dr. Liu received his BS degree in mathematics in 1985 from Peking University and his Ph.D. in statistics in 1991 from the University of Chicago. He held Assistant, Associate, and Full Professor positions at Stanford University from 1994 to 2003. Dr. Liu won the NSF CAREER Award and the Stanford Terman Fellowship in 1995 and the Mitchell Award for the best statistics application paper in 2000. In 2002, he received the prestigious COPSS Presidents' Award. He was a Medallion Lecturer of the Institute of Mathematical Statistics (IMS), a Bernoulli Lecturer in 2004, and a Kuwait Lecturer at Cambridge University in 2008. He was elected a Fellow of the IMS in 2004 and a Fellow of the American Statistical Association in 2005. He has served on numerous grant review panels of the NSF and NIH and on the editorial boards of numerous leading statistical journals, and was a co-editor of the Journal of the American Statistical Association. Dr. Liu and his collaborators introduced the statistical missing data formulation and Gibbs sampling strategies for biological sequence analysis in the early 1990s. The resulting algorithms for protein sequence analysis, gene regulation analysis, and genetic studies have been adopted by many research groups and have become standard tools for computational biologists. Dr. Liu has made fundamental contributions to statistical computing and Bayesian modeling. He pioneered sequential Monte Carlo (SMC) methods and invented several novel Markov chain Monte Carlo (MCMC) techniques. His studies of SMC and MCMC algorithms have had a broad impact on both theoretical understanding and practical applications. Dr. Liu has also pioneered novel Bayesian modeling techniques for discovering subtle interactions and nonlinear relationships in high-dimensional data. He has published one research monograph and more than 200 research articles in leading scientific journals and is one of the ISI Highly Cited mathematicians.
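A minimal sketch of the DS procedure for a linear model: fit on two random halves, form a mirror statistic that is symmetric about zero for null features, and choose the smallest threshold whose estimated FDR is below the target q. The lasso fit and this particular form of the mirror statistic are illustrative choices, not necessarily those in the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

def ds_select(X, y, q=0.1, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    half1, half2 = idx[: len(y) // 2], idx[len(y) // 2:]
    b1 = Lasso(alpha=alpha).fit(X[half1], y[half1]).coef_
    b2 = Lasso(alpha=alpha).fit(X[half2], y[half2]).coef_
    # Mirror statistic: large and positive when the two halves agree,
    # symmetric about 0 for null features.
    m = np.sign(b1 * b2) * (np.abs(b1) + np.abs(b2))
    for t in np.sort(np.abs(m[m != 0])):          # candidate thresholds
        fdr_hat = (m <= -t).sum() / max((m >= t).sum(), 1)
        if fdr_hat <= q:
            return np.flatnonzero(m >= t)         # selected features
    return np.array([], dtype=int)
```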
  • Speaker: Xia Hu (Associate Professor of Computer Science, Rice University)
  • Title: Towards Effective Interpretation of Deep Neural Networks: Algorithms and Applications
  • Date/Time: Friday, October 15, 2021, 9:30AM – 10:30AM
  • Zoom Link: https://zoom.us/j/97005929961?pwd=bXN6MVo0bmlhN3BMRDE4SFFqYitjUT09
  • Abstract: While deep neural networks (DNNs) have achieved superior performance in many downstream applications, they are often regarded as black boxes and criticized for their lack of interpretability, since these models cannot provide meaningful explanations of how a certain prediction is made. Without explanations to enhance the transparency of DNN models, it is difficult to build trust among end users. In this talk, I will present a systematic framework, from modeling and application perspectives, for generating DNN interpretability, aimed at the two main technical challenges in interpretable machine learning: faithfulness and understandability. Specifically, to tackle the faithfulness challenge of post-hoc interpretation, I will introduce how to make use of feature inversion and additive decomposition techniques to explain predictions made by two classical DNN architectures, i.e., convolutional neural networks and recurrent neural networks. In addition, to develop DNNs that generate interpretations more understandable to human beings, I will present a novel training method that regularizes the interpretations of a DNN with domain knowledge. (A schematic sketch of simple gradient-based attribution, the most basic tool in this family, follows this entry.)
  • Bio: Dr. Xia "Ben" Hu is an Associate Professor at Rice University in the Department of Computer Science. Dr. Hu has published over 100 papers in several major academic venues, including NeurIPS, ICLR, KDD, WWW, IJCAI, and AAAI. An open-source package developed by his group, AutoKeras, has become the most used automated deep learning system on GitHub (with over 8,000 stars and 1,000 forks). His work on deep collaborative filtering, anomaly detection, and knowledge graphs has been included in the TensorFlow package, Apple's production system, and Bing's production system, respectively. His papers have received several Best Paper (Candidate) awards from venues such as WWW, WSDM, and ICDM. He is a recipient of the NSF CAREER Award. His work has been cited more than 10,000 times, with an h-index of 44. He was the General Co-Chair for WSDM 2020.
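A minimal sketch of plain gradient saliency, the simplest member of the post-hoc attribution family the talk surveys; the model and input are placeholders, and the faithfulness caveats of such maps are exactly what the talk addresses:

```python
import torch

def saliency(model, x, target_class):
    # Gradient of the class score w.r.t. the input; large magnitudes mark
    # the input dimensions the prediction is most sensitive to.
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs()
```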
  • Speaker: Christos Davatzikos (Wallace T. Miller Sr. Professor of Radiology, University of Pennsylvania)
  • Title: Machine Learning in Neuroimaging: applications to brain aging, Alzheimer’s Disease, and Schizophrenia
  • Date/Time: Friday, September 24, 2021, 10:00AM – 11:00AM
  • Zoom Link: https://zoom.us/j/94740164574?pwd=TDQzcW5VUndieWtwY2MyT1FrcVpHdz09
  • Abstract: Machine learning has deeply penetrated the neuroimaging field over the past 15 years by providing a means to construct imaging signatures of normal and pathologic brain states on an individual-person basis. In this talk, I will discuss examples from our laboratory's work on imaging signatures of brain aging and early stages of neurodegenerative diseases, brain development, and neuropsychiatric disorders. I will discuss some challenges, such as disease heterogeneity and the integration of data from multiple sites in order to achieve the sample sizes required by deep learning studies. I will also discuss the integration of these methods and results into a dimensional neuroimaging system and its contribution to integrated, precision diagnostics.
  • Bio: Christos Davatzikos is the Wallace T. Miller Sr. Professor of Radiology at the University of Pennsylvania and Director of the Center for Biomedical Image Computing and Analytics. He holds a secondary appointment in Electrical and Systems Engineering at Penn, as well as in the Bioengineering and Applied Mathematics graduate groups. He obtained his undergraduate degree from the National Technical University of Athens, Greece, in 1989, and his Ph.D. from Johns Hopkins University in 1994, on a Fulbright scholarship. He then joined the Johns Hopkins faculty in Radiology and later in Computer Science, where he founded and directed the Neuroimaging Laboratory. In 2002 he moved to Penn, where he founded and directed the Section of Biomedical Image Analysis. Dr. Davatzikos' interests are in medical image analysis. He oversees a diverse research program ranging from basic problems of imaging pattern analysis and machine learning to a variety of clinical studies of aging and Alzheimer's Disease, schizophrenia, brain cancer, and brain development. Dr. Davatzikos has served on a variety of scientific journal editorial boards and grant review committees. He is an IEEE Fellow, a Fellow of the American Institute for Medical and Biological Engineering, and a member of the Council of Distinguished Investigators of the US Academy of Radiology and Biomedical Imaging Research.
Organizers
  • Tianming Liu, Distinguished Research Professor, Department of Computer Science, UGA
  • Ping Ma, Distinguished Research Professor, Department of Statistics, UGA
  • WenZhan Song, Georgia Power Mickey A. Brown Professor, College of Engineering, UGA
  • Changying “Charlie” Li, Professor, College of Engineering, UGA
  • Haijian Sun, Assistant Professor, College of Engineering, UGA