The University of Georgia (UGA) Data Science and AI Seminars are monthly online seminars covering interdisciplinary research topics in data science (DS), artificial intelligence (AI), statistics, engineering, biomedical informatics, and public health. We aim to bring together researchers from these fields to discuss exciting DS/AI topics with interdisciplinary applications. If you are interested in speaking in our forum, please contact Prof. Song (wsong@uga.edu) and then sign up using the Speakers signup form.

Upcoming Talks
  • Speaker: Jiebo Luo (Professor of Computer Science, University of Rochester)
  • Title: Vision and Language: Past, Present, and Future
  • Date/Time: August 2022
  • Zoom Link: TBA
  • Abstract: Computer vision and natural language processing are two key branches of artificial intelligence. Since the goal of computer vision has always been automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images, it is natural for vision and language to come together to enable high-level computer vision tasks. Conversely, information extracted from images and videos can facilitate natural language processing tasks. Recent advances in machine learning and deep learning are facilitating reasoning about images and text in a joint fashion. In this talk, we will review a recently active area of research at the intersection of vision and language, including video-language alignment, image and video captioning, visual question answering, image retrieval using complex text queries, image generation from textual descriptions, language grounding in images and videos, as well as multimodal machine translation and vision-aided grammar induction.
  • Bio: Jiebo Luo is a Professor of Computer Science at the University of Rochester, which he joined in 2011 after a prolific career of fifteen years at Kodak Research Laboratories. He has authored over 500 technical papers and holds over 90 U.S. patents. His research interests include computer vision, NLP, machine learning, data mining, computational social science, and digital health. He has been involved in numerous technical conferences, including serving as program co-chair of ACM Multimedia 2010, IEEE CVPR 2012, ACM ICMR 2016, and IEEE ICIP 2017, as well as general co-chair of ACM Multimedia 2018. He has served on the editorial boards of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE Transactions on Multimedia (TMM), IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), IEEE Transactions on Big Data (TBD), ACM Transactions on Intelligent Systems and Technology (TIST), Pattern Recognition, Knowledge and Information Systems (KAIS), and Intelligent Medicine. He is the current Editor-in-Chief of the IEEE Transactions on Multimedia. Professor Luo is a Fellow of ACM, AAAI, IEEE, SPIE, and IAPR.
  • Speaker: Carl-Fredrik Westin (Professor of Radiology, Harvard Medical School)
  • Title: TBA
  • Date/Time: August 2022
  • Zoom Link: TBA
  • Abstract: TBA.
  • Bio: TBA.
  • Speaker: Thomas Ploetz (Associate Professor, Georgia Institute of Technology)
  • Title: If only we had more data! New Ways to Deriving Sensor-Based Human Activity Recognition Systems for Challenging Scenarios
  • Date/Time: Friday, October 14, 2022, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/3460258911
  • Abstract: With the proliferation of miniaturized movement-sensing capabilities, human activity recognition (HAR) using wearables and other forms of pervasive sensing has seen an enormous boost in both research and practical deployments. Prominent application domains are health assessments, wellbeing monitoring, and novel interaction paradigms. Arguably, and especially when compared to video-based activity recognition, the use of movement sensors such as inertial measurement units has many practical advantages, including mobility, more direct movement capturing, and less privacy-invasive sensing. However, those advantages come at a cost when it comes to the design of automated sensor data analysis pipelines, which are dominated by machine learning methods for the analysis of multivariate time series data, i.e., continuous, noisy, multi-modal sensor data streams. Probably the biggest challenge lies in the absence of (very) large, labeled datasets that can readily be used for developing and training complex analysis models. In this talk I will explore the challenges of deriving machine learning based systems for sensor-based human activity recognition. Based on work done in my research group, I will focus on: i) cross-modality transfer; ii) the role of representation learning in modern HAR systems that overcomes the need for explicit feature engineering; iii) self-supervised methods for the most effective use of small, labeled datasets; and iv) strategies for bootstrapping HAR systems for new scenarios from scratch. I will conclude with an overview and outlook of next steps in the field that promise to lead to increased robustness and more flexible applicability of contemporary, machine learning based HAR systems. (A toy sliding-window HAR pipeline is sketched after this entry.)
  • Bio: Thomas Ploetz is a Computer Scientist with expertise and almost two decades of experience in Pattern Recognition and Machine Learning research (PhD from Bielefeld University, Germany). His research agenda focuses on applied machine learning, that is, developing systems and innovative sensor data analysis methods for real-world applications. The primary application domain for his work is computational behavior analysis, where he develops methods for automated and objective behavior assessments in naturalistic environments, making opportunistic use of ubiquitous and wearable sensing methods. The main driving forces behind his work are “in the wild” deployments and, as such, the development of systems and methods that have a real impact on people’s lives. In 2017 Thomas joined the School of Interactive Computing at the Georgia Institute of Technology in Atlanta, USA, where he works as an Associate Professor of Computing. Prior to this he was an academic in the School of Computing Science at Newcastle University in Newcastle upon Tyne, UK, where he was a Reader (Assoc. Prof.) in “Computational Behaviour Analysis” affiliated with Open Lab, Newcastle’s interdisciplinary centre for cross-disciplinary research in digital technologies. Thomas has been very active in the mobile, ubiquitous, and wearable computing communities. For example, he is an editor of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), has twice been co-chair of the technical program committee of the International Symposium on Wearable Computing (ISWC), and is general co-chair of the 2022 International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).
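  • Illustrative sketch: To make the pipeline described in the abstract concrete, below is a minimal example of a classical sensor-based HAR workflow: sliding-window segmentation of a multivariate time series, simple per-window statistics, and an off-the-shelf classifier. The synthetic “accelerometer” stream, window length, and features are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of a classical sensor-based HAR pipeline: sliding-window
# segmentation of a multivariate time series, simple per-window statistics,
# and a standard classifier. Data and features are toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def make_stream(n_samples=6000, channels=3):
    """Synthetic 3-axis 'accelerometer' stream alternating between 2 activities."""
    t = np.arange(n_samples)
    activity = (t // 1000) % 2                      # switch activity every 1000 samples
    signal = np.where(activity[:, None] == 1,
                      np.sin(t[:, None] / 5.0),     # 'walking': periodic
                      0.1)                          # 'standing': near-constant
    return signal + 0.2 * rng.standard_normal((n_samples, channels)), activity

X_stream, y_stream = make_stream()

# Sliding windows (length 100, 50% overlap) with mean/std features per channel,
# labeled by the majority activity within the window.
win, step = 100, 50
feats, labels = [], []
for start in range(0, len(X_stream) - win, step):
    w = X_stream[start : start + win]
    feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    labels.append(int(y_stream[start : start + win].mean() > 0.5))

X_tr, X_te, y_tr, y_te = train_test_split(np.array(feats), np.array(labels), random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("window-level accuracy:", clf.score(X_te, y_te))
```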
  • Speaker: Lawrence Staib (Professor of Radiology & Biomedical Imaging, Biomedical Engineering, and Electrical Engineering, Yale University)
  • Title: TBA
  • Date/Time: Fall 2022
  • Zoom Link: TBA
  • Abstract: TBA.
  • Bio: TBA.
Past Talks
  • Speaker: Ying Guo (Professor, Department of Biostatistics and Bioinformatics, Emory University)
  • Title: Statistical Learning with Neuroimaging for Reliable and Reproducible Brain Network Analysis
  • Date/Time: Friday, May 13, 2022, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/92028563034?pwd=ZDFyTWE1NVFxRWFJREwwZS8yWlBYQT09
  • Abstract: In recent years, brain network-oriented analyses have become increasingly popular in neuroimaging studies to advance understanding of neural circuits and their association with neurodevelopment, mental illnesses and aging. These analyses often encounter challenges such as low signal-to-noise ratio in neuroimaging data, the high dimensionality of brain networks, and the large number of brain connections leading to spurious findings. In this talk, we present two new statistical methods that tackle the aforementioned challenges to improve the reliability and reproducibility of brain network analysis. The first method is the Structurally Informed Bayesian Gaussian Graphical Model (siGGM). siGGM provides multimodality integrative network modeling that incorporates anatomical structure to improve reliability in investigating brain functional networks. The second method is a novel blind source separation method with low-rank structure and uniform sparsity (LOCUS). LOCUS conducts fully data-driven decomposition of multi-subject connectivity data to extract population-level latent connectivity traits which could help uncover neural circuits associated with clinical and behavioral outcomes. By using a low-rank factorization structure and a novel sparsity regularization method, LOCUS is able to achieve more efficient and reproducible source separation of brain connectivity data and reduce spurious findings in connectome analysis. We discuss the theoretical properties of the methods and demonstrate their performance through simulation studies and real-world neuroimaging data examples. (A generic low-rank decomposition baseline is sketched after this entry.)
  • Bio: Ying Guo is a Professor in the Department of Biostatistics and Bioinformatics at Emory University and an appointed Graduate Faculty member of the Emory Neuroscience Program. She is a Founding Member and the current Director of the Center for Biomedical Imaging Statistics (CBIS) at Emory University. Dr. Guo’s research focuses on developing analytical methods for neuroimaging and mental health studies. Her main research areas include statistical methods for agreement and reproducibility studies, brain network analysis, multimodal neuroimaging, and imaging-based prediction methods. Dr. Guo is a Fellow of the American Statistical Association (ASA) and the 2022 Chair-Elect of the ASA Statistics in Imaging Section. She is a Standing Member of the NIH Emerging Imaging Technologies in Neuroscience (EITN) Study Section and has served on the editorial boards of scientific journals in statistics and psychiatry.
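  • Illustrative sketch: As a generic point of reference for the matrix-decomposition ideas above, the example below decomposes simulated multi-subject connectivity data (subjects × vectorized edges) with a plain truncated SVD into shared edge patterns and per-subject loadings. It is a toy baseline on simulated data, not an implementation of LOCUS or siGGM.

```python
# Minimal sketch: decompose multi-subject connectivity data into shared
# low-rank "connectivity traits" plus subject loadings via truncated SVD.
# Generic baseline only; not the LOCUS or siGGM methods from the talk.
import numpy as np

rng = np.random.default_rng(4)
n_subj, n_nodes, rank = 40, 20, 3
iu = np.triu_indices(n_nodes, k=1)             # upper-triangular edge indices

# Simulate subject connectivity matrices as mixtures of 3 latent edge patterns.
sources = rng.standard_normal((rank, iu[0].size))
mixing = rng.standard_normal((n_subj, rank))
Y = mixing @ sources + 0.5 * rng.standard_normal((n_subj, iu[0].size))

# Low-rank decomposition of the subjects-by-edges matrix.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
loadings = U[:, :rank] * s[:rank]              # per-subject trait expression
traits = Vt[:rank]                             # population-level edge patterns

# Reshape one recovered trait back into a symmetric node-by-node matrix.
T = np.zeros((n_nodes, n_nodes))
T[iu] = traits[0]
T = T + T.T
print("relative reconstruction error:",
      np.linalg.norm(Y - loadings @ traits) / np.linalg.norm(Y))
```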
  • Speaker: Jianqing Fan (Professor, Department of Operations Research and Financial Engineering, Princeton University)
  • Title: Stability and Approximability of Deep ReLU Networks in Statistical Learning
  • Date/Time: Friday, April 29, 2022, 4:30PM – 5:30PM
  • Location: Botanical Garden, Garden Club Terrace Room
  • Abstract: This talk is on the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite p-th moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness, and the intrinsic dimension in a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be obtained by ordinary least squares but can be achieved by the Huber loss with a properly chosen parameter that adapts to the sample size, smoothness, and moment parameters. A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a different strategy from the traditional route: constructing a deep ReLU network estimator that has a better empirical loss than the true function, where the difference between these two functions furnishes a lower bound. This step is related to the Huberization bias, yet more critically to the approximability of deep ReLU networks. As a result, we also contribute some new results on the approximation theory of deep ReLU neural networks. (Joint work with Yihong Gu and Wenxin Zhou.) (The standard Huber loss is recalled, for reference, after this entry.)
  • Bio: Dr. Jianqing Fan is a statistician, financial econometrician, and data scientist. He is the Frederick L. Moore ’18 Professor of Finance, Professor of Statistics, and Professor of Operations Research and Financial Engineering at Princeton University, where he chaired the department from 2012 to 2015. He is the winner of the 2000 COPSS Presidents’ Award, the Morningside Gold Medal for Applied Mathematics (2007), a Guggenheim Fellowship (2009), the Pao-Lu Hsu Prize (2013), the Guy Medal in Silver (2014), and the Noether Senior Scholar Award (2018). He was elected an Academician of Academia Sinica in 2012. Dr. Fan is a joint editor of the Journal of Business & Economic Statistics and an associate editor of Management Science (2018–), among others; he was the co-editor(-in-chief) of the Annals of Statistics (2004-2006) and an editor of Probability Theory and Related Fields (2003-2005), the Econometrics Journal (2007-2012), and the Journal of Econometrics (2012-2018; managing editor 2014-18), and he has served on the editorial boards of a number of other journals, including the Journal of the American Statistical Association (1996-2017), Econometrica (2010-2013), Annals of Statistics (1998-2003), Statistica Sinica (1996-2002), and Journal of Financial Econometrics (2009-2012). He is a past president of the Institute of Mathematical Statistics (2006-2009) and a past president of the International Chinese Statistical Association (2008-2010).
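  • For reference: the abstract refers to the Huber loss with a properly chosen robustification parameter. The display below gives the textbook Huber loss and the corresponding empirical risk over a ReLU network class; the notation is ours, and the adaptive calibration of τ analyzed in the talk (depending on sample size, smoothness, and moment parameters) is not reproduced here.

```latex
% Standard Huber loss with robustification parameter \tau (textbook form),
% and the associated empirical risk minimized over a class of ReLU networks.
\ell_\tau(u) =
\begin{cases}
  \tfrac{1}{2}\,u^2, & |u| \le \tau, \\[2pt]
  \tau |u| - \tfrac{1}{2}\tau^2, & |u| > \tau,
\end{cases}
\qquad
\widehat f \in \arg\min_{f \in \mathcal{F}_{\mathrm{ReLU}}}
\frac{1}{n}\sum_{i=1}^{n} \ell_\tau\bigl(y_i - f(x_i)\bigr).
```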
  • Speaker: Vikas Singh (Vilas Distinguished Achievement Professor, University of Wisconsin-Madison)
  • Title: Why Fairness Matters in Deep Learning, Computer Vision and Medical Image Analysis
  • Date/Time: Friday, April 8, 2022, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/96172874272?pwd=WnhRZXo0UTUrMlR5VjBsWnFNRCs0Zz09
  • Abstract: Machine learning algorithms underlie a broad range of modern systems that we depend on every day – from trying to connect to a customer service agent, to deciding what to watch on Netflix, to how a healthcare professional makes sense of our test results and history to inform the course of treatment. While algorithmic decision making continues to integrate with and (many would argue) benefit our lives, a number of recent high-profile news stories have revealed troubling blind spots, ranging from potentially discriminatory behavior in sentencing/parole recommendations made by automated systems to surveillance systems that show biases against specific races and skin colors. Ongoing research on the design of fair algorithms seeks to address or minimize some of these problems. In this presentation, I will describe our recent efforts focused on enforcing fairness criteria in modern deep learning systems in computer vision and brain imaging. Through some simple examples, we will discuss how the formulations – derived from mature techniques in numerical optimization – provide alternatives to so-called adversarial training, which is computationally intensive, often lacks statistical interpretation, and is difficult to implement in various settings. Then, we will cover some interesting but less well-studied use cases in scientific/biomedical research that will be direct beneficiaries of results emerging from fairness research. (A toy demographic-parity example follows this entry.)
  • Bio: Vikas is a Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison. His research group focuses on the design and analysis of algorithms for problems in computer vision, machine learning, and statistical image analysis, covering a range of applications including brain and cancer imaging. This work is generously supported by various federal agencies and industrial collaborators. He is a recipient of the NSF CAREER award. Vikas’ teaching and collaborative activities include teaching classes in computer vision, image analysis, and artificial intelligence, as well as collaborating with a number of industrial partners to enable real-world deployments of AI/machine learning technologies.
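  • Illustrative sketch: As a toy illustration of what a group-fairness criterion measures, the example below computes the demographic-parity gap (the difference in positive-prediction rates across a protected attribute) on simulated scores. It is a generic metric, not the constrained-optimization formulations described in the talk.

```python
# Minimal illustration of one group-fairness criterion (demographic parity):
# the gap between positive-prediction rates across a protected attribute.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
group = rng.integers(0, 2, n)                      # protected attribute (0 or 1)
score = rng.random(n) + 0.15 * group               # a biased model's scores
y_hat = (score > 0.5).astype(int)                  # thresholded predictions

rate_0 = y_hat[group == 0].mean()
rate_1 = y_hat[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
```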
  • Speaker: Ji Zhu (Susan A. Murphy Collegiate Professor of Statistics, University of Michigan, Ann Arbor)
  • Title: Fast Network Community Detection with Profile-Pseudo Likelihood Methods
  • Date/Time: Thursday, March 17, 2022, 3:50PM – 4:50PM
  • Zoom Link: https://zoom.us/j/94844263091
  • Abstract: The stochastic block model is one of the most studied network models for community detection. It is known that most algorithms proposed for fitting the stochastic block model likelihood function cannot scale to large-scale networks. One prominent work that overcomes this computational challenge is Amini et al. (2013), which proposed a fast pseudo-likelihood approach for fitting stochastic block models to large sparse networks. However, this approach does not have a convergence guarantee. In this talk, we present a novel likelihood-based approach that decouples row and column labels in the likelihood function, which enables fast alternating maximization; the new method is computationally efficient and has a provable convergence guarantee. We also show that the proposed method provides strongly consistent estimates of the communities in a stochastic block model. As demonstrated in simulation studies, the proposed method outperforms the pseudo-likelihood approach in terms of both estimation accuracy and computational efficiency, especially for large sparse networks. We further consider extensions of the proposed method to handle networks with degree heterogeneity and bipartite properties. This is joint work with Jiangzhou Wang, Jingfei Zhang, Binghui Liu, and Jianhua Guo. (A simple spectral-clustering baseline for the SBM is sketched after this entry.)
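  • Illustrative sketch: For readers new to the setting, the example below simulates a two-block stochastic block model and recovers communities with plain spectral clustering. This is only a standard baseline for the problem; it is not the profile-pseudo-likelihood method presented in the talk.

```python
# Minimal baseline for community detection in a stochastic block model (SBM):
# spectral clustering on the adjacency matrix. NOT the profile-pseudo-likelihood
# method of the talk; it only illustrates the problem setup.
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n, K = 300, 2
z = rng.integers(0, K, n)                             # true community labels
P = np.where(z[:, None] == z[None, :], 0.10, 0.02)    # within- vs. between-block edge prob.
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                           # symmetric adjacency, no self-loops

# Leading eigenvectors of A, then k-means on their rows.
vals, vecs = eigsh(A, k=K, which="LA")
labels = KMeans(n_clusters=K, n_init=10).fit_predict(vecs)

# Accuracy up to label switching (valid here because K = 2).
agree = max((labels == z).mean(), (labels != z).mean())
print(f"clustering accuracy: {agree:.2f}")
```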
  • Speaker: Gari Clifford (Professor of Biomedical Informatics and Biomedical Engineering, Emory University and Georgia Institute of Technology)
  • Title: Boosting the performance of a deep learner using realistic models – an application to cardiology and post-traumatic stress disorder
  • Date/Time: Friday, December 3, 2021, 12:00PM – 1:00PM
  • Zoom Link: https://zoom.us/j/98339758184?pwd=QlZFQUpzZFhHclBtaGJyS1N0cE5QZz09
  • Abstract: In this talk I will discuss the concept of leveraging large databases to improve training on smaller databases, with a particular focus on using realistic models, rather than synthetic data. Notably, as databases increase in size, the quality of data labels drops. Often, the data become noisier with rising levels of non-random missingness. Increasingly, transfer learning is being leveraged to mitigate these problems, allowing algorithms to tune on smaller (or rarer) populations while leveraging information from much larger datasets. I’ll present an emerging paradigm in which we insert an extensive model-generated database into the transfer learning process to help a deep learner explore a much larger and denser data distribution. Since a model allows the generation of realistic data beyond the boundaries of the real data, the model can help train the deep learner to extrapolate beyond the observable collection of samples. Using cardiac time series data, I’ll demonstrate that this technique provides a significant performance boost, and discuss some possible extensions and consequences. (A toy pre-train/fine-tune sketch follows this entry.)
  • Bio: Gari Clifford is a tenured Professor of Biomedical Informatics and Biomedical Engineering at Emory University and the Georgia Institute of Technology, and the Chair of the Department of Biomedical Informatics at Emory. His research team applies signal processing and machine learning to medicine to classify, track, and predict health and illness. His research focus areas include critical care, digital psychiatry, global health, mHealth, neuroinformatics, and perinatal health, particularly in LMIC settings. After training in theoretical physics, he transitioned to machine learning and engineering for his doctoral work at the University of Oxford in the 1990s. He subsequently joined MIT as a postdoctoral fellow and later a Principal Research Scientist, where he managed the creation of the MIMIC II database, the largest open-access critical care database in the world. He later returned to Oxford as an Associate Professor of Biomedical Engineering, where he helped found its Sleep & Circadian Neuroscience Institute and served as Director of the Centre for Doctoral Training in Healthcare Innovation at the Oxford Institute of Biomedical Engineering. Gari is a strong supporter of commercial translation, working closely with industry as an advisor to multiple companies, co-founding and serving as CTO of an MIT spin-out (MindChild Medical) since 2009, and co-founding and serving as CSO of Lifebell AI since 2020. Gari is a champion of open-access data and open-source software in medicine, particularly through his leadership of the PhysioNet/CinC Challenges and contributions to the PhysioNet Resource. He is committed to developing sustainable solutions to healthcare problems in resource-poor locations, with much of his work focused in Guatemala.
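  • Illustrative sketch: The example below mimics the idea in the abstract at toy scale: pre-train a classifier on a large set of model-generated signals, then fine-tune it on a small, noisier “real” set. The signal generator, network, and hyperparameters are stand-in assumptions, not the cardiac models or deep learners used in the talk.

```python
# Minimal sketch: pre-train on a large corpus of model-generated (simulated)
# signals, then fine-tune on a small labeled "real" set. The simulator and
# network below are toy stand-ins for the models discussed in the talk.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def simulate(n, noisy=False):
    """Toy generator of labeled 1-D 'beat' segments: class 1 has an extra bump."""
    t = np.linspace(0, 1, 64)
    y = rng.integers(0, 2, n)
    X = np.exp(-((t - 0.3) ** 2) / 0.01)[None, :] * np.ones((n, 1))   # baseline bump
    X += y[:, None] * np.exp(-((t - 0.7) ** 2) / 0.01)                # class-1 bump
    X += (0.3 if noisy else 0.1) * rng.standard_normal((n, 64))
    return X, y

X_sim, y_sim = simulate(5000)               # large model-generated database
X_real, y_real = simulate(100, noisy=True)  # small, noisier "real" dataset
X_test, y_test = simulate(1000, noisy=True)

# Pre-train on simulated data, then continue training (fine-tune) on real data.
net = MLPClassifier(hidden_layer_sizes=(32,), warm_start=True, max_iter=200)
net.fit(X_sim, y_sim)
net.set_params(max_iter=50, learning_rate_init=1e-4)
net.fit(X_real, y_real)
print("test accuracy after pre-train + fine-tune:", net.score(X_test, y_test))
```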
  • Speaker: Yiran Chen (Professor of Electrical and Computer Engineering, Duke University)
  • Title: Scalable, Heterogeneity-Aware and Privacy-Enhancing Federated Learning
  • Date/Time: Friday, November 12, 2021, 10AM – 11AM
  • Zoom Link: https://zoom.us/j/98339758184?pwd=QlZFQUpzZFhHclBtaGJyS1N0cE5QZz09
  • Abstract: Federated learning has become a popular distributed machine learning paradigm for developing on-device AI applications. However, the data residing across devices is intrinsically statistically heterogeneous (i.e., following non-IID data distributions), and mobile devices usually have limited communication bandwidth to transfer local updates. Such statistical heterogeneity and communication limitations are two major bottlenecks that hinder the application of federated learning. In addition, recent works have demonstrated that sharing model updates makes federated learning vulnerable to inference attacks. In this talk, we will present our recent work on federated learning frameworks that address the scalability and heterogeneity issues simultaneously. In addition, we will also reveal the essential cause of privacy leakage in federated learning and provide a corresponding privacy-enhancing defense mechanism. (A minimal FedAvg sketch follows this entry.)
  • Bio: Yiran Chen received his B.S. (1998) and M.S. (2001) from Tsinghua University and his Ph.D. (2005) from Purdue University. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor and was promoted to Associate Professor with tenure in 2014, holding the Bicentennial Alumni Faculty Fellowship. He is now a Professor in the Department of Electrical and Computer Engineering at Duke University, serving as the director of the NSF AI Institute for Edge Computing Leveraging Next-Generation Networks (Athena) and the NSF Industry–University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC), and the co-director of the Duke Center for Computational Evolutionary Intelligence (CEI). His group focuses on research in new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. Dr. Chen has published 1 book and about 500 technical publications and has been granted 96 US patents. He has served as associate editor of a dozen international academic transactions/journals and has served on the technical and organization committees of more than 60 international conferences. He is now serving as the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He has received seven best paper awards, one best poster award, and fifteen best paper nominations from international conferences and workshops. He has received many professional awards and is a Distinguished Lecturer of IEEE CEDA (2018-2021). He is a Fellow of the ACM and IEEE and serves as the chair of ACM SIGDA.
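  • Illustrative sketch: Below is a minimal federated averaging (FedAvg) loop on simulated, statistically heterogeneous (non-IID) clients, included to make the setup in the abstract concrete. It is the generic baseline algorithm, not the scalability-, heterogeneity-, or privacy-enhancing frameworks presented in the talk.

```python
# Minimal sketch of federated averaging (FedAvg) on non-IID clients using a
# toy logistic-regression model. Generic baseline only.
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one client's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Simulate non-IID clients: each client sees a skewed label distribution.
d, clients = 20, 5
w_true = rng.standard_normal(d)
data = []
for c in range(clients):
    X = rng.standard_normal((100, d))
    y = (X @ w_true + 0.5 * rng.standard_normal(100) > 0).astype(float)
    keep = (y == c % 2) | (rng.random(100) < 0.2)   # mostly one class per client
    data.append((X[keep], y[keep]))

# FedAvg rounds: broadcast global model, run local updates, average by sample count.
w_global = np.zeros(d)
for rnd in range(20):
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in data]
    sizes = np.array([len(y) for _, y in data], dtype=float)
    w_global = np.average(local_models, axis=0, weights=sizes)

acc = np.mean([((X @ w_global > 0) == y).mean() for X, y in data])
print(f"average local accuracy after FedAvg: {acc:.2f}")
```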
  • Speaker: Jun Liu (Professor of Statistics, Harvard University)
  • Title: Data Splitting for Graphical Model Selection With FDR Control
  • Date/Time: Thursday, October 21, 2021, 3:50PM – 4:50PM
  • Zoom Link: https://zoom.us/j/99986325350?pwd=QUVqdldrMm1OMVNaNzJEai9jZkVTUT09
  • Abstract: Simultaneously finding multiple influential variables and controlling the false discovery rate (FDR) for statistical and machine learning models is a problem of renewed interest. A classical statistical idea is to introduce perturbations and examine their impacts on a statistical procedure. We here explore the use of data splitting (DS) for controlling the FDR in learning linear, generalized linear, and graphical models. Our proposed DS procedure simply splits the data into two halves at random and computes a statistic reflecting the consistency of the two sets of parameter estimates (e.g., regression coefficients). FDR control can be achieved by taking advantage of such a statistic, which possesses the property that, for any null feature, its sampling distribution is symmetric about 0. Furthermore, by repeated sample splitting, we propose Multiple Data Splitting (MDS) to stabilize the selection result and boost the power. Interestingly, MDS not only helps overcome the power loss caused by DS while keeping the FDR under control, but also results in a lower variance of the estimated FDR compared with all other considered methods. DS and MDS are conceptually straightforward, easy to implement algorithmically, and computationally efficient. Simulation results as well as a real data application show that both DS and MDS control the FDR well, and MDS is often the most powerful method among all in consideration, especially when the signals are weak and correlations or partial correlations among the features are high. Our preliminary tests on nonlinear models such as generalized linear models and neural networks also show promise. This presentation is based on joint work with Chenguang Dai, Buyu Lin, and Xin Xing. (A minimal data-splitting sketch follows this entry.)
  • Bio: Dr. Jun Liu is a Professor of Statistics at Harvard University, with a joint appointment in the Harvard School of Public Health. Dr. Liu received his B.S. degree in mathematics in 1985 from Peking University and his Ph.D. in statistics in 1991 from the University of Chicago. He held Assistant, Associate, and full Professor positions at Stanford University from 1994 to 2003. Dr. Liu won the NSF CAREER Award and the Stanford Terman Fellowship in 1995, and the Mitchell Award for the best statistics application paper in 2000. In 2002, he received the prestigious COPSS Presidents’ Award. He was a Medallion Lecturer of the Institute of Mathematical Statistics (IMS), a Bernoulli Lecturer in 2004, and a Kuwait Lecturer at Cambridge University in 2008. He was elected a Fellow of the IMS in 2004 and a Fellow of the American Statistical Association in 2005. He has served on numerous grant review panels of the NSF and NIH and on the editorial boards of numerous leading statistical journals. He was a co-editor of the Journal of the American Statistical Association. Dr. Liu and his collaborators introduced the statistical missing data formulation and Gibbs sampling strategies for biological sequence analysis in the early 1990s. The resulting algorithms for protein sequence analysis, gene regulation analysis, and genetic studies have been adopted by many research groups and have become standard tools for computational biologists. Dr. Liu has made fundamental contributions to statistical computing and Bayesian modeling. He pioneered sequential Monte Carlo (SMC) methods and invented several novel Markov chain Monte Carlo (MCMC) techniques. His studies of SMC and MCMC algorithms have had a broad impact on both theoretical understanding and practical applications. Dr. Liu has also pioneered novel Bayesian modeling techniques for discovering subtle interactions and nonlinear relationships in high-dimensional data. Dr. Liu has published one research monograph and more than 200 research articles in leading scientific journals and is one of the ISI Highly Cited mathematicians.
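  • Illustrative sketch: The example below walks through the data-splitting (DS) recipe from the abstract on a simulated sparse linear model: split the data, fit the same sparse regression on each half, form a statistic that is roughly symmetric about 0 for null features, and pick a threshold whose estimated FDR is below the target level. The specific mirror statistic and threshold rule are one common instantiation, assumed here for illustration rather than taken verbatim from the talk.

```python
# Minimal sketch of FDR control via data splitting (DS) for a sparse linear model.
# Illustrative only: the mirror statistic and threshold rule below are one common
# choice, not necessarily the exact construction presented in the talk.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, q = 600, 200, 10, 0.1          # samples, features, true signals, target FDR
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + rng.standard_normal(n)

# 1) Split the data into two halves at random.
idx = rng.permutation(n)
h1, h2 = idx[: n // 2], idx[n // 2 :]

# 2) Fit the same sparse regression on each half.
b1 = Lasso(alpha=0.05).fit(X[h1], y[h1]).coef_
b2 = Lasso(alpha=0.05).fit(X[h2], y[h2]).coef_

# 3) Mirror statistic: large and positive when the two estimates agree,
#    roughly symmetric about 0 for null features.
M = np.sign(b1 * b2) * (np.abs(b1) + np.abs(b2))

# 4) Choose the smallest threshold whose estimated FDR is below q.
selected = np.array([], dtype=int)
for tau in np.sort(np.abs(M[M != 0])):
    fdr_hat = (M <= -tau).sum() / max((M >= tau).sum(), 1)
    if fdr_hat <= q:
        selected = np.where(M >= tau)[0]
        break
print("selected features:", selected)
```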
  • Speaker: Xia Hu (Associate Professor of Computer Science, Rice University)
  • Title: Towards Effective Interpretation of Deep Neural Networks: Algorithms and Applications
  • Date/Time: Friday, October 15, 2021, 9:30AM – 10:30AM
  • Zoom Link: https://zoom.us/j/97005929961?pwd=bXN6MVo0bmlhN3BMRDE4SFFqYitjUT09
  • Abstract: While deep neural networks (DNNs) have achieved superior performance in many downstream applications, they are often regarded as black boxes and criticized for their lack of interpretability, since these models cannot provide meaningful explanations of how a certain prediction is made. Without explanations to enhance the transparency of DNN models, it becomes difficult to build trust among end users. In this talk, I will present a systematic framework, from modeling and application perspectives, for generating DNN interpretations, aimed at tackling two main technical challenges in interpretable machine learning: faithfulness and understandability. Specifically, to tackle the faithfulness challenge of post-hoc interpretation, I will introduce how to make use of feature inversion and additive decomposition techniques to explain predictions made by two classical DNN architectures, i.e., convolutional neural networks and recurrent neural networks. In addition, to develop DNNs that generate interpretations more understandable to human beings, I will present a novel training method that regularizes the interpretations of a DNN with domain knowledge. (A generic gradient-saliency example follows this entry.)
  • Bio: Dr. Xia “Ben” Hu is an Associate Professor at Rice University in the Department of Computer Science. Dr. Hu has published over 100 papers in major academic venues, including NeurIPS, ICLR, KDD, WWW, IJCAI, and AAAI. An open-source package developed by his group, AutoKeras, has become the most used automated deep learning system on GitHub (with over 8,000 stars and 1,000 forks). His work on deep collaborative filtering, anomaly detection, and knowledge graphs has been included in the TensorFlow package, Apple production system, and Bing production system, respectively. His papers have received several Best Paper (Candidate) awards from venues such as WWW, WSDM, and ICDM. He is a recipient of the NSF CAREER Award. His work has been cited more than 10,000 times, with an h-index of 44. He was the conference General Co-Chair for WSDM 2020.
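  • Illustrative sketch: As a simple example of a post-hoc interpretation, the code below computes a vanilla gradient-saliency map for one prediction of a small PyTorch network (i.e., which input features the predicted score is locally most sensitive to). It is a generic baseline, not the feature-inversion or additive-decomposition methods presented in the talk.

```python
# Minimal sketch of a post-hoc attribution: vanilla gradient saliency for one
# prediction of a small network. Generic baseline only.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(1, 10, requires_grad=True)   # one input example
logits = net(x)
pred_class = logits.argmax(dim=1).item()

# Gradient of the predicted-class score w.r.t. the input: large-magnitude
# entries indicate features the prediction is locally most sensitive to.
score = logits[0, pred_class]
score.backward()
saliency = x.grad.abs().squeeze(0)
print("per-feature saliency:", saliency)
```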
  • Speaker: Christos Davatzikos (Wallace T. Miller Sr. Professor of Radiology, University of Pennsylvania)
  • Title: Machine Learning in Neuroimaging: applications to brain aging, Alzheimer’s Disease, and Schizophrenia
  • Date/Time: Friday, September 24, 2021, 10:00AM – 11:00AM
  • Zoom Link: https://zoom.us/j/94740164574?pwd=TDQzcW5VUndieWtwY2MyT1FrcVpHdz09
  • Abstract: Machine learning has deeply penetrated the neuroimaging field in the past 15 years, by providing a means to construct imaging signatures of normal and pathologic brain states on an individual person basis. In this talk, I will discuss examples from our laboratory’s work on imaging signatures of brain aging and early stages of neurodegenerative diseases, brain development and neuropsychiatric disorders. I will discuss some challenges, such as disease heterogeneity and integration of data from multiple sites in order to achieve sample sizes required by deep learning studies. I will discuss the integration of these methods and results in the context of a dimensional neuroimaging system and its contribution to integrated, precision diagnostics.
  • Bio: Christos Davatzikos is the Wallace T. Miller Sr. Professor of Radiology at the University of Pennsylvania and Director of the Center for Biomedical Image Computing and Analytics. He holds a secondary appointment in Electrical and Systems Engineering at Penn, as well as in the Bioengineering and Applied Mathematics graduate groups. He obtained his undergraduate degree from the National Technical University of Athens, Greece, in 1989 and his Ph.D. degree from Johns Hopkins in 1994, on a Fulbright scholarship. He then joined the Johns Hopkins faculty in Radiology and later in Computer Science, where he founded and directed the Neuroimaging Laboratory. In 2002 he moved to Penn, where he founded and directed the Section of Biomedical Image Analysis. Dr. Davatzikos’ interests are in medical image analysis. He oversees a diverse research program ranging from basic problems of imaging pattern analysis and machine learning to a variety of clinical studies of aging, Alzheimer’s Disease, schizophrenia, brain cancer, and brain development. Dr. Davatzikos has served on a variety of scientific journal editorial boards and grant review committees. He is an IEEE Fellow, a Fellow of the American Institute for Medical and Biological Engineering, and a member of the Council of Distinguished Investigators of the US Academy of Radiology and Biomedical Imaging Research.
Organizers
  • Tianming Liu, Distinguished Research Professor, Department of Computer Science, UGA
  • Ping Ma, Distinguished Research Professor, Department of Statistics, UGA
  • WenZhan Song, Georgia Power Mickey A. Brown Professor, College of Engineering, UGA
  • Changying “Charlie” Li, Professor, College of Engineering, UGA
  • Haijian Sun, Assistant Professor, College of Engineering, UGA