Colloquium talks will take place on Fridays, 1:00pm - 2:00pm. Talks will be held in a hybrid in-person/virtual format. The in-person portion will be held in the Katharine Reed Cudahy Building, Room 401, on the Marquette University campus. Please address inquiries and suggestions to Dr. Ongie at gregory.ongie@marquette.edu. Those affiliated with Marquette University can attend virtually by joining the "MSSC Department Colloquium" team on Microsoft Teams.

**September 10 - Shane Hesprich (MU Research Computing Services)**

High Performance Computing at Marquette

High Performance Computing (HPC) is a powerful tool used in almost every field, from economics and engineering to healthcare and business information. Raj, Marquette's freely available, centralized HPC resource, gives students and faculty the ability to use HPC for research and academic purposes. Understanding how to access and properly utilize this resource can be of great benefit to your career here at Marquette.

**October 15 - Chase Sakitis (Marquette University)**

A Formal Bayesian Approach to SENSE Image Reconstruction

In fMRI, capturing cognitive temporal dynamics depends on the rate at which volume brain images are acquired. The sampling time for an array of spatial frequencies to reconstruct an image is the limiting factor. Multi-coil SENSE image reconstruction is a parallel imaging technique that has greatly reduced image scan time. In SENSE image reconstruction, complex-valued coil sensitivities are estimated once from a priori calibration images and used to form a “known” design matrix to reconstruct every image. However, the SENSE technique is highly inaccurate when the sensitivity design matrix is not positive definite. Here, we propose a formal Bayesian approach in which prior distributions for the unaliased images, coil sensitivities, and variances/covariances are assessed from the a priori calibration image information. Images, coil sensitivities, and variances/covariances are estimated a posteriori jointly via the Iterated Conditional Modes maximization algorithm and marginally via MCMC using the Gibbs sampling algorithm. Since the posterior marginal distributions are available, hypothesis testing is possible. This Bayesian SENSE (BSENSE) model is applied to realistically simulated fMRI data; it accurately reconstructs a single slice image as well as a series of slice images without aliasing artifacts and was used to produce magnitude-only task activation maps.
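For readers unfamiliar with SENSE, the classical reconstruction the talk builds on can be pictured as a per-voxel least-squares "unfolding" of the aliasing system a = Sx. The sketch below uses entirely made-up complex sensitivities and simulated data (it is an illustration of the linear model only, not the Bayesian BSENSE method of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SENSE setting: n_coils coil measurements of n_folds aliased voxels.
n_coils, n_folds = 8, 2
# Made-up complex coil sensitivities (in practice estimated from calibration images)
S = rng.normal(size=(n_coils, n_folds)) + 1j * rng.normal(size=(n_coils, n_folds))
x_true = np.array([1.5 + 0.5j, -0.7 + 1.2j])      # true unaliased voxel values
noise = 0.01 * (rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils))
a = S @ x_true + noise                            # aliased coil observations

# Classical SENSE unfolding: least-squares solve of a = S x for each voxel group
x_hat, *_ = np.linalg.lstsq(S, a, rcond=None)
```

When S is ill-conditioned this solve becomes unstable, which is one motivation for replacing the "known" design matrix with a full posterior over images and sensitivities.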

**October 20 (Wednesday, 1:00pm) - Danny Smyl (University of Sheffield)**

Some recent advances in inverse problems applied to NDE and SHM

The field of inverse problems, the mathematics of estimating and understanding causalities from effects (data), has taken massive strides in the past 20 years. Since the advent of high-performance, probabilistic, and learned computation, inversion-based applications in nondestructive evaluation (NDE) and structural health monitoring (SHM) have become increasingly pervasive. In this seminar, we highlight some key contemporary advances in inverse problems applied to NDE and SHM, presenting recent developments in learned (direct) inversion, multi-state reconstruction, sensor optimization, highly dynamical spatial loading prediction, and finite element model error prediction/compensation.

**October 29 - John Lipor (Portland State University)**

Improving K-Subspaces via Coherence Pursuit

Subspace clustering is a powerful generalization of clustering for high-dimensional data analysis, where low-rank cluster structure is leveraged for accurate inference. K-Subspaces (KSS), an alternating algorithm that mirrors K-means, is a classical approach for clustering with this model. Like K-means, KSS is highly sensitive to initialization, yet KSS has two major handicaps beyond this issue. First, unlike K-means, the KSS objective is NP-hard to approximate within any finite factor for large enough subspace rank. Second, it is known that the subspace estimation step is faulty when an estimated cluster has points from multiple subspaces. In this paper we demonstrate both of these additional drawbacks, provide a proof for the former, and offer a solution to the latter through the use of a robust subspace recovery (RSR) method known as Coherence Pursuit (CoP). While many RSR methods have been developed in recent years, few can handle the case where the outliers are themselves low rank. We prove that CoP can handle low-rank outliers. This and its low computational complexity make it ideal to incorporate into the subspace estimation step of KSS. We demonstrate on synthetic data that CoP successfully rejects low-rank outliers and show that combining Coherence Pursuit with K-Subspaces yields state-of-the-art clustering performance on canonical benchmark datasets.
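As a concrete picture of the alternating scheme the abstract describes, here is a minimal sketch of plain K-Subspaces with random restarts (the initialization sensitivity mentioned above is why restarts help). This is a toy version only; the Coherence Pursuit modification from the talk is not included:

```python
import numpy as np

def k_subspaces(X, K, dim, n_iter=30, seed=0):
    """One run of K-Subspaces (KSS) on the rows of X; returns labels, bases, cost."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    # initialize each basis from `dim` randomly chosen data points
    bases = [np.linalg.qr(X[rng.choice(n, size=dim, replace=False)].T)[0][:, :dim]
             for _ in range(K)]
    for _ in range(n_iter):
        # assignment step: each point goes to the subspace with the smallest
        # projection residual ||x - U U^T x||
        resid = np.stack([np.linalg.norm(X - (X @ U) @ U.T, axis=1) for U in bases])
        labels = resid.argmin(axis=0)
        # estimation step: re-fit each basis by PCA of its assigned points
        for k in range(K):
            pts = X[labels == k]
            if pts.shape[0] >= dim:
                U, _, _ = np.linalg.svd(pts.T, full_matrices=False)
                bases[k] = U[:, :dim]
    cost = float((resid.min(axis=0) ** 2).sum())
    return labels, bases, cost

def kss_best_of(X, K, dim, n_restarts=20):
    """KSS is sensitive to initialization, so keep the best of several runs."""
    runs = [k_subspaces(X, K, dim, seed=s) for s in range(n_restarts)]
    return min(runs, key=lambda r: r[2])
```

The talk's contribution replaces the SVD in the estimation step with a robust subspace recovery method (Coherence Pursuit), so that points from other subspaces do not corrupt the fitted basis.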

**November 12 - Jessi Cisewski-Kehe (UW Madison)**

Topological Data Analysis That's Out of This World

Data exhibiting complicated spatial structures are common in many areas of science (e.g., cosmology, biology), but can be difficult to analyze. Persistent homology is an approach within the area of Topological Data Analysis (TDA) that offers a framework to represent, visualize, and interpret complex data by extracting topological features which may be used to infer properties of the underlying structures. For example, TDA is a beneficial technique for analyzing intricate and spatially complex web-like data such as fibrin or the large-scale structure (LSS) of the Universe. LSS is known as the Cosmic Web due to the spatial distribution of matter resembling a 3D web. The accepted cosmological model presumes cold dark matter, but discriminating LSS under varying cosmological assumptions is of interest. In order to understand the physics of the Universe, theoretical and computational cosmologists develop large-scale simulations that allow for visualizing and analyzing the LSS under varying physical assumptions. Each object in the 3D data set can represent structures such as galaxies, clusters of galaxies, or dark matter haloes, and topological summaries ("persistence diagrams") can be obtained for these simulated data that summarize the different ordered holes in the data (e.g., connected components, loops, voids). The topological summaries are interesting and informative descriptors of the Universe on their own, but hypothesis tests using the topological summaries provide a way to make more rigorous comparisons of LSS under different theoretical models. We present several possible test statistics for two-sample hypothesis tests using the topological summaries, and we carry out a simulation study investigating the performance of the proposed test statistics on cosmological simulation data, with the goal of distinguishing LSS under cold dark matter from LSS under a model that assumes warm dark matter.
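The simplest of the topological summaries mentioned above, connected components (dimension-0 persistence), can be computed for a point cloud with nothing more than a minimum spanning tree: under the Vietoris-Rips filtration every point is "born" at scale 0, and components "die" (merge) at the MST edge lengths. A toy sketch:

```python
import numpy as np

def h0_persistence(points):
    """Death times of the dimension-0 persistence diagram of a point cloud
    under the Vietoris-Rips filtration: components merge exactly at the edge
    lengths of a Euclidean minimum spanning tree (computed here via Prim's
    algorithm)."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()            # cheapest connection of each point to the tree
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf    # points already in the tree are ineligible
        j = int(np.argmin(best))
        deaths.append(float(best[j]))   # scale at which a component merges
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return sorted(deaths, reverse=True)  # long-lived features first
```

Long-lived deaths indicate well-separated clusters; the loops and voids used in the cosmological analysis require higher-dimensional persistence computations beyond this toy example.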

**November 19 - Robert Krafty (Emory University)**

Interpretable PCA for Multilevel Multivariate Functional Data

Many studies collect functional data from multiple subjects that have both multilevel and multivariate structures. An example of such data comes from popular neuroscience experiments where participants' brain activity is recorded using modalities such as EEG and summarized as power within multiple time-varying frequency bands within multiple electrodes, or brain regions. Summarizing the joint variation across multiple frequency bands for both whole-brain variability between subjects, as well as location-variation within subjects, can help to explain neural reactions to stimuli. This article introduces a novel approach to conducting interpretable principal components analysis on multilevel multivariate functional data that decomposes total variation into subject-level and replicate-within-subject-level (i.e. electrode-level) variation, and provides interpretable components that can be both sparse among variates (e.g. frequency bands) and have localized support over time within each frequency band. Smoothness is achieved through a roughness penalty, while sparsity and localization of components are achieved by solving an innovative rank-one based convex optimization problem with block Frobenius and matrix L1-norm based penalties. The method is used to analyze data from a study to better understand reactions to emotional information in individuals with histories of trauma and the symptom of dissociation, revealing new neurophysiological insights into how subject- and electrode-level brain activity are associated with these phenomena.
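The core computational idea, a rank-one penalized decomposition that zeroes out loadings, can be illustrated with a much simpler relative: rank-one sparse PCA via alternating power iterations with soft-thresholding. The sketch below is a simplified stand-in (a plain L1 penalty on a single loading vector), not the paper's block-Frobenius and matrix-L1 penalized convex problem:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_rank_one(X, lam, n_iter=50):
    """Rank-one PCA with an L1 penalty on the loading vector v, solved by
    alternating (penalized) power iterations."""
    # warm start from the ordinary leading left singular vector
    u = np.linalg.svd(X, full_matrices=False)[0][:, 0]
    v = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = soft_threshold(X.T @ u, lam)   # sparsify the loadings
        nv = np.linalg.norm(v)
        if nv == 0.0:                      # penalty eliminated every loading
            break
        v /= nv
        u = X @ v
        u /= np.linalg.norm(u)
    return u, v
```

In the multilevel functional setting, the analogous penalties act on whole variates (frequency bands) and on local time supports, rather than on individual entries as here.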

**December 3 - Luke McGuire (University of Arizona)**

Post-wildfire debris flow hazards

Fire temporarily alters soil and vegetation properties, promoting increases in runoff and erosion that can dramatically increase the likelihood of destructive flash floods and debris flows. Debris flows, or fast-moving landslides that consist of a mixture of water, mud, and rock, initiate after fires when surface water runoff rapidly erodes sediment on steep slopes. Due to the complex interactions between runoff generation, soil erosion, and post-fire debris-flow initiation, the study of post-fire debris-flow hazards necessitates an approach that couples these processes within a common modeling framework. Models used to simulate these processes, however, often contain a number of poorly constrained parameters, particularly in post-fire settings where there is limited time to collect data and where parameters related to soil and vegetation properties will change over time as the landscape recovers. Here, we describe physics-based models designed to simulate runoff, erosion, and debris flow processes in burned areas as well as how these models can inform reduced-complexity models used to facilitate rapid hazard assessments. We highlight existing gaps in our ability to assess post-fire debris-flow hazards and motivate the need to expand our ability to use numerical modeling to support post-fire hazard assessment and mitigation efforts.

**December 8 (Wednesday, 2:15pm) - Md. Fitrat Hossain (Marquette University)**

Personalized mHealth Monitoring System for Veterans

The word “crisis” refers to an event or events that may lead to a dangerous and unstable situation, one that may adversely affect personal, social, or community life. It is used to represent negative changes in political, social, economic, or environmental affairs. A mental health crisis is an important social issue, reflected in behavior, feelings, and actions that can be harmful to individuals and the people around them. Because of the wars of the last decade, mental health crises have severely affected United States veterans. To take the steps necessary to avoid the greater damage of a crisis, it is important to predict its different levels; however, because of the nature of crisis, these levels are difficult to understand and quantify. This research focuses on defining and predicting different levels of mental health crisis in Milwaukee-based veterans suffering from PTSD in an mHealth setting. As part of this work, long-term crisis (a severe stage) was defined and validated from the PCL-5 score using decision trees and statistical tests. To create a mobile-based alert system, acute crisis (an intermediately severe stage) has been defined using a cognitive walkthrough and a decision tree based on ecological momentary assessment data.

**December 10 - Adikorley (Marquette University)**

Multivariate Functional Time Series Forecasting: Multivariate Functional Singular Spectrum Analysis approaches applied to "images and curves/remote sensing"

**Sunil Mathew (Marquette University)**

Model interpretability in terms of dropout in Neural Networks using Bayesian learning

Neural networks tend to have many layers, which enables them to learn complicated patterns via weights that describe the connections between nodes of adjacent layers. A large number of nodes, however, often causes overfitting and poor generalization. A probabilistic framework enables explainability and better insight into what a node in a model learns. A Bayesian approach can be used to determine connection dropout in each layer, which enables better model interpretability and combats overfitting.
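One standard way to make the dropout-Bayesian connection concrete is Monte Carlo dropout (Gal and Ghahramani), in which dropout stays active at prediction time and the spread of repeated stochastic forward passes serves as a predictive uncertainty. The numpy sketch below uses a tiny network with made-up weights, purely as an illustration of the idea rather than the speaker's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed two-layer network (weights are made up for illustration)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5, rng=rng):
    """One stochastic forward pass with inverted dropout on the hidden layer."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop    # Bernoulli keep-mask
    h = h * mask / (1.0 - p_drop)          # rescale so expectations match
    return h @ W2

# Monte Carlo dropout: sample many stochastic passes for the same input;
# their mean is the prediction and their spread is an uncertainty estimate.
x = rng.normal(size=(1, 4))
samples = np.array([forward(x) for _ in range(200)]).ravel()
mean, std = samples.mean(), samples.std()
```

Nodes whose dropout barely changes the output contribute little to the prediction, which is one route to the per-node interpretability the abstract describes.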