SIAM Undergraduate Research Online, Volume 9

A Coordinate Descent Method for Robust Matrix Factorization and Applications

Published electronically January 15, 2016
DOI: 10.1137/15S014472

Author: Spencer Sheen (Woodbridge High School, Irvine, CA)
Sponsor: Hongkai Zhao (University of California at Irvine)

Abstract: Matrix factorization methods are widely used for extracting latent factors for low rank matrix completion and rating prediction problems arising in recommender systems of on-line retailers. Most of the existing models are based on L2 fidelity (quadratic functions of factorization error). In this work, a coordinate descent (CD) method is developed for matrix factorization under L1 fidelity, so that the related minimization is done one variable at a time and the factorization error is sparsely distributed. In low rank random matrix completion and rating prediction on the MovieLens-100k dataset, the CDL1 method shows remarkable stability and accuracy under gross corruption of the training (observation) data, while the L2 fidelity based methods rapidly deteriorate. A closed form analytical solution is found for the one-dimensional L1-fidelity sub-problem and is used as the building block of the CDL1 algorithm, whose convergence is analyzed. A connection is made with the well-known convex method, robust principal component analysis (RPCA). A comparison with RPCA on recovering low rank Gaussian matrices under sparse and independent Gaussian noise shows that CDL1 maintains accuracy at much lower sampling ratios (from far fewer observed entries) than RPCA.
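To make the one-dimensional subproblem concrete: when all but one entry of the factor matrices is held fixed, minimizing the L1 factorization error over that entry is a weighted-median problem, which has a closed form. The sketch below is a minimal illustration of that building block under simplifying assumptions (dense NumPy arrays, a Boolean observation mask, hypothetical function names); it is not the authors' implementation, and the columns of V would be updated by an analogous sweep.

```python
import numpy as np

def weighted_median(values, weights):
    """Minimizer of sum_i w_i * |x - v_i| over x (lower weighted median)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    idx = np.searchsorted(np.cumsum(w), 0.5 * w.sum())
    return v[min(idx, len(v) - 1)]

def cd_l1_sweep_U(X, mask, U, V):
    """One coordinate-descent sweep over U for min sum over observed (i,j) of |X_ij - (U V^T)_ij|.
    Each scalar update is the closed-form solution of a 1-D L1 subproblem."""
    m, r = U.shape
    for i in range(m):
        obs = np.where(mask[i])[0]                 # observed columns in row i
        if obs.size == 0:
            continue
        for k in range(r):
            # residual with the k-th rank-one contribution removed
            resid = X[i, obs] - U[i] @ V[obs].T + U[i, k] * V[obs, k]
            w = np.abs(V[obs, k])
            nz = w > 1e-12
            if nz.any():
                U[i, k] = weighted_median(resid[nz] / V[obs, k][nz], w[nz])
    return U
```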

A Greens Function Numerical Method for Solving Parabolic Partial Differential Equations

Published electronically January 21, 2016
DOI: 10.1137/14S013664

Author: Luke Edwards (The University of Arizona)
Sponsor: Anna Mazzucato (The Pennsylvania State University)

Abstract: This article describes the derivation and implementation of a numerical method to solve constant-coefficient, parabolic partial differential equations in two space dimensions on rectangular domains. The method is based on a formula for the Green's function of the problem, obtained via reflections at the boundary of the domain from the corresponding formula for the fundamental solution in the whole plane. It is inspired by a related method for variable-coefficient equations in the whole space introduced by Constantinescu, Costanzino, Mazzucato, and Nistor in J. Math. Phys. 51, 103502 (2010). The benchmark case of the two-dimensional heat equation is considered. We compare the Green's function method with a finite-difference scheme, more precisely the alternating direction implicit (ADI) method due to Peaceman and Rachford. Our method yields better rates of convergence to the exact solution.
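To illustrate the reflection idea on the benchmark problem, the Green's function of the heat equation on a rectangle with homogeneous Dirichlet boundary conditions can be built from the free-space fundamental solution by the method of images, one coordinate direction at a time; the two-dimensional kernel is then the product of the one-dimensional ones. The following sketch (NumPy, truncated image sums, unit diffusivity by default) is only schematic and does not reproduce the authors' implementation.

```python
import numpy as np

def heat_kernel_1d(x, xi, t, k=1.0):
    """Free-space fundamental solution of u_t = k u_xx."""
    return np.exp(-(x - xi) ** 2 / (4 * k * t)) / np.sqrt(4 * np.pi * k * t)

def green_dirichlet_interval(x, xi, t, L, k=1.0, n_images=5):
    """Green's function on [0, L] with Dirichlet boundaries via reflections (method of images)."""
    g = 0.0
    for n in range(-n_images, n_images + 1):
        g += heat_kernel_1d(x, 2 * n * L + xi, t, k) - heat_kernel_1d(x, 2 * n * L - xi, t, k)
    return g

def green_dirichlet_rectangle(x, y, xi, eta, t, Lx, Ly, k=1.0, n_images=5):
    """On a rectangle the Green's function factors into a product of 1-D image sums."""
    return (green_dirichlet_interval(x, xi, t, Lx, k, n_images)
            * green_dirichlet_interval(y, eta, t, Ly, k, n_images))
```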

Realistic Spiking Neuron Statistics in a Population are Described by a Single Parametric Distribution

Published electronically February 8, 2016
DOI: 10.1137/15S014289

Author: Lauren Crow (Virginia Commonwealth University)
Sponsor: Cheng Ly (Virginia Commonwealth University)

Abstract: The spiking activity of neurons throughout the cortex is random and complicated. This complicated activity requires theoretical formulations in order to understand the underlying principles of neural processing. A key aspect of theoretical investigations is characterizing the probability distribution of spiking activity. This study aims to better understand the statistics of the time between spikes, or interspike interval, in both real data and a spiking model with many time scales. Exploration of the interspike intervals of neural network activity can provide a better understanding of neural responses to different stimuli. We consider different parametric distribution fitting techniques to characterize the random spike times of a population of neurons in the visual cortex of a mammal. Five different probability distribution functions were considered, including three mixture models, and their goodness of fit was determined through two criteria: maximum likelihood and the Akaike Information Criterion (AIC). Although the neurons are largely heterogeneous, both criteria indicated that a single distribution, although different for each criterion, was the best fit for all of the neurons in the data set. The Gamma-Gamma mixture distribution was the best according to maximum likelihood, and the Exponential distribution was the best according to AIC. The statistical methodology applied to a burst model yielded the same results, and the AIC formula was further investigated to better understand its consistent selection of the same parametric distribution. We find that complicated neural spiking activity can sometimes be described by a single parametric distribution, which is hopefully comforting for theorists.
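As a small illustration of the model-selection step, interspike intervals can be fit by maximum likelihood and scored with AIC using standard SciPy routines. The sketch below handles only two of the five candidate distributions (exponential and gamma); fitting the mixture models used in the paper would require an EM-style routine, and the function and variable names here are placeholders.

```python
import numpy as np
from scipy import stats

def fit_and_score(isi):
    """Fit candidate ISI distributions by maximum likelihood and rank them by AIC."""
    models = {
        "exponential": (stats.expon, 1),   # one free parameter: scale
        "gamma": (stats.gamma, 2),         # two free parameters: shape, scale
    }
    results = {}
    for name, (dist, n_params) in models.items():
        params = dist.fit(isi, floc=0)                 # MLE with location pinned at 0
        loglik = np.sum(dist.logpdf(isi, *params))
        results[name] = {"loglik": loglik, "AIC": 2 * n_params - 2 * loglik}
    return results
```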

Modeling Lotic Organism Population with Partial Differential Equations

Published electronically February 8, 2016
DOI: 10.1137/15S013892

Author: Chase Viss (Dordt College)
Sponsor: Tom Clark (Dordt College) and Eric Eager (University of Wisconsin – La Crosse)

Abstract: We present in this paper a mathematical model for a population of caddisfly larvae in the Upper Mississippi River, which can either live in the current of the water or fix themselves to large wood debris submerged throughout the river. The model consists of a system of partial differential equations which captures these coupled dynamics. After introducing the model, we give a qualitative analysis of the dynamics, including a steady state solution, followed by a numerical solution of the system using a finite difference scheme implemented in R. Finally, we extend the model to a competitive system with the goal of capturing the dynamics of the interaction between native caddisfly larvae and invasive zebra mussels. Our results demonstrate that although analyzing the exact behavior of lotic organism populations remains a difficult task, mathematical models such as the one presented in this paper can lead to further knowledge regarding which characteristics of lotic organisms have the greatest influence on population growth.

A Heuristic Method for Scheduling Band Concert Tours

Published electronically March 14, 2016
DOI: 10.1137/14S013718

Author: Linh Nghiem (University of Miami)
Sponsor: Tallys Yunes (University of Miami)

Abstract: Scheduling band concert tours is an important and challenging task faced by many band management companies and producers. A band has to perform in various cities over a period of time, and the specific route it follows is subject to numerous constraints, such as venue availability, travel limits, and required rest periods. A good tour must consider several objectives regarding the desirability of certain days of the week, as well as travel cost. We developed and implemented a heuristic algorithm in Java, based on simulated annealing, that automatically generates good tours which satisfy the above constraints and significantly improve the objectives compared to the best manual tour created by the client. Our program also enabled the client to see and explore trade-offs among objectives while choosing the best tour that meets the requirements of the business.
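The heart of such a heuristic is a standard simulated-annealing loop: propose a modified tour, always accept improvements, and accept worsening moves with a temperature-dependent probability. A generic sketch follows (in Python rather than the authors' Java, with `neighbor` and `cost` as placeholders for the tour-modification moves and the constraint/objective scoring).

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, T0=1.0, cooling=0.999, steps=100_000):
    """Generic simulated annealing: `neighbor` proposes a modified tour,
    `cost` penalizes constraint violations and scores the objectives."""
    current, best = initial, initial
    T = T0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = candidate
            if cost(current) < cost(best):
                best = current
        T *= cooling                          # geometric cooling schedule
    return best
```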

Mathematical modeling of the 2014/2015 Ebola epidemic in West Africa

Published electronically March 25, 2016
DOI: 10.1137/15S013806

Authors: Jeff Bartlett, James DeVinney, and Eric Pudlowski (Texas A&M University)
Sponsor: Wolfgang Bangerth (Texas A&M University)

Abstract: Accurately predicting the future number of infected patients and deaths during an epidemic is an important step towards efficiently allocating resources (doctors, nurses, hospital beds, public awareness campaigns, or foreign aid) in combating a disease that spreads through a community. In this paper, we develop mathematical models for the Ebola epidemic that started in West Africa in 2014 and that has infected more than 20,000 people so far. To this end, we create a discrete time, age structured model using assumptions based on publicly available data from Sierra Leone and West Africa. We show that, with reasonable assumptions, we provide an accurate fit to the reported number of infections and deaths due to Ebola in Sierra Leone. The close fit to past data provides hope that the model can also serve as a prediction for the future, and we verify this in an appendix with data that became available after we created our model.
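A minimal discrete-time, infection-age-structured update of the kind described might look like the sketch below, where cohorts are indexed by the number of days since infection; the per-day infectivity profile, infectious duration, and case fatality ratio are placeholder parameters rather than the values fitted to the Sierra Leone data.

```python
import numpy as np

def simulate_outbreak(T, N, infectivity, duration, cfr, seed_cases=1.0):
    """cohorts[a] = people infected a days ago; infectivity[a] is their per-day
    transmission rate; after `duration` days a fraction `cfr` dies, the rest recover."""
    cohorts = np.zeros(duration)
    cohorts[0] = seed_cases
    S = N - seed_cases
    cum_cases, cum_deaths = seed_cases, 0.0
    for _ in range(T):
        new_cases = min(S, (S / N) * float(np.dot(infectivity, cohorts)))
        cum_deaths += cfr * cohorts[-1]       # oldest cohort leaves the infectious pool
        cohorts = np.roll(cohorts, 1)
        cohorts[0] = new_cases
        S -= new_cases
        cum_cases += new_cases
    return cum_cases, cum_deaths
```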

A Numerical Study of Several Viscoelastic Fluid Models

Published electronically March 30, 2016
DOI: 10.1137/15S013879

Author: Corina Putinar (University of California, Davis)
Sponsor: Becca Thomases (University of California, Davis)

Abstract: Viscoelastic fluids are a type of fluid consisting of a solvent and immersed elastic filaments which create additional stresses in the fluid. The Oldroyd-B equations are a well accepted model of the flow of viscoelastic fluids, but in extensional flows, where fluid elements approach or separate from each other, the polymer stress grows without bound as the Weissenberg number (Wi), a dimensionless number that measures the relaxation time of the fluid, approaches infinity. For small Wi, the polymer stress remains bounded, but as Wi grows the polymer stress approaches a cusp shape until the solution eventually becomes unbounded. Modifications to the Oldroyd-B model have been proposed that keep the solutions bounded, such as polymer stress diffusion, the Giesekus model, and the Phan-Thien and Tanner model. Here we study how well these modifications approximate the Oldroyd-B model when the stress is very large. An ideal model for numerical simulations would be close to the Oldroyd-B model outside of a small region near the cusp or singularity but still be well resolved near the singularity. Analysis has been done to see how the proposed solutions differ with regard to stress, time, and other factors. In computing such results it is desirable to use minimal computing resources when resolving these near-singular solutions. Several different modifications to the Oldroyd-B system with stress diffusion are investigated using MATLAB and discussed to identify which modifications perform best in this flow geometry.

An extension of Standard Latent Dirichlet Allocation to Multiple Corpora

Published electronically April 14, 2016
DOI: 10.1137/15S014599

Authors: Adam Foster (Queens College, University of Cambridge), Hangjian Li (UCLA), Georg Maierhofer (Trinity College, University of Cambridge) and Megan Shearer (University of Arizona)
Sponsor: Russell Caflisch (UCLA)

Abstract:  Latent Dirichlet Allocation (LDA) is a highly successful topic modeling framework. We describe a new extension to LDA which supports multiple subcorpora, each containing a different type of document. As in LDA, this multiple-corpora LDA (mLDA) model assumes document topic proportions follow a symmetric Dirichlet distribution. However, in mLDA, the Dirichlet parameter is subcorpus dependent. An online algorithm for training mLDA models is derived. The algorithm is applied to data from the USC Shoah Foundation's Visual History Archive. Results show mLDA produced a better language model than standard LDA for this data. Using the same data, the mLDA topic model is used to construct an information retrieval system. Search results from this system outperform those obtained from traditional string-based search systems. A novel approach to the visualization of topics is outlined and visualizations are presented. As a novel development in natural language processing, mLDA will allow the power of topic modeling to be applied to a huge range of fields with diverse data by incorporating more information into a single topic model. It also enhances the applicability of topic modeling to information retrieval.

Monte Carlo Estimation of the Diameter-constrained Reliability of Networks using Parallelism

Published electronically April 20, 2016
DOI: 10.1137/15S014496

Authors: Martin Lapinski (College of Staten Island, CUNY), Helen Lin (College of Staten Island, CUNY), and Myles McHugh (Kean University)
Sponsor: Louis Petingi (College of Staten Island, CUNY)

Abstract: In this paper we study a heuristic approach, based on parallelism, to estimate the Diameter-Constrained Reliability (DCR) of a given undirected probabilistic graph G = (V,E), with two terminal nodes s and t, a given diameter bound D, and edges assigned independent probabilities of failure (nodes are assumed to be perfect). Since exact methods to evaluate the DCR are computationally expensive (i.e., NP-hard), we propose to implement a Monte Carlo (MC) method based upon MPI parallel processing to estimate the reliability. We conduct computational tests on several topologies while considering different factors, such as the number of cluster nodes utilized, the number of trials performed by the cluster nodes, and different ranges generated by random number functions, and we determine how these factors affect the estimation of the DCR.
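The crude Monte Carlo estimator is straightforward: in each trial, sample the state of every edge independently and test with a breadth-first search whether s reaches t within D hops; the DCR estimate is the fraction of successful trials. The serial sketch below (edges keyed by frozenset pairs, hypothetical argument names) illustrates one trial and the estimator; in the parallel setting the trials are split across MPI ranks and the success counts are summed.

```python
import random
from collections import deque

def trial(adj, fail_prob, s, t, D):
    """One Monte Carlo trial: sample edge failures, then BFS to test dist(s, t) <= D."""
    up = {e: random.random() >= p for e, p in fail_prob.items()}   # surviving edges
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        if dist[u] == D:                      # do not expand past the diameter bound
            continue
        for v in adj[u]:
            if up[frozenset((u, v))] and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return False

def estimate_dcr(adj, fail_prob, s, t, D, trials=10_000):
    return sum(trial(adj, fail_prob, s, t, D) for _ in range(trials)) / trials
```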

Modeling Product Shelf Life

Published electronically May 2, 2016
DOI: 10.1137/15S014150

Authors: Elizabeth Greco (Cornell University), Derek Hoare (Kenyon College), Lin Miao (Kenyon College), and Emily Smith (Kenyon College)
Sponsor: Elin Farnell (Kenyon College)

Abstract: Sproxil, Inc. is a company which produces PINs that manufacturers attach to products and which are then used by consumers to verify the authenticity of the product manufacturer. The goal of this work is to use existing PIN verification data provided by Sproxil to obtain a model for product shelf life: the length of time between PIN generation and product verification. We present several models that can be used to predict information about the shelf lives of various batches of products. We use maximum likelihood estimation to fit gamma distributions, which model the distributions of shelf lives. Cluster analysis is used to determine whether certain types of product batches have similar verification behavior. We find that the size of a product batch has an impact on how quickly verifications occur. Finally, regression analysis is used to find predictive relationships between variables related to shelf lives. We find that certain variables measuring how quickly the verification cycle begins are strong predictors of later stages in the verification cycle.

Similarity, Mass Conservation, and the Numerical Simulation of a Simplified Glacier Equation

Published electronically May 11, 2016
DOI: 10.1137/15S014198

Author: Nicole Sarahs (University of Reading)
Sponsor: Mike Baines (University of Reading)

Abstract: A one-dimensional nonlinear scale-invariant PDE problem with moving boundaries (a model glacier equation) is solved numerically using a moving-mesh scheme based on local conservation and compared with a self-similar scaling solution, showing numerical convergence under mesh refinement. It is also shown, analytically and numerically, that the waiting-time exhibited by this problem for general data ends when the local profile close to the boundary becomes self-similar.

Practical Algorithms for Learning Near-isometric Linear Embeddings

Published electronically May 20, 2016
DOI: 10.1137/15S014769

Authors: Jerry Luo (University of Arizona), Kayla Shapiro (Imperial College London), Hao-Jun Michael Shi (UCLA), Qi Yang (University of Southern California), Kan Zhu (Columbia University)
Sponsor: Wotao Yin (UCLA)

Abstract: We propose two practical non-convex approaches for learning near-isometric, linear embeddings of finite sets of data points. Given a set of training points X, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. The problem can be formulated as finding a symmetric and positive semi-definite matrix Ψ that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. Motivated by non-negative matrix factorization, we reformulate our problem as a Frobenius norm minimization problem, which we solve with the Alternating Direction Method of Multipliers (ADMM), and develop an algorithm, FroMax. Another method solves for a projection matrix Ψ by minimizing the restricted isometry property (RIP) directly over the set of symmetric, positive semi-definite matrices. Applying ADMM and a Moreau decomposition on a proximal mapping, we develop another algorithm, NILE-Pro, for dimensionality reduction. FroMax is shown to converge faster for smaller δ while NILE-Pro converges faster for larger δ. Both non-convex approaches are then empirically demonstrated to be more computationally efficient than prior convex approaches for a number of applications in machine learning and signal processing.
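For concreteness, the secant set and the distortion it induces can be evaluated directly; the sketch below (NumPy, with Ψ acting as a quadratic form on secants, as in the abstract) computes only the quantity that FroMax and NILE-Pro try to control, not the ADMM solvers themselves.

```python
import numpy as np
from itertools import combinations

def secant_set(X):
    """All pairwise differences of the rows of X, normalized to the unit sphere."""
    S = np.array([x - y for x, y in combinations(X, 2)])
    return S / np.linalg.norm(S, axis=1, keepdims=True)

def max_distortion(Psi, S):
    """Largest deviation of s^T Psi s from 1 over the secant set: the distortion delta."""
    quad = np.einsum('ij,jk,ik->i', S, Psi, S)
    return np.max(np.abs(quad - 1.0))
```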

Optimal Call Center Staffing via Simulation

Published electronically May 23, 2016
DOI: 10.1137/15S014186

Authors: Samuel Justice (University of Iowa), Aidan Lee (The Ohio State University), Alexander Weiss-Christoff (Washington State University), and Arthur Conover (Kenyon College)
Sponsor: Elin Farnell (Kenyon College)

Abstract: We discuss the methodology and results of a semester-long project in mathematical modeling, in which we seek to create an optimal staffing plan for a call center for Nationwide Mutual Insurance Company. The company provided data to use for this purpose along with the mandate that an optimal staffing plan should staff a sufficient number of workers to answer 80% of calls within 30 seconds. We use statistical models based on the data provided as input to Monte Carlo simulations in order to propose an appropriate number of call service representatives needed for a given call type, day, and time interval. Using these results, we explore potential approaches for devising staffing plans for the call center.
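The 80%-within-30-seconds criterion can be checked inside each Monte Carlo replication with a simple multi-server FIFO simulation. The sketch below is illustrative only: the Poisson arrivals and exponential handle times stand in for the statistical models fitted to the company's data, and the staffing search would wrap this in a loop over the number of agents.

```python
import heapq
import numpy as np

def service_level(arrivals, services, n_agents, threshold=30.0):
    """Fraction of calls answered within `threshold` seconds by `n_agents` FIFO servers."""
    free_at = [0.0] * n_agents                # times at which each agent becomes free
    heapq.heapify(free_at)
    on_time = 0
    for t, s in zip(arrivals, services):      # arrivals assumed sorted
        start = max(t, heapq.heappop(free_at))
        on_time += (start - t) <= threshold
        heapq.heappush(free_at, start + s)
    return on_time / len(arrivals)

rng = np.random.default_rng(0)
arrivals = np.sort(rng.uniform(0, 3600, size=400))    # one simulated hour of calls
services = rng.exponential(180, size=400)             # placeholder handle times (seconds)
print(service_level(arrivals, services, n_agents=12))
```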

Compartmentalizing an SIR Model of n Susceptibility Classes

Published electronically May 25, 2016
DOI: 10.1137/15S014083

Authors: Taliha Nadeem and Rachel Nicinski (Benedictine University)
Sponsor: Anthony DeLegge (Benedictine University)

Abstract: The purpose of this research is to provide a solution to regulate environmental diseases when the spread of the disease is partly based on how susceptible different groups of individuals are. This study will monitor the effects a disease has on a population with groups of different susceptibility through the administration of vaccines: a point vaccination scheme (PVS) and an interval vaccination scheme (IVS). A PVS entails the instantaneous vaccination of individuals by selectively distributing vaccinations at one point in time, while an IVS is a more realistic vaccination method administered over an interval of time. Both schemes are exclusively administered to individuals classified as having the highest susceptibility to disease. The disease itself will be explored through the use of an SIR model of n susceptibility classes, a model which studies a disease in relation to susceptible, infected, and removed individuals.

To further study the disease, the same model will be modified through the incorporation of differential delay equations, denoting a regaining of susceptibility. These incorporations reflect reality: in several diseases, it has been observed that vaccinated individuals or individuals with immunity to the original strain can regain susceptibility to a mutated strain, such as the common flu, whose strains undergo constant mutation. This modification of the model allows recovering individuals to contract a viral variant.

A Sample Size Calculator for SMART Pilot Studies

Published electronically June 27, 2016
DOI: 10.1137/15S014058

Author: Hwanwoo Kim (University of Michigan)
Sponsors: Daniel Almirall (University of Michigan), Edward Ionides (University of Michigan)

Abstract: In clinical practice, as well as in other areas where interventions are provided, a sequential individualized approach to treatment is often necessary, whereby each treatment is adapted based on the individual's response. An adaptive intervention is a sequence of decision rules which formalizes the provision of treatment at critical decision points in the care of an individual. In order to inform the development of an adaptive intervention, scientists are increasingly interested in the use of sequential multiple assignment randomized trials (SMART), a type of multistage randomized trial in which individuals are randomized repeatedly at critical decision points to a set of treatment options. While there is great interest in the use of SMART and in the development of adaptive interventions, both are relatively new to the medical and behavioral sciences. As a result, many clinical researchers will first implement a SMART pilot study (i.e., a small-scale version of a SMART) to examine feasibility and acceptability considerations prior to conducting a full-scale SMART study. A primary aim of this paper is to introduce a new methodology to calculate the minimal sample size necessary for conducting a SMART pilot.

Mathematical Analysis of Ivermectin as a Malaria Control Method

Published electronically July 11, 2016
DOI: 10.1137/15S014447

Authors: Robert Doughty and Eli Thompson (Miami University)
Sponsor: Anna Ghazaryan (Miami University)

Abstract: Malaria epidemics are detrimental to the health of many people and the economies of many countries. Methods of malaria control exist, but the fight against the disease is far from over. The history of mathematical modeling of malaria spread is more than a hundred years old. Recently, a model was proposed in the literature that captures the dynamics of malaria transmission by taking into account the behavior and life cycle of the mosquito and its interaction with the human population. We modify this model by including the effect of an anti-parasitic medication, ivermectin, on several threshold parameters which can determine the spread of malaria. The modified model takes the form of a system of nonlinear ordinary differential equations. We investigate this model using applied dynamical systems techniques. We show that there exist parameter regimes in which careful use of ivermectin can curtail the spread of malaria without harming the mosquito population. Otherwise, ivermectin either eradicates the mosquito population or has little to no effect on the spread of malaria. We suggest that ivermectin can be very effective when used as a malaria control method in conjunction with other methods such as reduction of breeding sites.

A Physiologically-Based Pharmacokinetic Model for Vancomycin

Published electronically July 22, 2016
DOI: 10.1137/15S014642

Author: Rebekah White (East Tennessee State University)
Sponsor: Michele Joyner (East Tennessee State University)

Abstract: Vancomycin is an antibiotic used for the treatment of systemic infections. It is given intravenously usually every twelve or twenty-four hours. This particular drug has a medium level of boundedness, with approximately fifty to sixty percent of the drug being free and thus physiologically effective. A physiologically-based pharmacokinetic (PBPK) model was used to better understand the absorption, distribution, and elimination of the drug. Using optimal parameters, the model could be used in the future to test how various factors, such as body mass index (BMI) or excretion levels, might affect the concentration of the antibiotic.

Modeling the Spread of Ebola with SEIR and Optimal Control

Published electronically August 2, 2016
DOI: 10.1137/16S015061

Author: Harout Boujakjian (George Mason University)
Sponsor: Timothy Sauer (George Mason University)

Abstract: Ebola is a virus that causes a highly virulent infectious disease that has plagued Western Africa, impacting Liberia, Sierra Leone, and Guinea heavily in 2014. Understanding the spread of this disease is vital to its containment and eventual elimination. We use an SEIR model to simulate the transmission of the disease. The model is validated with data from the World Health Organization. Optimal control theory is used to explore the effect of vaccination and quarantine rates on the SEIR model. The goal is to explore the use of these control strategies to effectively contain the Ebola virus.
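A minimal SEIR system with vaccination and quarantine rates entering as control-like parameters can be integrated with SciPy as below; the parameter values are placeholders, not the rates fitted to the WHO data or produced by the optimal control computation.

```python
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, sigma, gamma, v, q):
    """SEIR right-hand side with vaccination rate v and quarantine rate q."""
    S, E, I, R = y
    N = S + E + I + R
    new_inf = beta * S * I / N
    return [-new_inf - v * S,                 # susceptible
            new_inf - sigma * E,              # exposed
            sigma * E - (gamma + q) * I,      # infectious
            gamma * I + q * I + v * S]        # removed (recovered, quarantined, vaccinated)

sol = solve_ivp(seir_rhs, (0, 300), [0.999, 0.0, 0.001, 0.0],
                args=(0.3, 1 / 10, 1 / 8, 0.0, 0.0), max_step=1.0)
```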

Car(e) to Share? A Mathematical Analysis of the Car-Sharing Industry

Published electronically August 5, 2016
DOI: 10.1137/16S015322
M3 Challenge Introduction

Authors: Anirudh Suresh, Eric Gao, Margaret Trautner, Daniel Shebib, and Nancy Cheng (St. John’s School, Houston, TX)
Sponsor: Dwight Raulston (St. John’s School, Houston, TX)

Summary: We live in an era of unprecedented mobility—vehicles are much more affordable than they were at their inception in the early 20th century, and public transport provides an easy and economical means of travel for those without a personal vehicle. The latest trend in the transportation industry is that of car-sharing. Realizing that purchasing and owning a personal vehicle can be unnecessarily expensive, individuals are starting to turn to cheaper and more distributed means of paying for private vehicle transport.

In order to help illuminate various aspects of the car-sharing process, our team developed mathematical models that address some of the main factors influencing car-sharing companies' decisions. First, we developed a model that determines the proportion of drivers that fit into categories (low, medium, and high) for both hours driven per day and miles driven per day. We realized that much of the information regarding these two factors depended greatly on the amount of traffic in an area or city, which subsequently depended on the population density of that region. Hence, we created a function that gives the expected number of miles driven in a day based on the population density of the city or region and the number of hours driven in a day. We then placed a normal distribution around this expected average value and integrated a weighted cumulative distribution function of that distribution over time to get a table of proportions of drivers in each category. Next, we tested our model in two regions, New York City and Englewood Cliffs, a small suburban locale. Our model produced logical results in that it predicted a majority of cars moving shorter distances in New York City and a majority of cars moving longer distances in the less densely populated town of Englewood Cliffs.

We were also asked to create a model to rank four potential business plans for car-sharing companies in four different cities. We found an equation to model a “price” for the consumer that included both financial cost and opportunity cost, which represents a combination of time spent and the value of that time. We graphed the cost versus user salary for each of 4 different consumer scenarios to determine which potential business plan would be most beneficial given a user’s salary and scenario. This user-benefit model incorporates the population density of a region to give the quantity of users for a car-sharing business in that region. We then calculated the company’s revenue and cost per user for each business model and combined these calculations with the number of users in a region to get the expected profit. We applied this to the four cities and ranked them. This analysis would be highly beneficial to any car-sharing company wishing to expand to a new urban location.

Finally, we were asked to consider the effects of alternative energy vehicles and self-driving vehicles on the car-sharing market. We altered our model from Part II to adjust for the changes in usage, cost, and revenue to show the effects of these future changes. Any company wishing to develop a car-sharing business should consider these insights and future changes in order to keep their service relevant in the fast-paced world of automobile technology.

Topological Complexity for Driverless Vehicles

Published electronically August 11, 2016
DOI: 10.1137/15S014484

Authors: Ricky Salgado and Emiliano Velazquez (Wilbur Wright College)  
Sponsor: Hellen Colman (Wilbur Wright College)

Abstract: The topological complexity is a numerical invariant which measures the number of commands an autonomous robot needs in order to move in a space to perform a task. To explain these ideas, we walk through the various algebraic definitions and provide physical examples along the way. We calculate the topological complexity for various scenarios, starting with a single robot moving in a simple space and building up to a final case with more robots and more complicated spaces in which they move. Finally, we provide an example in which two driverless vehicles move on a track joining seven colleges throughout Chicago. We determine the number of instructions, as well as their content, for this case study.

Accuracy of data-based sensitivity indices

Published electronically August 25, 2016
DOI: 10.1137/15S014757

Authors: Thomas Bassine (University of Connecticut), Bryan Cooley (East Tennessee State University), Kenneth Jutz (North Carolina State University), Lisa Mitchell (Brigham Young University)
Sponsor: Pierre Gremaud (North Carolina State University)

Abstract: When analyzing high-dimensional input/output systems, it is common to perform sensitivity analysis to identify important variables and reduce the complexity and computational cost of the problem. In order to perform sensitivity analysis on fixed data sets, i.e., without the possibility of further sampling, we fit a surrogate model to the data. This paper explores the effects of model error on sensitivity analysis, using Sobol' indices (SI), a measure of the variance contributed by particular variables (first order indices) and by interactions between multiple variables (total indices), as the primary measure of variable importance. We also examine partial derivative measures of sensitivity. All analysis is based on data generated by various test functions for which the true SI are known. We fit two non-parametric models, Multivariate Adaptive Regression Splines (MARS) and Random Forest, to the test data, and the SI are approximated using R routines. An analytic solution for the SI based on the MARS basis functions is derived and compared to the actual and approximated SI. Further, we apply MARS and Random Forest to data sets of increasing size to explore the convergence of the error as the available data increase. Due to efficiency constraints in the surrogate models, a constant relative error is quickly reached and maintained despite the increasing size of the data. We find that variable importance and SI are well approximated, even in cases where there is significant error in the surrogate model.
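One way to reproduce the last step of this pipeline is to treat the fitted surrogate as the function of interest and estimate first-order Sobol' indices with a pick-freeze Monte Carlo estimator. The sketch below (scikit-learn random forest surrogate, inputs assumed rescaled to the unit cube) is a stand-in for the R routines used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def first_order_sobol(surrogate, d, n=20_000, seed=0):
    """Pick-freeze estimate of S_i = Var(E[Y|X_i]) / Var(Y) for a fitted surrogate."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = surrogate.predict(A), surrogate.predict(B)
    var_Y = np.var(fA)
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]                     # freeze coordinate i at A's values
        fC = surrogate.predict(C)
        S[i] = np.mean(fA * (fC - fB)) / var_Y    # Saltelli-style estimator
    return S

# e.g. surrogate = RandomForestRegressor().fit(X_train, y_train)  (hypothetical data)
```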

A connection between grad-div stabilized FE solutions and pointwise divergence-free FE solutions on general Meshes

Published electronically September 15, 2016
DOI: 10.1137/16S015176

Author: Sarah Malick (Clemson University)
Sponsor: Leo Rebholz (Clemson University)

Abstract: We prove, for Stokes, Oseen, and Boussinesq finite element discretizations on general meshes, that grad-div stabilized Taylor-Hood velocity solutions converge to the pointwise divergence-free solution (found with the iterated penalty method) at a rate of γ^{-1}, where γ is the grad-div parameter. However, pressure is only guaranteed to converge when (X_h, ∇X_h) satisfies the LBB condition, where X_h is the finite element velocity space. For the Boussinesq equations, the temperature solution also converges at the rate γ^{-1}. We provide several numerical tests that verify our theory. This extends work in [6], which requires special macroelement structure in the mesh.

Modeling Interactions Between Various Cell Populations in a Cancerous System

Published electronically September 27, 2016
DOI: 10.1137/15S014617

Authors: Jamilia Johnson, Cheyenne Peters, Asia Youngblood (Michigan State University), and Aaron Crump (Wayne State University)
Sponsor: Tsvetanka Sendova (Michigan State University)

Abstract: We create two models based on systems of ordinary differential equations (ODEs) to study how normal, benign, metastatic, and immune cell populations evolve in a patient with cancer. The first, one-patch, model is used to simulate the cell populations in a single fixed area. Using stability analysis for this model, we determine a healthy equilibrium point with no tumor cells and derive necessary and sufficient conditions for stability. This model is also used to show the effects of immunotherapy on a cancerous system. To capture the effects of metastatic cancer, a two-patch model is introduced. It looks at the cell populations in two different areas of the body. A healthy equilibrium is also found for this model and sufficient conditions for stability are provided.

Applications of Cumulative Histograms in Diagnosing Breast Cancer

Published electronically October 6, 2016
DOI: 10.1137/16S015139

Author: Anna Grim (University of St. Thomas, St. Paul, MN) 
Sponsor: Chehrzad Shakiban (University of St. Thomas, St. Paul, MN)

Abstract: We present several algorithms that use invariant cumulative histograms to non-invasively diagnose tumors detected on a mammogram. First, we define three specialized cumulative histograms called the cumulative centroid, kappa, and kappa-s histogram. Then we compute metrics over each cumulative histogram to quantitatively distinguish benign versus malignant tumors. Our methodology has been tested on a dataset of 150 tumors and we include an ROC analysis of our results.

Redistricting Youngstown Police Beats

Published electronically October 27, 2016
DOI: 10.1137/15S013880

Authors: Sebastian Haigler, Ashley Orr, Eric Shehadi, Jenna Wise, and Kristi Yazvac (Youngstown State University)
Sponsor: Thomas Wakefield (Youngstown State University)

Abstract: The geographical area which a given police officer patrols is known as a beat. Unfortunately, the beats in Youngstown, Ohio, had been the same for more than a decade, despite changes in calls for service and trends in crime. We analyze historical calls-for-service data to confirm the disparity in workload across these beats; this analysis shows a highly uneven workload across the current beats. To remedy the inequity, we propose a Cellular Growth Model based on a cellular automaton to generate new beats.

This model considers both the call volume and the call priority of the historical data through the use of a metric. We generate and present two sets of results, one to check the robustness of our model and another to present to the Youngstown Police Department. First, to train and test our model, we generate new beat alternatives using 80% of our data and then use the remainder of our data to determine which alternative has the lowest mean squared error between the in-sample data and the out-of-sample data. Bootstrapping our data to generate confidence intervals, we then compare this alternative to the original beats.

In application, however, we generate beat alternatives using all available data from the Youngstown Police Department. Our analysis concludes that the new beat alternative has lower standard deviation and lower coefficient of variation than the original beats. As a result, the Youngstown Police Department implemented one of our proposed alternatives in January 2016.

On Practical Approximate Projection Schemes in Signal Space Methods

Published electronically December 2, 2016
DOI: 10.1137/16S015152

Authors: Xiaoyi Gu and Shenyinying Tu (UCLA)
Sponsor: Deanna Needell (Claremont McKenna College)

Abstract: Compressive sensing (CS) is a new technology which allows the acquisition of signals directly in compressed form, using far fewer measurements than traditional theory dictates. Recently, many so-called signal space methods have been developed to extend this body of work to signals sparse in arbitrary dictionaries rather than orthonormal bases. In doing so, CS can be utilized in a much broader array of practical settings. Often, such approaches rely on the ability to optimally project a signal onto a small number of dictionary atoms. Such optimal, or even approximate, projections have been difficult to derive theoretically. Nonetheless, it has been observed experimentally that conventional CS approaches can be used for such projections, and still provide accurate signal recovery. Here, we present the mathematical formulation of the signal space recovery problem, summarize the empirical evidence, and derive theoretical guarantees for such methods for certain sparse signal structures that match those found in other similar contexts. Our theoretical results also match those observed in experimental studies, and we thus establish both experimentally and theoretically that these CS methods can be used in this setting.
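A typical "conventional CS approach" used as an approximate projection is orthogonal matching pursuit: greedily pick k dictionary atoms and refit by least squares on the selected support. A minimal sketch (assuming unit-norm dictionary columns; not the specific routine analyzed in the paper) is:

```python
import numpy as np

def omp_projection(D, x, k):
    """Approximate projection of x onto k atoms (columns) of the dictionary D."""
    residual, support = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))     # atom most correlated with residual
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return D[:, support] @ coeffs, support
```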

A Patch Model for the Transmission Dynamics of Zika Virus from Rio de Janeiro to Miami during Carnival and the Olympics

Published electronically December 8, 2016
DOI: 10.1137/16S015425

Author: Caitlin Oleson (University of Nevada, Reno) 
Sponsor: Marc Artzrouni (University of Pau, Pau, France)

Abstract: Since late 2015 there has been an outbreak of Zika virus disease in South and Central America. This is particularly alarming because of its connection to microcephaly in infants born to mothers who were infected while pregnant. Two mass gatherings took place in Brazil in 2016: Carnival (February 6-10, 2016) and the Olympics (August 5-21, 2016). These events brought large groups of foreigners to Brazil who could have been exposed to the virus and transported it back to their home countries. We created a mathematical model to analyze whether a group of visitors to Rio de Janeiro for either of these events would cause an outbreak in Miami, Florida, when they returned home. Our model shows that if conditions are assumed to be the same for the populations in Miami and Rio de Janeiro, the visitors to Carnival could cause an outbreak in Miami in October that infects roughly 75% of the population within three months. If, however, the parameters of the model are modified to reflect the different lifestyles and mosquito populations in Miami, the size of the outbreak there can be reduced. The model for Rio de Janeiro suggests that by August the majority of the population will already have been infected, and hence immune, and there will be a low number of mosquitoes. Therefore, our model predicts that, due to reduced infection rates during the Olympics, the chance of visitors bringing the disease back to Miami is very low.

Rumors with Personality: A Differential and Agent-Based Model of Information Spread through Networks

Published electronically December 9, 2016
DOI: 10.1137/16S015103

Authors: Devavrat V. Dabke and Eva E. Arroyo (Duke University)
Sponsor: Anita T. Layton (Duke University)

Abstract: We constructed the "ISTK" model to approximate the spread of viral information (a rumor) through a given (social) network. Initially, we used a set of ordinary differential equations to assess the spread of a rumor in face-to-face interactions in a homogeneous population. Our second model translated this system into an equivalent stochastic agent-based model. We then incorporated a network based on a representative Facebook dataset. Our results showed that incorporating the structure of a network alters the behavior of the rumor as it spreads across the population, while preserving steady states. Our third model considered features: demographic information that characterized individuals in our representative population. We also generated a feature vector for the rumor in order to simulate its "personality." An increase in the average similarity of the rumor to the population resulted in increased propagation through the network. However, the addition of feature vectors prevents the rumor from saturating the network. Our agent-based, feature-equipped ISTK model provides a more realistic mechanism to account for social behaviors, thus allowing for a more precise model of the dynamics of rumor spread through networks.