CAIMS’2012  (June 24-28, 2012)

Fields Institute, Toronto, Canada

 

Computational and Discrete Mathematics Session

For the first time at the CAIMS annual meeting, the two distinct communities of computational PDE and discrete mathematics will be connected in this theme. Discretizations of PDEs lead to (large) discrete dynamical systems, and a popular strategy for studying discrete dynamical systems is to go in the opposite direction (i.e., limits of large numbers, continuum limits). As a consequence, we hope that linking the two communities will lead to a cross-fertilization of ideas. The session will focus on scientific computing and numerical modeling of complex systems such as percolation, polymers, Monte Carlo simulations of protein folding, and numerical simulation of biological and biomedical systems, including knotting and linking in biopolymers.

Plenary Speaker

Ted Kim (Santa Barbara)

Subspace Simulation of Non-linear Materials

Everyday materials such as biological tissue exhibit geometric and constitutive non-linearities which are crucial in applications such as surgical simulation and realistic human animation. However, these same non-linearities also make efficient finite element simulation difficult, as computing non-linear forces and their Jacobians can be computationally intensive. Reduced order methods, which limit simulations to a low-dimensional subspace, have the potential to accelerate these simulations by orders of magnitude. In this talk, I will describe a subspace method we have developed that efficiently computes all the necessary non-linear quantities by evaluating them at a discrete set of "cubature" points, an online integration method that can accelerate simulations even when the subspace is not known a priori, and a domain decomposition method that efficiently adds deformations to discretely partitioned, articulated simulations. Using our methods, we have observed speedups of one to four orders of magnitude.
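
As a rough illustration of the cubature idea described above, the Python sketch below approximates the reduced internal force U^T f(Uq) by a weighted sum of element forces evaluated at a handful of cubature elements. Everything here (the toy element force, the random basis, the uniform weights) is a placeholder chosen for illustration, not the speaker's implementation.

import numpy as np

# Hedged sketch of cubature-accelerated reduced forces (illustrative only).
# elem_bases[i] is the block of the subspace basis U restricted to cubature
# element i, elem_force(i, u_i) returns that element's nonlinear force given
# its local displacements, and weights[i] >= 0 would normally be fitted
# offline to training snapshots.

def reduced_force(q, elem_bases, elem_force, weights):
    """Approximate g(q) = U^T f(U q) with a small set of cubature elements."""
    g = np.zeros(len(q))
    for i, (Ui, wi) in enumerate(zip(elem_bases, weights)):
        ui = Ui @ q                             # local displacements of element i
        g += wi * (Ui.T @ elem_force(i, ui))    # weighted reduced contribution
    return g

# Toy usage with a hypothetical cubic-spring "element" force.
n, r, ne = 30, 4, 3
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((n, r)))[0]             # orthonormal basis
elems = [rng.choice(n, size=4, replace=False) for _ in range(ne)]
elem_bases = [U[e, :] for e in elems]
elem_force = lambda i, ui: -(ui + ui**3)                      # placeholder force
print(reduced_force(0.1 * np.ones(r), elem_bases, elem_force, np.ones(ne)))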

 

Bio: Theodore Kim joined the faculty of the Media Arts and Technology Program at the University of California, Santa Barbara, in the Fall of 2011 as an Assistant Professor. He conducts research in physically-based simulation for computer graphics, particularly in fluid dynamics, solid mechanics, and multi-core techniques. Techniques from his research have been used in over a dozen movies. Previously, he was an Assistant Professor in Computer Science at the University of Saskatchewan, a Post-Doctoral Associate at Cornell University, and a Post-Doctoral Fellow at the IBM TJ Watson Research Center. He received his PhD in Computer Science from the University of North Carolina at Chapel Hill in 2006.


Speakers

Peter Grassberger (University of Calgary)
Thomas Prellberg (Queen Mary University of London)
Erkan Tuzel (Worcester Polytechnic Institute)
Vince Lyzinski (Johns Hopkins)
Enzo Orlandini (Padova)
Ray Kapral (University of Toronto)
Ivona Bezakova (Rochester Institute of Technology)
Daniel Stefankovic (Rochester)
Dhavide Aruliah (Ontario Institute of Technology)
Robert Bridson (British Columbia)
Matthew Emmett (University of North Carolina)
Chen Greif (British Columbia)
George Patrick (Saskatchewan)
Peter Minev (Alberta)
Felicia Magpantay (McGill)
Colin Macdonald (Oxford)

 

General Information

Thank you all for agreeing to speak in this session; we are looking forward to meeting each of you and to hearing your talks.

Our session accommodates a very diverse and somewhat eclectic group whose research interests intersect in Computational Mathematics.  The proposal was to draw researchers working in both continuous and discrete areas, in the hope that some computational approaches (e.g., data structures or algorithms) used in one area may in fact be useful in others.

With this in mind, we ask that you prepare your talks for a broad audience and give some emphasis to computational methods or algorithms, in addition to the problems or models you are studying. Your models or methods may well be useful in a broader sense, crossing over from one area to another.

CAIMS’2012 will take place at the Fields Institute in Toronto, 24-28 June 2012, and the (incomplete) website of the conference is at

http://www.fields.utoronto.ca/programs/scientific/11-12/CAIMS_SCMAI/computational-and-discrete-math.html

More information will be posted on this site closer to the date of the conference.

CAIMS’2012 is hosted by the Fields Institute in Toronto.  The website of the Institute is

 http://www.fields.utoronto.ca/

Information on housing and other resources for short-term visitors to the Fields Institute, as well as for events hosted at the Institute, can be found on the Fields Institute website.  You may be particularly interested in the housing information:

http://www.fields.utoronto.ca/programs/scientific/11-12/CAIMS_SCMAI/housing.html

More information and resources can be found on the Fields Institute resources page:

http://www.fields.utoronto.ca/resources/members.html

If you are travelling to Canada, you may find the following travel information relevant:

http://www.fields.utoronto.ca/programs/scientific/11-12/CAIMS_SCMAI/travel/

 

 

Please do not hesitate to ask for more information.

Regards

EJ Janse van Rensburg and Ray Spiteri

 

 



Abstracts


"Who said that he understands percolation?"
Peter Grassberger
(University of Calgary)

Although percolation theory was considered a mature subject several years ago, recent progress has changed this radically. While "classical" or "ordinary" percolation (OP) is a second order phase transition between long range connectivity and disconnectedness on diluted regular lattices or random graphs, examples have now been found where this transition can range from infinite order to first order. The latter is of particular importance in the social sciences, where first order percolation transitions show up as a consequence of synergistic effects, and I will point out analogies with the relationship between percolation and rough interfaces in physics.  Another case where first order percolation transitions show up is interdependent networks, although first claims about this have to be substantially modified -- in some cases of interdependent networks the transition is second order but in a new universality class.  A similar but even more unexpected result holds for so-called "Achlioptas processes", which were originally claimed to show first order transitions, but which actually show second order transitions with a completely new type of finite size scaling. Finally, I will present "agglomerative percolation" (AP), a model originally introduced to understand the claim that network renormalization can demonstrate the fractality of some small world networks. Due to a spontaneously broken symmetry on bipartite graphs, AP leads, e.g., to different scaling behaviors on square and triangular 2-d lattices, in flagrant violation of universality.
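
For readers less familiar with the "ordinary" percolation baseline the abstract starts from, the following Python sketch generates site percolation on a square lattice and uses union-find to test for a top-to-bottom spanning cluster. It is a generic textbook illustration, not the speaker's models; the lattice size and occupation probability are arbitrary choices.

import numpy as np

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def spans(L=64, p=0.5927):
    """Occupy sites with probability p and test for a spanning cluster."""
    occ = np.random.default_rng().random((L, L)) < p
    parent = list(range(L * L + 2))     # two extra nodes: virtual top and bottom
    top, bottom = L * L, L * L + 1
    idx = lambda i, j: i * L + j
    for i in range(L):
        for j in range(L):
            if not occ[i, j]:
                continue
            if i == 0:
                union(parent, top, idx(i, j))
            if i == L - 1:
                union(parent, bottom, idx(i, j))
            if i > 0 and occ[i - 1, j]:
                union(parent, idx(i, j), idx(i - 1, j))
            if j > 0 and occ[i, j - 1]:
                union(parent, idx(i, j), idx(i, j - 1))
    return find(parent, top) == find(parent, bottom)

print(spans())   # near p ~ 0.5927 this is True roughly half the time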


Rare event sampling with stochastic growth algorithms
Thomas Prellberg (Queen Mary University of London)

We discuss uniform sampling algorithms that are based on stochastic growth methods, using the sampling of extreme configurations of polymers in simple lattice models as a motivation. We shall show how a series of clever enhancements to a fifty-odd-year-old algorithm, the Rosenbluth method, leads to a suite of cutting-edge algorithms capable of uniform sampling of equilibrium statistical mechanical systems of polymers in situations where competing algorithms fail to perform well. Examples range from collapsed homopolymers near sticky surfaces to models of protein folding.
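
As background for the talk, here is a minimal Python sketch of the classical Rosenbluth method itself (the unenhanced, fifty-odd-year-old starting point, not the modern flat-histogram/PERM-style variants the abstract refers to): walks are grown step by step, and the product of the number of free neighbours at each step gives a weight whose average estimates the number of self-avoiding walks.

import random

rng = random.Random(1)

def rosenbluth_walk(n_steps):
    """Grow one self-avoiding walk; return (walk, Rosenbluth weight)."""
    walk = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n_steps):
        x, y = walk[-1]
        free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in occupied]
        if not free:
            return walk, 0.0            # walk trapped: zero weight (attrition)
        weight *= len(free)             # "atmosphere" factor at this step
        step = rng.choice(free)
        walk.append(step)
        occupied.add(step)
    return walk, weight

# The mean weight over many grown walks estimates c_n, the number of
# n-step self-avoiding walks on the square lattice.
samples = [rosenbluth_walk(10)[1] for _ in range(20000)]
print(sum(samples) / len(samples))      # compare with the exact c_10 = 44100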

 

Constrained Polymer Dynamics in a Mesoscale Solvent
Erkan Tuzel (Physics, Worcester Polytechnic Institute)

When modeling polymer solutions, the presence of multiple time scales, such as those set by the intermolecular bond potentials, makes quantitative analysis of results difficult and simulations costly. Here, we show how these degrees of freedom can be replaced by rigid bond constraints, as is commonly done in Brownian dynamics, for polymers embedded in a recently introduced hydrodynamic solvent known as Stochastic Rotation Dynamics (SRD), or Multi-Particle Collision Dynamics (MPCD). We then discuss applications of this approach to various systems of biological interest.
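
For orientation, the sketch below shows one stripped-down 2D SRD/MPCD step: ballistic streaming followed by a cell-wise rotation of velocities about the cell centre-of-mass velocity. Grid shifting, thermostats, walls and the polymer coupling with bond constraints discussed in the abstract are all omitted, and the parameters are arbitrary.

import numpy as np

def srd_step(pos, vel, box=10.0, cell=1.0, dt=0.1, alpha=np.pi / 2,
             rng=np.random.default_rng(0)):
    """One streaming + collision step of a bare-bones 2D SRD fluid."""
    pos = (pos + vel * dt) % box                     # streaming with periodic BCs
    nc = int(box / cell)
    cid = (pos // cell).astype(int)
    cell_index = cid[:, 0] * nc + cid[:, 1]
    for c in np.unique(cell_index):
        members = np.where(cell_index == c)[0]
        v_cm = vel[members].mean(axis=0)             # cell centre-of-mass velocity
        s = rng.choice([-1.0, 1.0])                  # random rotation sense
        ca, sa = np.cos(alpha), np.sin(alpha) * s
        R = np.array([[ca, -sa], [sa, ca]])
        vel[members] = v_cm + (vel[members] - v_cm) @ R.T   # momentum-conserving
    return pos, vel

pos = np.random.default_rng(1).uniform(0.0, 10.0, (1000, 2))
vel = np.random.default_rng(2).normal(0.0, 1.0, (1000, 2))
pos, vel = srd_step(pos, vel)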


Strong Stationary Duality for Diffusions
Vince Lyzinski (Johns Hopkins)

Strong stationary duality has had a wide-ranging impact on Markov chain theory since its conception by Diaconis and Fill in 1990. Its diverse applications range from perfect sampling extensions of Markov Chain Monte Carlo to the establishment of cut-off phenomena for wide classes of Markov chains. We extend the idea of strong stationary duality to one-dimensional diffusion processes and in doing so recover some classical Markov chain results in the diffusion setting. This is joint work with my PhD advisor James Allen Fill.


Dynamics of knotted polymers
Enzo Orlandini (Padova)

Knots are frequent in long polymer rings at equilibrium, and it is now well established that their presence can affect to some extent the static properties of the chain.  On the other hand, the presence of topological constraints (knots) in circular and linear polymers may also influence their dynamical properties.  This has indeed been shown in recent experiments where the motion of a single knotted DNA has been followed within a viscous solution and in the presence of a stretching force. These experiments raise interesting challenges for the theoretical comprehension of the problem, an issue which is still in its infancy.  As a first step towards understanding the mechanism underlying the mobility of a knot along the ring backbone and its effect on the overall polymer dynamics, we investigate, by Monte Carlo and molecular dynamics simulations, the dynamics of knotted rings under good solvent conditions. By using an algorithm that detects the position and size of the knot, we are able to monitor the motion of the topologically entangled subregion both in space and along the ring backbone.

This allows us to identify in knotted rings a novel, slow topological timescale, whose origin can be related to a self-reptation motion of the knotted region.

For open chains, knotted configurations no longer represent an equilibrium state. However, under suitable conditions (for example, very tight knots or quite rigid chains) knotted metastable states persist for a very long time, and a statistical description of their dynamical properties is then possible. By performing off-lattice molecular dynamics simulations of a semiflexible polymer, we estimate the average lifetime and the survival probability as a function of the initial conditions (size of the initial knot) and knot type.  This analysis has been extended to the case in which the linear polymer is subject to an external stretching force.


Mesoscopic Dynamics of Biopolymers and Protein Machines
Ray Kapral (Toronto)

The dynamics of polymers and biopolymers in solution and in crowded molecular environments which mimic some features of the interior of a biochemical cell will be discussed. In particular, the dynamics of protein machines that utilize chemical energy to effect cyclic conformational changes will be described. The investigation of the dynamics of such complex systems requires knowledge of the time evolution on physically relevant long distance and time scales. This often necessitates a coarse grained or mesoscopic treatment of the dynamics. A particle-based mesoscopic dynamical method, multiparticle collision dynamics, which conserves mass, momentum and energy, has been constructed and utilized to study the dynamics of these systems. The dynamics can be described by a Markov chain model in the full phase space of the system and, using projection operator or other methods, can be shown to lead to the continuum Navier-Stokes or reaction-diffusion descriptions on long distance and time scales.

 

Counting and sampling minimum cuts in weighted planar graphs
Ivona Bezakova (RIT)

We will discuss two minimum cut problems in weighted planar graphs: minimum source-sink cuts and contiguous minimum single-source-multi-sink cuts.

A source-sink cut is a set of vertices containing the source vertex and not the sink vertex (or, in the case of multiple sinks, not containing any of the sink vertices). A cut is minimum if the sum of the weights of the cut edges, connecting a vertex in the cut set with a vertex outside the cut set, is the smallest possible. A cut is contiguous if the cut set can be separated from the remaining vertices by a simply connected planar region whose boundary intersects only the cut edges.

We will present an O(n^2) algorithm counting all minimum source-sink cuts in weighted planar graphs, where n is the number of vertices. We will also sketch an O(n^3) algorithm counting all contiguous minimum single-source-multi-sink cuts. In both cases, once the counting part is completed, subsequent sampling is very fast: a uniformly random cut can be produced in additional linear time.

The counting algorithms share a common outline. First, we reduce the problem to the problem of counting a different type of cut in an unweighted planar directed acyclic graph (these cuts can also be thought of as maximal antichains in the corresponding partially ordered set). These cuts correspond to certain cycles in the planar dual graph, and we employ dynamic programming to count them. We will discuss the first algorithm in detail and briefly sketch the issues introduced by the contiguity requirement.

Minimum source-sink and contiguous minimum single-source-multi-sink cuts have applications in computer vision and medical imaging, where the underlying graph often forms a 2D grid (lattice).

Based on joint works with Adam Friedlander and Zach Langley.

Connection between counting and sampling
Daniel Stefankovic (Rochester)

Counting problems arise in a wide variety of areas, for example, computer science, enumerative combinatorics, and statistical physics (estimating the value of a partition function is a counting problem). I will talk about the following aspect of approximate counting: given a sampling algorithm, how can one efficiently translate it into a counting algorithm?
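
A standard textbook instance of this sampling-to-counting translation is the hard-core (independent set) model: the number of independent sets of a graph equals the partition function Z(1), and Z(1)/Z(0) can be written as a telescoping product of ratios, each estimated from samples at a nearby activity. The Python sketch below implements this naively with Glauber dynamics; the schedule and sample sizes are arbitrary, and none of the efficiency questions the talk addresses are attempted here.

import math
import random

def glauber_sample(adj, lam, steps, rng):
    """Approximate sample from the hard-core model at activity lam."""
    n = len(adj)
    ind = set()
    for _ in range(steps):
        v = rng.randrange(n)
        if rng.random() < lam / (1.0 + lam) and not (adj[v] & ind):
            ind.add(v)          # occupy v (no occupied neighbour)
        else:
            ind.discard(v)      # otherwise v stays/becomes unoccupied
    return ind

def count_independent_sets(adj, levels=10, samples=300, steps=400,
                           rng=random.Random(0)):
    """Estimate Z(1), the number of independent sets, by telescoping activities."""
    lams = [k / levels for k in range(levels + 1)]   # from 0 (Z known) up to 1
    log_z = 0.0                                      # log Z(0) = 0
    for lo, hi in zip(lams, lams[1:]):
        # Z(lo)/Z(hi) = E_hi[(lo/hi)^{|I|}]  (0**0 == 1 covers the lo = 0 case)
        acc = sum((lo / hi) ** len(glauber_sample(adj, hi, steps, rng))
                  for _ in range(samples))
        log_z += math.log(samples / acc)             # accumulate log(Z(hi)/Z(lo))
    return math.exp(log_z)

# 4-cycle: the 7 independent sets are {}, four singletons, and two opposite pairs.
adj = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
print(count_independent_sets(adj))   # should come out roughly 7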

 

Toward Accurate Methods for Forensic Bloodstain Pattern Analysis
Dhavide Aruliah (University of Ontario Institute of Technology)

At present, bloodstain pattern analysis (BPA) is largely an empirical, qualitative sub-discipline of forensic science. Forensic BPA specialists analyze bloodstains found at crime scenes and attempt to infer the location of the blood-letting impact (or impacts) that caused the bloodstains. Traditional BPA methods for reconstructing crime scenes (notably stringing) ignore the effects of gravity and aerodynamic drag on the trajectories of blood droplets in flight. Such assumptions may produce acceptable qualitative predictions under certain conditions (e.g., when analyzing bloodstains caused by droplets moving at high speeds due to the discharge of a firearm). However, in many circumstances, back-tracking blood droplets along straight lines from bloodstains to a static impact location is misleading (e.g., when bloodstains are caused by assault with a blunt instrument or an edged weapon). Our ultimate aim is to develop software tools for quantitative analysis to support forensic BPA analysts.
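
To make the role of gravity and drag concrete, the following Python sketch integrates a single droplet trajectory with gravity and quadratic aerodynamic drag; back-tracking the resulting stain along a straight line misses the true impact height because it ignores the curvature of the real path. The droplet parameters and initial conditions are illustrative placeholders, not values from the speakers' experiments.

import numpy as np

def fly(p0, v0, diameter=2e-3, rho_blood=1060.0, rho_air=1.2, c_d=0.47,
        dt=1e-4, floor=0.0):
    """Integrate a droplet's flight from p0 with velocity v0 until z = floor."""
    g = np.array([0.0, 0.0, -9.81])
    m = rho_blood * np.pi * diameter**3 / 6.0        # droplet mass
    area = np.pi * (diameter / 2.0) ** 2             # cross-sectional area
    p, v = np.array(p0, float), np.array(v0, float)
    while p[2] > floor:
        drag = -0.5 * rho_air * c_d * area * np.linalg.norm(v) * v
        v = v + dt * (g + drag / m)                  # explicit Euler step
        p = p + dt * v
    return p

impact = np.array([0.0, 0.0, 1.2])                   # hypothetical impact site
v0 = np.array([3.0, 0.5, 1.0])                       # hypothetical launch velocity
stain = fly(impact, v0)
print("stain lands at", stain)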

We describe our framework for recording simulated blood-letting events and extracting droplet flight trajectories. The simulations consist of fake blood encased in ballistic gel being splattered by projectiles. The resulting blood flight trajectories are recorded by a stereo pair of high-speed cameras and the bloodstains are photographed to provide a collection of video and static image data sets to validate our inversion framework. We employ a sophisticated algorithm incorporating background removal, segmentation, 2D motion tracking and 3D reconstruction to extract meaningful flight trajectories from the video data.

 

Generic implicit constrained dynamics and algebraic preconditioners for graphics
Robert Bridson (University of British Columbia)

The emerging approach of "virtual practical effects" in feature film production leverages artists' intuition for the real world by simulating physics in a virtual world - instead of struggling to control complex software models, artists set up effects as they might in reality and let physics do the rest. This demands robust numerical solvers which can couple diverse physics together in unanticipated ways. We present a framework which unifies many previous elements (from viscous incompressible fluids to rigid body contact to inextensible cloth) and reduces to a sequence of least-squares-like problems. We further explore a new approach to algebraic almost-black-box domain decomposition which shows promise for tackling linear systems of this category.

 

Numerical Solution of Saddle-Point Linear Systems
Chen Greif (University of British Columbia)

Constrained partial differential equations and optimization problems typically require the solution of special linear systems known as saddle-point systems. When the matrices are very large and sparse, iterative methods must be used. A challenge here is to derive and apply solution methods that exploit the properties and the structure of the given discrete operators, and yield fast convergence while imposing reasonable computer storage requirements. In this talk I will provide an overview of solution techniques. In particular, we will discuss effective preconditioners and their spectral properties, bounds on convergence rates, and computational challenges.
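
As a small self-contained example of the kind of preconditioning the abstract alludes to, the Python/SciPy sketch below solves a toy symmetric saddle-point system with MINRES and an "ideal" block-diagonal preconditioner built from the (1,1) block and the Schur complement. The random matrices merely stand in for discretized PDE operators.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 60, 20
A = sp.random(n, n, density=0.1, random_state=0)
A = (A @ A.T + 10.0 * sp.identity(n)).tocsc()          # SPD (1,1) block
B = sp.random(m, n, density=0.2, random_state=1).tocsc()
K = sp.bmat([[A, B.T], [B, None]]).tocsc()             # saddle-point matrix
b = np.random.default_rng(0).standard_normal(n + m)

A_lu = spla.splu(A)                                    # exact solves with A
S = np.asarray(B @ A_lu.solve(B.T.toarray()))          # Schur complement B A^{-1} B^T
S_inv = np.linalg.inv(S)                               # small and dense here

def apply_p_inv(r):                                    # P^{-1} = diag(A^{-1}, S^{-1})
    return np.concatenate([A_lu.solve(r[:n]), S_inv @ r[n:]])

P_inv = spla.LinearOperator((n + m, n + m), matvec=apply_p_inv)
x, info = spla.minres(K, b, M=P_inv)
print("MINRES flag:", info, " residual:", np.linalg.norm(K @ x - b))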

 

The Parallel Full Approximation Scheme in Space and Time (PFASST) algorithm
Matthew Emmett (University of North Carolina)

The Parallel Full Approximation Scheme in Space and Time (PFASST) algorithm for parallelizing PDEs in time is presented.

To efficiently parallelize PDEs in time, the PFASST algorithm decomposes the time domain into several time slices.  After a provisional solution is obtained using a relatively inexpensive time integration scheme, the solution is iteratively improved using a deferred correction scheme.  To further improve parallel efficiency, the PFASST algorithm uses a hierarchy of discretizations at different spatial and temporal resolutions and employs an analog of the multi-grid full approximation scheme to transfer information between the discretizations.
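
PFASST itself combines SDC sweeps with multilevel FAS transfers, which is more than can be shown in a few lines; the Python sketch below illustrates only the simpler underlying parallel-in-time structure (a cheap serial provisional sweep followed by iterative corrections in which the expensive fine propagations over the time slices are independent), in the form of the classical parareal iteration for a scalar test equation.

import numpy as np

lam, T, N = -1.0, 2.0, 10              # test problem y' = lam*y, horizon, slices
dt = T / N

def coarse(y, dt):                     # one backward-Euler step over a slice
    return y / (1.0 - lam * dt)

def fine(y, dt, m=50):                 # m backward-Euler substeps over a slice
    for _ in range(m):
        y = y / (1.0 - lam * dt / m)
    return y

y = np.zeros(N + 1)
y[0] = 1.0
for n in range(N):                     # provisional serial coarse sweep
    y[n + 1] = coarse(y[n], dt)

for k in range(5):                     # corrections; the fine runs over the
    F = [fine(y[n], dt) for n in range(N)]    # slices are independent (parallel)
    y_new = y.copy()
    for n in range(N):
        y_new[n + 1] = coarse(y_new[n], dt) + F[n] - coarse(y[n], dt)
    y = y_new

print("parareal:", y[-1], " exact:", np.exp(lam * T))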

 

Mathematics and Algorithms for Simple Computing on Surfaces
Colin Macdonald (University of Oxford)

The Closest Point Method is a simple numerical technique for solving partial differential equations (PDEs) on curved surfaces or other general domains.  The method works by embedding the surface in a higher-dimensional space and solving the PDE in that space. Numerically, we can use simple finite difference and interpolation schemes on uniform Cartesian grids.  This presentation provides a brief overview of the Closest Point Method and outlines some current results.  In the spirit of a minisymposium that is examining links between continuous and discrete computational mathematics, I will discuss the issue of finding a narrow band surrounding a complicated surface (this is useful for an efficient implementation) and how we approach this (discrete) problem.

Joint work with Tom Maerz, Ingrid von Glehn, and Yujia Chen (Oxford) and Steve Ruuth (SFU).
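
A hedged, low-order illustration of the Closest Point Method: the Python sketch below solves the heat equation on the unit circle by alternating a closest point extension (linear interpolation on a uniform Cartesian grid) with forward Euler steps of the ordinary 2D Laplacian. For simplicity it uses the whole square grid rather than the narrow band discussed above, and first-order ingredients throughout.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

h = 0.05
xs = np.arange(-2.0, 2.0 + h / 2, h)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Closest point on the unit circle (the grid point at the origin is mapped
# arbitrarily to (1, 0); it is far from the circle and has no effect there).
r = np.hypot(X, Y)
cpx = np.where(r > 1e-12, X / np.maximum(r, 1e-12), 1.0)
cpy = np.where(r > 1e-12, Y / np.maximum(r, 1e-12), 0.0)
cp_pts = np.column_stack([cpx.ravel(), cpy.ravel()])

U = np.cos(np.arctan2(cpy, cpx))        # initial data u(theta) = cos(theta)

dt, t_final, t = 0.1 * h**2, 0.5, 0.0
while t < t_final:
    # 1) closest point extension: copy each grid value from its closest point
    U = RegularGridInterpolator((xs, xs), U)(cp_pts).reshape(U.shape)
    # 2) one forward Euler step of the Laplacian in the embedding space
    lap = np.zeros_like(U)
    lap[1:-1, 1:-1] = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:]
                       + U[1:-1, :-2] - 4.0 * U[1:-1, 1:-1]) / h**2
    U += dt * lap
    t += dt

approx = RegularGridInterpolator((xs, xs), U)([[1.0, 0.0]])[0]
print(approx, "vs exact", np.exp(-t))   # u(theta = 0, t) = exp(-t) cos(0)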

 

Stability of backward Euler applied to a model state dependent delay differential equation
Felicia Magpantay (York University)

We consider the stability properties of the backward Euler method for delay differential equations (DDEs) with respect to a test equation. We consider two cases: (i) constant delay and (ii) state-dependent delay. Runge-Kutta methods applied to DDEs have been studied by many authors, who have mainly considered stability regions independent of the delay and/or required the step size to be a submultiple of the delay (in the constant delay case). These assumptions put too much restriction on the method and cannot be used in the state-dependent case. We direct attention to the dependence of the stability regions on the delay function and present results that use general step sizes. The techniques used are derived from the method of Lyapunov functionals to directly prove the stability of the zero solution. We also prove the local stability of backward Euler for the step size used in an important case.

Joint work with A.R. Humphries and N. Guglielmi.
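
For concreteness, here is a minimal Python sketch of backward Euler applied to a constant-delay test equation with a step size that is not a submultiple of the delay, so the delayed value is taken from a piecewise-linear interpolant of the computed history. The coefficients are arbitrary illustrative choices, and the sketch does not reproduce the stability analysis of the talk.

import numpy as np

# Backward Euler for  y'(t) = a*y(t) + b*y(t - tau),  y(t) = 1 for t <= 0.
a, b, tau = -2.0, 0.5, 1.0
h, t_end = 0.3, 20.0                    # 0.3 does not divide the delay tau = 1.0

ts = [0.0]
ys = [1.0]

def history(t):
    """Piecewise-linear interpolant of the numerical solution (1 for t <= 0)."""
    if t <= 0.0:
        return 1.0
    return float(np.interp(t, ts, ys))

while ts[-1] < t_end:
    t_new = ts[-1] + h
    # y_new = y_old + h*(a*y_new + b*y(t_new - tau)); since h < tau the delayed
    # value lies in the known past, so the implicit equation is linear in y_new.
    y_delay = history(t_new - tau)
    y_new = (ys[-1] + h * b * y_delay) / (1.0 - h * a)
    ts.append(t_new)
    ys.append(y_new)

print("y(t_end) ~", ys[-1])             # with |b| < -a the solution decays to 0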

 

A Direction Splitting Algorithm for Flow Problems in Complex/Moving Geometries
Peter Minev (University of Alberta)

An extension of the direction splitting method for the incompressible Navier-Stokes equations proposed in [1] to flow problems in complex, possibly time-dependent geometries will be presented.  The approach stems from the idea of fictitious domain/penalty methods for flows in complex geometry. In our case, the velocity boundary conditions on the domain boundary are approximated with second-order accuracy, while the pressure subproblem is harmonically extended in a fictitious domain such that the overall domain of the problem has a simple rectangular/parallelepiped shape.

The new technique is still unconditionally stable for the Stokes problem and retains the same convergence rate in both time and space as the Crank-Nicolson scheme. A key advantage of this approach is that the algorithm has very impressive parallel performance, since it requires the solution of one-dimensional problems only, which can be performed very efficiently in parallel by a domain-decomposition Schur complement approach. Numerical results illustrating the convergence of the scheme in space and time will be presented. Finally, the implementation of the scheme for particulate flows will be discussed and some validation results for such flows will be presented.

[1] J.L. Guermond, P.D. Minev, A new class of massively parallel direction splitting for the incompressible Navier-Stokes equations. Computer Methods in Applied Mechanics and Engineering, 200 (2011), 2083–2093.
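
The key computational property of direction splitting (each step reduces to batches of independent one-dimensional tridiagonal solves) can be illustrated on a much simpler problem. The Python sketch below applies Peaceman-Rachford-style splitting to the 2D heat equation with Dirichlet data; the pressure, fictitious-domain and Navier-Stokes ingredients of the scheme in [1] are not shown.

import numpy as np
from scipy.linalg import solve_banded

n, dt, steps = 64, 1e-3, 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))     # initial condition

# Banded form of the 1D operator I - (dt/2) D2 (homogeneous Dirichlet data).
main = 1.0 + dt / h**2
off = -0.5 * dt / h**2
ab = np.zeros((3, n))
ab[0, 1:] = off
ab[1, :] = main
ab[2, :-1] = off

def d2(V, axis):
    """Second difference along one axis, with zero Dirichlet values outside."""
    pad = [(1, 1) if a == axis else (0, 0) for a in range(2)]
    return np.diff(np.pad(V, pad), n=2, axis=axis) / h**2

for _ in range(steps):
    rhs = U + 0.5 * dt * d2(U, axis=1)                 # explicit in y
    Us = solve_banded((1, 1), ab, rhs)                 # implicit in x: tridiagonal
    rhs = Us + 0.5 * dt * d2(Us, axis=0)               # explicit in x
    U = solve_banded((1, 1), ab, rhs.T).T              # implicit in y: tridiagonal

# Exact solution: exp(-2 pi^2 t) sin(pi x) sin(pi y).
print(U.max(), "vs exact", np.exp(-2 * np.pi**2 * steps * dt))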



Automatically generated variational integrators
George Patrick (University of Saskatchewan)

Many fundamental physical systems have variational formulations, such as mechanical systems in their Lagrangian formulation. Discretization of the variational principle leads to (implicit) symplectic and momentum-preserving one-step integration methods.  However, such methods can be very complicated.

I will describe some advances in the basic theory of variational integrators, and a software system called AUDELSI, which converts any ordinary one-step method into a variational integrator of the same order.
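
As a small illustration of what a variational integrator is (not of AUDELSI itself), the Python sketch below steps the harmonic oscillator by a midpoint-rule discrete Lagrangian and solves the discrete Euler-Lagrange equation at each step; the finite-difference derivatives and the initialization of the second point are crude choices made for brevity.

import numpy as np
from scipy.optimize import fsolve

omega, h = 1.0, 0.1

def Ld(q0, q1):
    """Midpoint-rule discrete Lagrangian for L = qdot^2/2 - omega^2 q^2/2."""
    qmid, qdot = 0.5 * (q0 + q1), (q1 - q0) / h
    return h * (0.5 * qdot**2 - 0.5 * omega**2 * qmid**2)

def D1(q0, q1, eps=1e-7):      # derivative of Ld in its first slot
    return (Ld(q0 + eps, q1) - Ld(q0 - eps, q1)) / (2 * eps)

def D2(q0, q1, eps=1e-7):      # derivative of Ld in its second slot
    return (Ld(q0, q1 + eps) - Ld(q0, q1 - eps)) / (2 * eps)

# First two positions from q(0) = 1, qdot(0) = 0 (initialization glossed over).
q = [1.0, float(np.cos(omega * h))]
for k in range(1, 200):
    # Discrete Euler-Lagrange equation: D2 Ld(q_{k-1}, q_k) + D1 Ld(q_k, q_{k+1}) = 0
    del_eq = lambda q2: D2(q[k - 1], q[k]) + D1(q[k], q2)
    q.append(float(fsolve(del_eq, q[k])[0]))

# The resulting one-step map is symplectic; the discrete energy stays bounded.
print("q(T) ~", q[-1], " exact:", np.cos(omega * 200 * h))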