Peer-reviewed

Improved Resolution of Boundary Layers for Spectral Collocation, with Manfred Trummer, SIAM J. Sci. Comput., 41(5) (2019)
We propose a new algorithm to improve the accuracy of spectral methods for singularly perturbed two-point boundary value problems. Driscoll and Hale [IMA J. Numer. Anal., 36 (2016), pp. 108--132] suggest resampling as an alternative to row replacement when including boundary conditions. Testing this with an iterated sine transformation [T. Tang and M. R. Trummer, SIAM J. Sci. Comput., 17 (1996), pp. 430--438] designed for boundary layers reveals that the transformation imposes artificial boundary conditions. We regularize the transformation to prevent this. The new regularized sine transformation is employed to solve boundary value problems with and without resampling, and it shows superior accuracy provided the regularization parameter is chosen from an optimal range.
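For illustration, here is a minimal sketch (not code from the paper) of an iterated sine map of the kind described by Tang and Trummer, applied to Chebyshev-Gauss-Lobatto points; the blend with the identity map controlled by eps is only a hypothetical stand-in for the regularization discussed above.

```python
# Minimal sketch: iterated sine map clustering Chebyshev points toward the
# endpoints, where boundary layers sit. The eps blend is hypothetical and
# not the regularization defined in the paper.
import numpy as np

def chebyshev_points(n):
    """Chebyshev-Gauss-Lobatto points on [-1, 1]."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def iterated_sine(x, m, eps=0.0):
    """Apply the sine map m times; eps blends in the identity (hypothetical)."""
    y = x.copy()
    for _ in range(m):
        y = (1.0 - eps) * np.sin(0.5 * np.pi * y) + eps * y
    return y

x = chebyshev_points(32)
for m in (0, 1, 2, 3):
    y = iterated_sine(x, m, eps=1e-6)
    # The spacing of the two nodes nearest x = 1 shrinks rapidly with m,
    # which is what resolves thin boundary layers.
    print(m, abs(y[0] - y[1]))
```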
Preconditioning of spectral methods via Birkhoff interpolation, with Manfred Trummer, Numerical Algorithms, 79(2), 555-573 (2018)
High-order differentiation matrices as calculated in spectral collocation methods usually carry large round-off errors and have large condition numbers (Baltensperger and Berrut, Computers and Mathematics with Applications, 37(1), 41–48, 1999; Baltensperger and Trummer, SIAM J. Sci. Comput., 24(5), 1465–1487, 2003; Costa and Don, Appl. Numer. Math., 33(1), 151–159, 2000). Wang et al. (SIAM J. Sci. Comput., 36(3), A907–A929, 2014) present a method to precondition these matrices using Birkhoff interpolation. We generalize this method to all orders and boundary conditions, allowing arbitrary rows of the system matrix to be replaced by the boundary conditions. The preconditioner is an exact inverse of the highest-order differentiation matrix in the equation; thus, its product with that matrix can be replaced by the identity matrix. We show the benefits of the method for high-order differential equations, including an improved condition number and, more importantly, higher accuracy of solutions compared to other methods.
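As a rough illustration of the conditioning problem the preconditioner targets, the sketch below builds the standard Chebyshev second-derivative matrix with Dirichlet rows deleted and shows its condition number growing with N; the numerically inverted block is only a stand-in for the paper's analytically constructed Birkhoff preconditioner.

```python
# Sketch only: ill-conditioning of the Chebyshev second-derivative matrix,
# and how an exact-inverse preconditioner (here crudely imitated by a
# numerical inverse) brings the condition number down to about 1.
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix (Trefethen's cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

for N in (16, 32, 64, 128):
    D, x = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]      # u(-1) = u(1) = 0 imposed by deletion
    P = np.linalg.inv(D2)         # stand-in for an exact inverse preconditioner
    print(f"N={N:4d}  cond(D2) = {np.linalg.cond(D2):.2e}   cond(P @ D2) = {np.linalg.cond(P @ D2):.2f}")
```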

Submitted for review

A provably robust algorithm for triangle-triangle intersections in floating point arithmetic, with Martin J. Gander, preprint, submitted to TOMS (2021)
Motivated by the unexpected failure of the triangle intersection component of the Projection Algorithm for Nonmatching Grids (PANG), this article provides a robust version with proof of backward stability. The new triangle intersection algorithm ensures consistency and parsimony across three types of calculations. The set of intersections produced by the algorithm, called representations, is shown to match the set of geometric intersections, called models. The article concludes with a comparison between the old and new intersection algorithms for PANG using an example found to reliably generate failures in the former.
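The kind of floating-point inconsistency such an algorithm must rule out can be seen in a small sketch (not taken from the paper): the naive orientation predicate, evaluated in double precision, can disagree with exact rational arithmetic on nearly collinear points.

```python
# Sketch: naive orientation test vs. exact arithmetic for nearly collinear points.
# Sign disagreements of this kind are what break naive intersection codes.
from fractions import Fraction
import random

def orient_float(a, b, c):
    """Sign of twice the signed triangle area, in double precision."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def orient_exact(a, b, c):
    """Same determinant evaluated exactly with rationals."""
    A = [Fraction(v) for v in a]
    B = [Fraction(v) for v in b]
    C = [Fraction(v) for v in c]
    return (B[0] - A[0]) * (C[1] - A[1]) - (B[1] - A[1]) * (C[0] - A[0])

def sign(x):
    return (x > 0) - (x < 0)

random.seed(0)
mismatches, trials = 0, 10000
for _ in range(trials):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    t = random.random()
    # c lies (almost) on the segment ab, nudged by roughly one unit in the last place
    c = (a[0] + t * (b[0] - a[0]) + random.uniform(-1e-16, 1e-16),
         a[1] + t * (b[1] - a[1]) + random.uniform(-1e-16, 1e-16))
    if sign(orient_float(a, b, c)) != sign(orient_exact(a, b, c)):
        mismatches += 1
print(f"{mismatches} of {trials} nearly degenerate cases misclassified")
```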
Cycles in Newton-Raphson preconditioned by Schwarz (ASPIN and its cousins), with Martin J. Gander, preprint, submitted to DD26 proceedings (2021)
Newton-Raphson preconditioned by Schwarz methods lacks sufficient criteria for convergence. We explore an alternating Schwarz method accelerated by Newton-Raphson to find an example where the underlying Schwarz method converges but the Newton-Raphson acceleration fails. Alternating Schwarz is posed as a fixed-point iteration to make use of theory for generic root-finding methods. We propose an algorithm combining several aspects of this theory and others to guarantee convergence.
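A scalar toy example of this phenomenon (not the Schwarz example from the paper): the fixed-point iteration for a contractive map G converges, while Newton-Raphson applied to the residual F(x) = x - G(x) oscillates and diverges from the same starting point. Here G(x) = x - arctan(x) is a hypothetical stand-in for a convergent alternating Schwarz map.

```python
# Toy illustration: the fixed-point iteration converges to x* = 0,
# but Newton on the residual F(x) = arctan(x) diverges from x0 = 2.
import math

def G(x):
    return x - math.atan(x)          # contractive map with fixed point x* = 0

def F(x):
    return x - G(x)                  # residual, equals arctan(x)

def dF(x):
    return 1.0 / (1.0 + x * x)       # derivative of arctan(x)

x_fp = x_newton = 2.0
for k in range(6):
    x_fp = G(x_fp)                                      # plain fixed-point step: converges
    x_newton = x_newton - F(x_newton) / dF(x_newton)    # Newton step: oscillates, diverges
    print(f"k={k+1}:  fixed point {x_fp: .6f}   Newton {x_newton: .3e}")
```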

Other

Spectral Differentiation: Integration and Inversion, master's thesis, Simon Fraser University (2018)
Pseudospectral differentiation matrices suffer from large round-off errors and give rise to ill-conditioned systems when used to solve differential equations numerically. This thesis presents two types of matrices designed to precondition these systems and improve robustness to this round-off error for spectral methods on Chebyshev-Gauss-Lobatto points. The first of these is a generalization of a pseudospectral integration matrix described by Wang et al. [18]. The second uses this integration matrix to construct the matrix representing the inverse operator of the differential equation. Comparison is made between expected and calculated eigenvalues. Both preconditioners are tested on several examples; in many cases, accuracy improves over the standard methodology by several orders of magnitude. Using these matrices on general sets of points is briefly discussed.
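The kind of eigenvalue comparison mentioned above can be sketched as follows (an illustration only, not code from the thesis): the least negative eigenvalues of the Chebyshev-Gauss-Lobatto second-derivative matrix with homogeneous Dirichlet conditions approximate the exact eigenvalues -(k*pi/2)^2 of u'' = lambda*u on [-1, 1] with u(-1) = u(1) = 0.

```python
# Sketch: computed vs. exact eigenvalues of the Dirichlet second-derivative
# operator on [-1, 1], discretized on Chebyshev-Gauss-Lobatto points.
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix (Trefethen's cheb)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 36
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                               # impose u(-1) = u(1) = 0 by deletion
computed = np.sort(np.linalg.eigvals(D2).real)[::-1]   # least negative eigenvalues first
for k in range(1, 6):
    exact = -(k * np.pi / 2.0) ** 2
    print(f"k={k}: computed {computed[k-1]: .6f}   exact {exact: .6f}")
```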