Applied Mathematics

Approximate Mapping of Temperatures from Coarser to Finer Grid using Temporal Derivatives - Interim Report

Speaker: 
Ilona Ambartsumyan
Speaker: 
Cuiyu He
Speaker: 
Eldar Khattatov
Speaker: 
Sewoong Kim
Speaker: 
Lidia Mrad
Speaker: 
Minho Song
Date: 
Mon, Aug 10, 2015
Location: 
Institute for Mathematics and its Applications
Conference: 
PIMS-IMA Math Modeling in Industry XIX
Abstract: 

In many practical situations encountered in industry, there is incomplete knowledge of the material properties, boundary conditions, and sources for a given material/manufacturing process. However, process monitors such as thermocouples are typically used to measure the temperature evolution at certain locations to bridge the resulting gaps. Spatial gradients of temperature are needed to predict required quantities such as the internal stresses developed during the process. The temperature measurements are typically performed on a coarse grid, whereas the computation of stresses needs temperatures on a much finer grid for a more precise estimation of spatial gradients. Usually bilinear and/or weighted interpolation techniques are used to improve the spatial resolution of the temperatures. However, in cases where strong exothermic and/or endothermic reactions occur during the process, such interpolation techniques are error-prone. Using more thermocouples to measure temperature on a finer grid would be an easy solution, but such measurement is intrusive and perturbing, in addition to increasing the cost of data acquisition. The mapping of temperatures from a coarser grid to a finer grid is an ill-posed problem. In addition to the spatial distribution of temperatures, the thermocouple measurements also contain valuable temporal information in the form of derivatives. This additional information from the temporal derivatives helps improve the conditioning of the apparently ill-posed problem. The objective of this exercise is to develop an algorithm/procedure for mapping temperatures from a coarse grid to a finer grid while also exploiting the valuable temporal information in the corresponding derivatives.
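The abstract leaves the algorithm open; purely as a hedged 1-D sketch, the code below shows one way the temporal derivatives could enter: if the material approximately obeys a heat equation dT/dt = alpha * d2T/dx2 with known diffusivity alpha, each measured time derivative constrains the local spatial curvature, and the fine-grid temperatures can then be recovered from a regularized least-squares fit. The grids, the diffusivity, and the helper refine_temperatures are illustrative assumptions, not part of the project statement.

```python
# Minimal 1-D sketch (not the team's algorithm): reconstruct fine-grid temperatures
# from coarse thermocouple readings by also matching the spatial curvature implied
# by measured temporal derivatives through an assumed heat equation dT/dt = alpha * d2T/dx2.
import numpy as np

def refine_temperatures(x_coarse, T_coarse, dTdt_coarse, x_fine, alpha, smooth=1e-3):
    """Least-squares fine-grid reconstruction using values and time derivatives."""
    n = len(x_fine)
    h = x_fine[1] - x_fine[0]

    # Second-difference (1-D Laplacian) operator on the fine grid, interior nodes only.
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    L /= h ** 2

    # Rows that pick out the fine node nearest each thermocouple location.
    idx = [int(np.argmin(np.abs(x_fine - xc))) for xc in x_coarse]
    S = np.zeros((len(idx), n))
    S[np.arange(len(idx)), idx] = 1.0

    # Three blocks of equations (thermocouples assumed to sit at interior fine nodes):
    #   S T            = T_coarse        match the measured temperatures
    #   (Lap T)|idx    = dTdt / alpha    match curvature implied by dT/dt = alpha * Lap T
    #   smooth * Lap T = 0               mild smoothing to make the system well-posed
    rows = [i - 1 for i in idx]
    A = np.vstack([S, L[rows, :], smooth * L])
    b = np.concatenate([np.asarray(T_coarse),
                        np.asarray(dTdt_coarse) / alpha,
                        np.zeros(n - 2)])
    T_fine, *_ = np.linalg.lstsq(A, b, rcond=None)
    return T_fine

# Example with made-up readings: five interior thermocouples refined onto 41 nodes.
x_c = np.linspace(0.1, 0.9, 5)
x_f = np.linspace(0.0, 1.0, 41)
T_f = refine_temperatures(x_c, [300, 340, 420, 360, 310],
                          [0.0, 2.0, 8.0, 3.0, 0.5], x_f, alpha=1e-2)
```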

Deducing Rock Properties from Spectral Seismic Data

Speaker: 
Jiajun Han
Date: 
Wed, Aug 5, 2015
Location: 
Institute for Mathematics and its Applications
Conference: 
PIMS-IMA Math Modeling in Industry XIX
Abstract: 

Seismic Data in Exploration Geoscience

The recovery and production of hydrocarbon resources begins with an exploration of the earth’s subsurface, often through the use of seismic data collection and analysis. In a typical seismic data survey, a series of seismic sources (e.g. dynamite explosions) are initiated on the surface of the earth. These create vibrational waves that travel into the earth, bounce off geological structures in the subsurface, and reflect back to the surface where the vibrations are recorded as data on geophones. Computer analysis of the recorded data can produce highly accurate images of these geological structures which can indicate the presence of reservoirs that could contain hydrocarbon fluids. High quality images with an accurate analysis by a team of geoscientists can lead to the successful discovery of valuable oil and gas resources. Spectral analysis of the seismic data may reveal additional information beyond the geological image. For instance, selective attenuation of various seismic frequencies is a result of varying rock properties, such as density, elasticity, porosity, pore size, or fluid content. In principle this information is present in the raw data, and the challenge is to find effective algorithms to reveal these rock properties.

Spectral Analysis

Through the Fourier transform, the frequency content of a seismic signal can be observed. The short-time Fourier transform is an example of a time-frequency method that decomposes a signal into individual frequency bands that evolve over time. Such time-frequency methods have been used successfully to analyze complex signals with rich frequency content, including recordings of music, animal sounds, and radio-telescope data, amongst others. These methods show promise in extracting detailed information about seismic events, as shown in Figure 1, for instance.

Figure 1: Sample time-frequency analysis of a large seismic event (earthquake). From Hotovec, Prejean, Vidale, Gomberg, in J. of Volcanology and Geothermal Research, V. 259, 2013.
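As a point of reference, the sketch below shows how such a time-frequency panel can be computed with the short-time Fourier transform; the synthetic trace, sampling rate, and window length are illustrative assumptions, not project data.

```python
# Hedged illustration: STFT of a synthetic chirp-like trace standing in for a
# seismic recording; magnitudes give the energy per (frequency, time) bin.
import numpy as np
from scipy.signal import stft

fs = 500.0                                        # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * (20 + 5 * t) * t)      # instantaneous frequency drifts upward
trace += 0.3 * np.random.randn(t.size)            # measurement noise

f, tau, Z = stft(trace, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Z)                           # plot this to obtain a panel like Figure 1
print(spectrogram.shape)                          # (frequency bins, time windows)
```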

Problem Description

Are existing time-frequency analytical techniques effective in providing robust estimation of physical rock parameters that are important to a successful, economically viable identification of oil and gas resources? Can they accurately measure frequency-dependent energy attenuation, amplitude-versus-offset effects, or other physical phenomena that result from rock and fluid properties?

Using both synthetic and real seismic data, the goal is to evaluate the effectiveness of existing time-frequency methods such as Gabor and Stockwell transforms, discrete and continuous wavelet transforms, basis and matching pursuit, autoregressive methods, empirical mode decomposition, and others. Specifically, we would like to determine whether these methods can be utilized to extract rock parameters, and whether there are modifications that can make them particularly effective for seismic data.

The source data will include both land-based seismic surveys as well as subsurface microseismic event recordings, as examples of the breadth of data that is available for realistic analysis.

Figure 2: (a) Seismic data set from a sedimentary basin in Canada; the erosional surface and channels are highlighted by arrows. The same frequency attribute is extracted using the short-time Fourier transform (b), the continuous wavelet transform (c), and empirical mode decomposition (d).
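As an illustration of how a second transform from the list could be run on the same trace (complementing the STFT sketch above), the snippet below computes a continuous wavelet transform with the PyWavelets package; the wavelet choice and scale range are assumptions for illustration only.

```python
# Continuous wavelet transform of a synthetic decaying arrival; the window width
# varies with scale, unlike the fixed-window STFT sketched earlier.
import numpy as np
import pywt

fs = 500.0                                                # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-0.5 * t)     # decaying synthetic arrival

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(trace, scales, 'morl', sampling_period=1 / fs)

# |coeffs| is a (scale x time) energy map analogous to panels (b)-(d) of Figure 2.
print(np.abs(coeffs).shape, freqs.min(), freqs.max())
```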

Deep Learning for Image Anomaly Detection

Speaker: 
Jesse Berwald
Date: 
Wed, Aug 5, 2015
Location: 
Institute for Mathematics and its Applications
Conference: 
PIMS-IMA Math Modeling in Industry XIX
Abstract: 

The machine learning community has witnessed significant recent advances in image recognition [1,2]. Advances in computing power, primarily through the use of GPUs, have enabled a resurgence of neural networks with far more layers than was previously possible. For instance, the winning team at the ImageNet 2014 competition, GoogLeNet [1,3], triumphed with a 43.9% mean average precision, while the previous year's winner, the University of Amsterdam, won with a 22.6% mean average precision.

Neural networks mimic the neurons in the brain. As in the human brain, multiple layers of computational “neurons” are designed to react to a variety of stimuli. For instance, a typical scheme to construct a neural network could involve building a layer of neurons that detects edges in an image. An additional layer could then be added which would be trained (optimized) to detect larger regions or shapes. The combination of these two layers could then identify and separate different objects present in a photograph. Adding further layers would allow the network to use the shapes to decipher the types of objects recorded in the image.
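A minimal PyTorch sketch of the layered structure just described (not any particular competition model): two stacked convolutional layers, the first of which typically learns edge-like filters and the second larger shape detectors, followed by a small classifier head. The input size and class count are arbitrary assumptions.

```python
import torch
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # first layer: edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # second layer: larger shapes/regions
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),                  # assumes 224x224 RGB input, 10 classes
)

scores = tiny_cnn(torch.randn(1, 3, 224, 224))    # one random image through the stack
print(scores.shape)                               # torch.Size([1, 10])
```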

Goal of this project

An issue facing industries that deal with large numbers of digital photographs, such as magazines and retailers, is photo accuracy. Nearly all photos used in such contexts undergo some amount of editing (“Photoshopping”). Given the volume of photographs, mistakes occur [4]. Many of these images fall within a very narrow scope; an example would be the images used within a specific category of apparel on a retailer’s website. Detecting anomalies automatically in such cases would enable retailers such as Target to filter out mistakes before they enter production. By training a modern deep convolutional neural network [1,5] on a collection of correct images within a narrow category, we would like to construct a network that learns to recognize well-edited images. This amounts to learning a distribution of correct images so that poorly edited images can be flagged as anomalies or outliers.
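One common way to realize this idea, offered here only as a hedged sketch rather than the project's fixed design, is to train a convolutional autoencoder on approved images alone and flag images whose reconstruction error is unusually large.

```python
# Anomaly scoring by reconstruction error: the autoencoder is trained only on
# correct images, so poorly edited images tend to reconstruct badly.
# Images are assumed to be 3x224x224 tensors scaled to [0, 1].
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, image):
    """Mean squared reconstruction error; large values suggest a badly edited image."""
    with torch.no_grad():
        return nn.functional.mse_loss(model(image), image).item()
```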

Keywords: neural networks, deep learning, image processing, machine learning

Prerequisites: Programming experience in Python. Experience in a Linux environment is a plus.

Fast and Somewhat Accurate Algorithms

Speaker: 
Chai Wah Wu
Date: 
Wed, Aug 5, 2015
Location: 
Institute for Mathematics and its Applications
Conference: 
PIMS-IMA Math Modeling in Industry XIX
Abstract: 

In applications such as image processing, computer vision, or image compression, accuracy and precision are often less important than processing speed, since the input data is noisy and the decision-making process is robust against minor perturbations. For instance, the human visual system (HVS) makes pattern-recognition decisions even when the data is blurry, noisy, or incomplete, and lossy image compression is based on the premise that we cannot distinguish minor differences in images. In this project we study the tradeoff between accuracy and system complexity as measured by processing speed and hardware complexity.

Knowledge of linear algebra, computer science, and familiarity with software tools such as Matlab or Python is desirable. Familiarity with image processing algorithms is not required.

Fig. 1: error diffusion halftoning using Shiau-Fan error diffusion

Fig. 2: error diffusion halftoning using a single lookup table
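For concreteness, a textbook Floyd-Steinberg error-diffusion halftoner is sketched below; it is not the Shiau-Fan or lookup-table variants shown in the figures, but it makes the cost structure visible: the quantization error is propagated serially pixel by pixel, which is exactly the kind of expense that faster, approximately accurate alternatives (such as lookup tables) aim to reduce.

```python
# Textbook Floyd-Steinberg error diffusion, included only to make the
# speed/accuracy trade-off concrete.
import numpy as np

def floyd_steinberg(gray):
    """gray: 2-D float array in [0, 1]; returns a binary halftone."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Push the error to not-yet-visited neighbours (standard FS weights).
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```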

References:
1. Wu, C. W., "Locally connected processor arrays for matrix multiplication and linear transforms," Proceedings of 2011 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2169-2172, 15-18 May 2011.

2. Wu, C. W., Stanich, M., Li, H., Qiao, Y., Ernst, L., "Fast Error Diffusion and Digital Halftoning Algorithms Using Look-up Tables," Proceedings of NIP22: International Conference on Digital Printing Technologies, Denver, Colorado, pp. 240-243, September 2006.

Climate Change – does it all add up?

Speaker: 
Chris Budd
Date: 
Tue, May 5, 2015
Location: 
PIMS, University of Victoria
Conference: 
PIMS-UVic Distinguished Lecture
Abstract: 

Climate change has the potential to affect all of our lives. But is it really happening, and what has maths got to do with it?

In this talk I will take a light hearted view of the many issues concerned with predicting climate change and how mathematics and statistics can help make some sense of it all. Using audience participation I will look at the strengths and weaknesses of various climate models and we will see what the math can tell us about both the past and the future of the Earth's climate and how mathematical models can help in our future decision making.

From Euler to Born and Infeld, Fluids and Electromagnetism

Author: 
Yann Brenier
Date: 
Wed, Jun 10, 2015
Location: 
Centre Bernoulli, EFP-Lausanne
Conference: 
Marsden Memorial Lecture
Abstract: 

Like the Euler theory of hydrodynamics (1757), the Born-Infeld theory of electromagnetism (1934) enjoys a simple and beautiful geometric structure. Quite surprisingly, the BI model, which is of a relativistic nature, shares many features with classical hydrodynamics and magnetohydrodynamics. In particular, I will discuss its very close connection with Moffatt’s topological approach to the Euler equations, through the concept of magnetic relaxation.

 

The Marsden Memorial Lecture Series is dedicated to the memory of Jerrold E. Marsden (1942-2010), a world-renowned Canadian applied mathematician. Marsden was the Carl F. Braun Professor of Control and Dynamical Systems at Caltech, and prior to that he was at the University of California, Berkeley for many years. He did extensive research in the areas of geometric mechanics, dynamical systems, and control theory. He was one of the original founders, in the early 1970s, of reduction theory for mechanical systems with symmetry, which remains an active and much-studied area of research today.

 

This lecture is part of the Centre Interfacultaire Bernoulli Workshop on Classic and Stochastic Geometric Mechanics, June 8-12, 2015, which in turn is part of the CIB program on Geometric Mechanics, Variational and Stochastic Methods, 1 January to 30 June 2015.

Compressed Sensing

Speaker: 
Ben Adcock
Date: 
Thu, Feb 26, 2015
Location: 
Calgary Place Tower (Shell)
Conference: 
Shell Lunchbox Lectures
Abstract: 

Many problems in science and engineering require the reconstruction of an object - an image or signal, for example - from a collection of measurements.  Due to time, cost or other constraints, one is often severely limited by the amount of data that can be collected.  Compressed sensing is a mathematical theory and set of techniques that aim to improve reconstruction quality from a given data set by leveraging the underlying structure of the unknown object; specifically, its sparsity.  

 

In this talk I will commence with an overview of the fundamentals of compressed sensing and discuss some of its applications.  However, I will next explain that, despite the large and growing body of literature on compressed sensing, many of these applications do not fit into the standard framework.  I will then describe a more general framework for compressed sensing which aims to bridge this gap.  Finally, I will show that this new framework is not just useful in explaining existing applications of compressed sensing.  The new insight it brings leads to substantially better compressed sensing-based approaches than the current state-of-the-art in a number of applications.
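As a minimal sketch of how sparsity can be exploited in practice, the snippet below recovers a sparse vector from underdetermined measurements by iterative soft-thresholding (ISTA), one standard approach to the l1-regularized least-squares problem; the problem sizes, step size, and regularization parameter are illustrative assumptions and not tied to the talk.

```python
# Sparse recovery by ISTA: approximately solve min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
    return x

# Underdetermined system: 60 random measurements of a 200-dim signal with 8 nonzeros.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
print(np.linalg.norm(x_hat - x_true))               # small if recovery succeeds
```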

Wavelets and Directional Complex Framelets with Applications to Image Processing

Speaker: 
Bin Han
Date: 
Tue, Mar 24, 2015
Location: 
Calgary Place Tower (Shell)
Conference: 
Shell Lunchbox Lectures
Abstract: 

Wavelets have been successfully applied to many areas. For high-dimensional problems such as image/video processing, separable wavelets are widely used but are known to have some shortcomings such as lack of directionality and translation invariance. These shortcomings limit the full potential of wavelets. In this talk, we first present a brief introduction to orthonormal wavelets and tight framelets as well as their fast transforms using filter banks. Next we discuss recent exciting developments on directional tensor product complex tight framelets (TP-CTFs) for problems in more than one dimension. For image/video denoising and inpainting, we show that directional complex tight framelets have superior performance compared with current state-of-the-art methods. Such TP-CTFs inherit almost all the advantages of traditional wavelets but with directionality for capturing edges, enjoy desired features of the popular discrete Fourier/Cosine transform for capturing oscillating textures, and are computationally efficient. Such TP-CTFs are also naturally linked to Gabor (or windowed Fourier) transform and can be further extended. We expect that our approach of TP-CTFs using directional complex framelets can be applied to many other high-dimensional problems.
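To make the separable wavelets being contrasted here concrete, the snippet below applies one level of a separable 2-D discrete wavelet transform with PyWavelets; the random image and the 'db2' wavelet are illustrative assumptions. The three detail bands have only fixed horizontal, vertical, and diagonal orientations, which is the limited directionality the talk addresses.

```python
# One level of a separable (tensor-product) 2-D DWT: approximation plus three
# fixed-orientation detail bands.
import numpy as np
import pywt

image = np.random.rand(256, 256)                 # stand-in for a grayscale image
cA, (cH, cV, cD) = pywt.dwt2(image, 'db2')       # separable tensor-product DWT
print(cA.shape, cH.shape, cV.shape, cD.shape)    # each roughly half-size in each axis
```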

Economies with Financial Frictions: A Continuous Time Approach 3

Speaker: 
Yuliy Sannikov
Date: 
Fri, Jul 25, 2014
Location: 
PIMS, University of British Columbia
Conference: 
The Economics and Mathematics of Systemic Risk and Financial Networks
Abstract: 

The recent financial crisis has made obvious the need for models of financial stability. These three lectures will cover recent advancements in the modeling of crisis episodes, with particular emphasis on the use of continuous-time methods which make these models more tractable. Useful background reading includes the following

Economies with Financial Frictions: A Continuous Time Approach 2

Speaker: 
Yuliy Sannikov
Date: 
Thu, Jul 24, 2014
Location: 
PIMS, University of British Columbia
Conference: 
The Economics and Mathematics of Systemic Risk and Financial Networks
Abstract: 

The recent financial crisis has made obvious the need for models of financial stability. These three lectures will cover recent advancements in the modeling of crisis episodes, with particular emphasis on the use of continuous-time methods which make these models more tractable. Useful background reading includes the following
