Academic Freedom. Massive Datasets. Global Impact.

  • Freedom

    Take your ideas and research anywhere you want to go, working with researchers from top academic groups in Machine Learning, Computer Vision, and Natural Language Processing. Drive the state-of-the-art while publishing your work in top-tier academic conferences and journals.

  • Data

    Work with massive datasets from decades of real market data. Datasets that can fuel state-of-the-art research in NLP, time-series analysis, graph mining, deep learning, and reinforcement learning.

  • Impact

    Develop resources that will be used by millions of people around the world. Apply research findings and solve important business and data problems for one of the world’s largest banks and Canada’s biggest company.

Research Areas

Reinforcement Learning
Natural Language Processing
Information Retrieval
Deep Learning
High Performance Computing
Unsupervised Learning
Optimization in Machine Learning
Time-Series Analysis


  • RBC Research Institute to Present Two Spotlight Workshop Papers at ICML 2017

    • RBC Research Institute is proud to have two spotlight workshop papers at this year’s International Conference on Machine Learning (ICML) in Sydney, Australia. Representatives from our team will be downloading at least a dozen Netflix movies on their respective iPads in preparation for the flight to present their research next month.

      Below, a quick roundup of our work.

      Implicit Manifold Learning in Generative Adversarial Networks

      Our first spotlight paper, on Implicit Manifold Learning in Generative Adversarial Networks, was authored by Kry Lui, Yanshuai Cao, Maxime Gazeau, and Kelvin Zhang. Maxime and Kelvin are currently deciding which one of them will attend based on the empirically driven criteria of who can handle jet lag better [update: Kelvin won].

      The idea behind Generative Adversarial Networks (GANs) is to create fake pictures that look like real pictures. Starting from an input dataset – whether that’s images of cats, dogs, or people – the goal is to have the algorithm generate similar data that weren’t in that initial dataset.

      But we still wanted our pictures to look accurate and be diverse. It turns out that the choice of cost function is crucial to achieving these two desirable properties. We studied two different cost functions – the Jensen-Shannon divergence and the Wasserstein distance – from the perspective of sharpness (how realistic the pictures are) and mode dropping (whether the generator covers the full diversity of the data rather than collapsing onto only a few kinds of samples).

      We showed that the Jensen-Shannon divergence is a sensible objective to optimize when it comes to learning how to generate realistic samples, while the Wasserstein distance can give better sample diversity. We concluded that it’s worthwhile to look for ways to combine them or to seek new distances that inherit the best of both.
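      To make the contrast concrete, here is a hypothetical toy illustration (not code from the paper): the two quantities compared above, computed on simple 1-D discrete distributions with SciPy. It also shows one behavior that distinguishes them – for distributions with disjoint supports, the Jensen-Shannon divergence saturates at ln 2 no matter how far apart they are, while the Wasserstein ("earth mover's") distance keeps growing with the gap.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

# Two distributions over the same 1-D support; q is roughly p with
# its mass shifted to the right.
support = np.arange(5)
p = np.array([0.10, 0.40, 0.30, 0.15, 0.05])
q = np.array([0.05, 0.15, 0.30, 0.40, 0.10])

# SciPy's jensenshannon returns the *square root* of the JS
# divergence, so square it to recover the divergence itself.
js = jensenshannon(p, q) ** 2
# Wasserstein distance: how far probability mass must be moved
# (times how much) to turn p into q.
w = wasserstein_distance(support, support, u_weights=p, v_weights=q)
print(f"JS divergence:     {js:.4f}")
print(f"Wasserstein dist.: {w:.4f}")

# Disjoint point masses at the two ends of the support: JS saturates
# at ln 2, Wasserstein reflects the actual distance between them.
r = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
s = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
print(jensenshannon(r, s) ** 2)                                        # ~ln 2 = 0.6931...
print(wasserstein_distance(support, support, u_weights=r, v_weights=s))  # 4.0
```

      This saturation is exactly why the gradient signal from a JS-style objective can vanish when the generated and real distributions barely overlap, while a Wasserstein-style objective still indicates which direction to move.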

      Automatic Selection of t-SNE Perplexity

      Our second paper, authored by Yanshuai Cao and Luyu Wang, sought to replace the time-consuming trial-and-error process of tuning t-SNE for data visualization. t-SNE is a very powerful visualization tool, but until now there was no structured way to choose its perplexity hyperparameter – practitioners simply tried values until the plot looked right.

      We proposed a new decision function for selecting the t-SNE perplexity hyperparameter and, by eliciting preferences from human experts and inferring their hidden utility with a Gaussian process model, we found that our algorithm matched human judgment. In effect, our solution automatically sets the balance between retaining local structure and global structure, identifying the trade-off that works best for a given visualization. This work has been implemented as a feature in Kaleidoscope, a data visualization tool that we have already shipped to RBC business units.
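      To see why a selection criterion is needed at all, here is a hypothetical sketch (not the paper's decision function) of the manual sweep it replaces: fit scikit-learn's t-SNE at several candidate perplexities and record the Kullback-Leibler divergence that t-SNE itself minimizes. The raw KL value tends to shrink as perplexity grows, so it cannot be compared across perplexities directly – which is precisely the gap the paper's decision function fills.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

X = load_iris().data  # 150 points, 4 features

# Sweep candidate perplexities and record the final KL divergence
# of each fitted embedding.
results = {}
for perplexity in (5, 30, 50):
    tsne = TSNE(n_components=2, perplexity=perplexity, random_state=0)
    tsne.fit_transform(X)
    results[perplexity] = tsne.kl_divergence_

for p, kl in results.items():
    print(f"perplexity={p:>2}  final KL divergence={kl:.3f}")
```

      Because larger perplexities systematically yield smaller KL, picking the perplexity with the lowest raw KL would always favor the largest value tried; a usable criterion has to trade the KL term off against perplexity, which is the balance the paper learns from human preferences.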

      Congratulations to our researchers. Stay tuned to this page for dispatches from the Outback and please stop by the office after August 12 for some Marmite brownies.


Working for a bank doesn’t mean you have to number-crunch in a cubicle, dream in Excel, and wear Gordon Gekko three-piece suits. RBC Research offers a stimulating work environment and the platform to do world-class machine learning in a start-up culture. If contemplating how machines can autonomously build knowledge from observation is what you do in your spare time, then you are one of us. We move quickly, challenge each other, and are here to invent the future of banking.

Does this sound like you? Then let’s talk. You can join us full-time or as an intern. Check out the career opportunities listed below and watch for more opportunities as our team grows.

Contact Us


MaRS Heritage Building

101 College St, Suite 350

Toronto, ON M5G 1L7


University of Alberta

CCIS 3-232

Edmonton, AB T6G 2M9