Peter O'Donovan
[last name without apostrophe]@dgp.toronto.edu
My research interests lie in computer graphics, vision, HCI, and machine learning. More specifically, I am interested in learning models of aesthetics, interfaces for graphic design, and non-photorealistic rendering and image/video processing for artistic effects.
I am a computer scientist working at Adobe Systems in Seattle, Washington. I completed my PhD at the University of Toronto's
Dynamic Graphics Project
lab, working under the supervision of
Aaron Hertzmann.
I also interned with
Aseem Agarwala at Adobe in 2010 and 2011.
Prior to this I worked as a software analyst team lead and developed
interfaces to large-scale billing systems for the energy market. I completed a B.Sc. Honours in Computer Science at the
University of Saskatchewan where I worked with
David Mould in computer
graphics and did my honours thesis on optical flow and video stabilization with
Mark Eramian.
DesignScape: Design with Interactive Layout Suggestions
ACM SIGCHI Conference on Human Factors in Computing Systems, (Proc. CHI), 2015.
Peter O'Donovan, Aseem Agarwala, and Aaron Hertzmann
Paper Project Page
Exploratory Font Selection Using Crowdsourced Attributes
ACM Transactions on Graphics (Proc. SIGGRAPH), 2014, 33, 4.
Peter O'Donovan, Jānis Lībeks, Aseem Agarwala, and Aaron Hertzmann
Paper Project Page
Collaborative Filtering of Color Aesthetics
Computational Aesthetics, 2014.
Peter O'Donovan, Aseem Agarwala, and Aaron Hertzmann
Paper Project Page
Learning Layouts for Single-Page Graphic Designs
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2014, 20, 8.
Peter O'Donovan, Aseem Agarwala, and Aaron Hertzmann
Paper Project Page
This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs as well as, or better than, novice designers.
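As a rough illustration of the energy-based formulation, the sketch below scores a layout as a weighted sum of design-principle penalty terms and improves it by simple random search. The terms, weights, and optimizer here are hypothetical stand-ins: the paper's model is far richer, and its weights are learned from examples with Nonlinear Inverse Optimization rather than set by hand.

```python
import random

# Simplified, hypothetical energy-based layout model: each element is
# (x, y, w, h) on a unit page; the energy sums design-principle penalties.

def misalignment(elems):
    # Penalize left edges that almost, but not exactly, align.
    cost = 0.0
    for i in range(len(elems)):
        for j in range(i + 1, len(elems)):
            dx = abs(elems[i][0] - elems[j][0])
            if 0 < dx < 0.05:
                cost += 0.05 - dx
    return cost

def overlap(elems):
    # Penalize overlapping bounding boxes by their intersection area.
    cost = 0.0
    for i in range(len(elems)):
        for j in range(i + 1, len(elems)):
            x1, y1, w1, h1 = elems[i]
            x2, y2, w2, h2 = elems[j]
            ox = max(0.0, min(x1 + w1, x2 + w2) - max(x1, x2))
            oy = max(0.0, min(y1 + h1, y2 + h2) - max(y1, y2))
            cost += ox * oy
    return cost

def energy(elems, weights):
    return (weights["align"] * misalignment(elems)
            + weights["overlap"] * overlap(elems))

def synthesize(elems, weights, iters=2000, seed=0):
    # Toy random-search optimizer standing in for the paper's optimizer:
    # perturb one element at a time, keep moves that lower the energy.
    rng = random.Random(seed)
    best = [list(e) for e in elems]
    best_e = energy(best, weights)
    for _ in range(iters):
        cand = [list(e) for e in best]
        k = rng.randrange(len(cand))
        cand[k][0] = min(0.9, max(0.0, cand[k][0] + rng.uniform(-0.05, 0.05)))
        cand[k][1] = min(0.9, max(0.0, cand[k][1] + rng.uniform(-0.05, 0.05)))
        e = energy(cand, weights)
        if e < best_e:
            best, best_e = cand, e
    return best, best_e
```

With the weights fixed, the same energy can be used both to score an existing design and to synthesize or retarget a layout by optimization.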
Nonlinear Classification via Linear SVMs and Multi-Task Learning
ACM International Conference on Information and Knowledge Management (Proc. CIKM), 2014.
Xue Mao, Ou Wu, Weiming Hu, Peter O'Donovan
Paper
AniPaint: Interactive Painterly Animation From Video
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2012, 18, 3.
Peter O'Donovan and Aaron Hertzmann
Paper Project Page
We present an interactive system for creating painterly animation from video sequences. We introduce an
approach for controlling the results of painterly animation: keyframed Control Strokes can affect the placement, orientation,
movement, and color of automatically generated strokes. Furthermore, we introduce a new automatic synthesis algorithm that traces strokes through a video sequence in a
greedy manner, using an objective function to guide placement. This allows the method to capture fine details,
respect region boundaries, and achieve greater temporal coherence than previous methods.
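To give a flavor of greedy, objective-guided stroke placement, here is a single-image toy sketch: each stroke is stamped where the canvas currently deviates most from the target. Everything here (the grid representation, the "stamp" stroke, the objective) is a hypothetical simplification; the paper's algorithm additionally traces strokes through video for temporal coherence.

```python
# Toy greedy painterly rendering on grids of gray values in [0, 1].
def paint_greedy(target, n_strokes, radius=1):
    h, w = len(target), len(target[0])
    canvas = [[0.5] * w for _ in range(h)]  # start from a mid-gray canvas
    for _ in range(n_strokes):
        # Objective: place the next stroke where the canvas deviates
        # most from the target image.
        _, y, x = max((abs(target[yy][xx] - canvas[yy][xx]), yy, xx)
                      for yy in range(h) for xx in range(w))
        # "Stroke": stamp the target color in a small neighborhood.
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    canvas[ny][nx] = target[y][x]
    return canvas
```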
Color Compatibility From Large Datasets
ACM Transactions on Graphics (Proc. SIGGRAPH), 2011, 30, 4.
Peter O'Donovan, Aseem Agarwala, and Aaron Hertzmann
Paper Project Page
This paper studies color compatibility theories using large datasets, and develops new tools for choosing colors.
There are three parts to this work. First, using online datasets, we test new and existing theories of human color preferences.
For example, we test whether certain hues or hue templates may be preferred by viewers.
Second, we learn quantitative models that score the quality of a five-color set, called a color theme.
Such models can be used to rate the quality of a new color theme.
Third, we demonstrate simple prototypes that apply a learned model to tasks in color design, including improving existing themes and extracting themes from images.
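A minimal sketch of the second part, scoring a five-color theme with a learned linear model: the paper regresses ratings onto hundreds of color features, while the two features and the weights below are purely illustrative placeholders.

```python
import colorsys

def features(theme):
    # theme: list of five (r, g, b) tuples with components in [0, 1].
    hsv = [colorsys.rgb_to_hsv(*c) for c in theme]
    mean_sat = sum(s for _, s, _ in hsv) / len(hsv)
    # Spread of lightness values across the theme (a contrast cue).
    vals = [v for _, _, v in hsv]
    value_range = max(vals) - min(vals)
    return [1.0, mean_sat, value_range]  # bias term + two features

def score(theme, weights):
    # Predicted preference rating, e.g. on a 1-5 scale.
    return sum(w * f for w, f in zip(weights, features(theme)))
```

A model of this form can rate a new theme directly, or serve as the objective when optimizing an existing theme or extracting one from an image.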
Felt-Based Rendering
4th International Symposium on Non-Photorealistic
Animation and Rendering (NPAR 2006).
Peter O'Donovan and David Mould
Paper
Felt is mankind's oldest and simplest textile, composed of a pressed mass of fibers. Images can be formed directly in the fabric by arranging the fibers to represent the image before pressure is applied, a process called "felt painting". Here, we describe an automated synthesis method that transforms input images into felt-painted images.
Using Semantic Web Methods for
Distributed Learner Modeling
2nd International Workshop on Applications of Semantic Web
Technologies for E-Learning (SW-EL 04) held in conjunction with the International Semantic Web
Conference (ISWC 2004)
Mike Winter, Chris Brooks, Gord McCalla, Jim Greer, Peter O'Donovan
Paper
Here we describe a semantic web approach
for representing student models based on
distributed student data from learning environments
where the learner uses multiple applications and resources
to accomplish learning tasks. We also present a proposal
for revising those student models based on
arbitrary, web-based learner actions.
Learning View-based Mixture of Experts for Human Action Recognition
CSC2539 (Topics in Computer Vision: Visual Motion Analysis)
Peter O'Donovan
Paper
Many methods for action recognition use a view-independent approach
where actions from different views are treated identically. However, this results in
models which must deal with significantly different motions from different views
such as classifying a boxer from a rear view versus a side view. In this paper,
I explore the use of a view-based Mixture of Experts (MoE) model where each
expert is trained on data from a relative view between the camera and the subject.
This allows the experts to model a particular view and results in improved classification rates.
Separate view and action classifiers were trained using both SVMs and LD-CRF models, and the results were
compared on the HumanEva dataset.
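The combination rule itself is simple: marginalize the per-view action posteriors over the gating model's view posterior. The sketch below stubs the classifiers as plain callables (hypothetical stand-ins for the trained SVM/LD-CRF models).

```python
# View-based Mixture of Experts: a gating model estimates P(view | x),
# and each expert is an action classifier trained only on its view.
def moe_predict(x, gate, experts, actions):
    # gate(x) -> dict mapping view -> P(view | x)
    # experts[view](x) -> dict mapping action -> P(action | x, view)
    view_post = gate(x)
    combined = {a: 0.0 for a in actions}
    for view, p_view in view_post.items():
        expert_post = experts[view](x)
        for a in actions:
            # P(action | x) = sum over views of P(view | x) P(action | x, view)
            combined[a] += p_view * expert_post.get(a, 0.0)
    return max(combined, key=combined.get)
```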
Static Gesture Recognition with Restricted
Boltzmann Machines
CSC2515 (Introduction to Machine Learning)
Peter O'Donovan
Paper Dataset
In this paper I investigate a new technique for the recognition of static gestures
(poses) from laptop camera images. I apply Restricted Boltzmann Machines
(RBMs) to model the manifold of 3 human gestures: pointing, thumbs up, fingers
spread, as well as the default no-gesture case. The generative RBM model
performs significantly better than other classification techniques, including classical
discriminative neural networks and k-Nearest Neighbors on dimensionality-reduced images.
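For reference, a miniature RBM training step looks roughly like the following: one Gibbs step and a single contrastive-divergence (CD-1) weight update on binary vectors. This is a generic textbook sketch, not the paper's code, and the real models were trained on dimensionality-reduced camera images.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample_hidden(v, W, b_h, rng):
    # P(h_j = 1 | v) = sigmoid(b_h[j] + sum_i W[i][j] v[i])
    probs = [sigmoid(b_h[j] + sum(W[i][j] * v[i] for i in range(len(v))))
             for j in range(len(b_h))]
    return [1 if rng.random() < p else 0 for p in probs], probs

def sample_visible(h, W, b_v, rng):
    # P(v_i = 1 | h) = sigmoid(b_v[i] + sum_j W[i][j] h[j])
    probs = [sigmoid(b_v[i] + sum(W[i][j] * h[j] for j in range(len(h))))
             for i in range(len(b_v))]
    return [1 if rng.random() < p else 0 for p in probs], probs

def cd1_update(v0, W, b_v, b_h, lr, rng):
    # One step of contrastive divergence: data phase minus model phase.
    h0, p_h0 = sample_hidden(v0, W, b_h, rng)
    v1, _ = sample_visible(h0, W, b_v, rng)
    _, p_h1 = sample_hidden(v1, W, b_h, rng)
    for i in range(len(v0)):
        for j in range(len(b_h)):
            W[i][j] += lr * (v0[i] * p_h0[j] - v1[i] * p_h1[j])
```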