Daniel Taranovsky, B.Sc. (Hon.), M.Sc.
This webpage is intended as a forum for presenting some of my academic work, rather than an exhaustive representation of my views, interests, and aspirations. Feel free to contact me regarding anything in this document that interests you. This document is copyright (c) Daniel Taranovsky, 2001. You are granted permission for the non-commercial use, reproduction, distribution, or display of this document in any format under the following restrictions: the document must remain intact and complete, and appropriate credit must be given as to its source. All other rights reserved by the author. This webpage was last updated Wednesday, June 13, 2001.
I recently worked on a research project studying effective ways of controlling virtual figures. The project involved implementing a prototype system that allows the user to control a seated character with high-level directives. For example, the user may command the puppet to grasp an object or scratch its head, and the motion to accomplish the task is generated by the system. The work includes a significant inverse kinematics component, used to generate natural-looking postures in response to high-level commands. Below is the thesis abstract; a more in-depth discussion can be found in my written work, listed further down.
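The thesis describes the actual solver in detail; as a rough illustration of the kind of inverse kinematics involved, below is a minimal cyclic coordinate descent (CCD) sketch for a planar joint chain. CCD is one common IK technique; the function and its parameters are illustrative assumptions, not code from the system.

```python
import math

def ccd_ik(lengths, angles, target, iterations=50, tol=1e-4):
    """Cyclic Coordinate Descent IK for a planar chain rooted at the origin.

    lengths: link lengths; angles: relative joint angles in radians;
    target: (x, y) goal for the end effector. Returns updated angles.
    """
    def forward(angles):
        # Forward kinematics: world-space positions of every joint.
        joints = [(0.0, 0.0)]
        theta = 0.0
        for L, a in zip(lengths, angles):
            theta += a
            x, y = joints[-1]
            joints.append((x + L * math.cos(theta), y + L * math.sin(theta)))
        return joints

    angles = list(angles)
    for _ in range(iterations):
        joints = forward(angles)
        end = joints[-1]
        if math.hypot(end[0] - target[0], end[1] - target[1]) < tol:
            break
        # Sweep from the outermost joint inward, rotating each joint so
        # the end effector swings toward the target.
        for i in reversed(range(len(angles))):
            jx, jy = joints[i]
            a_end = math.atan2(end[1] - jy, end[0] - jx)
            a_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += a_tgt - a_end
            joints = forward(angles)
            end = joints[-1]
    return angles
```

For a two-link chain of unit lengths, a reachable target such as (1, 1) is typically hit within a few sweeps; joint limits and posture-quality terms (needed for natural-looking poses) are omitted here.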
Abstract
Controlling the motion of virtual characters with many degrees of freedom can be difficult and time consuming. For some applications, complete control over all joints at every time step is not necessary and actually hinders the creative process. However, endowing the character with fully autonomous behaviour and decision-making capabilities absolves the user of clearly specifying his intentions. In many circumstances the ideal level of control allows the user to specify motion in terms of high-level tasks with timing and stylistic parameters. The user is not encumbered by low-level details, yet retains complete control over the motion’s semantic interpretation. This relatively unexplored level of motion specification is termed “guided control”, and is the focus of our work. We present the issues and results encountered in implementing a prototype animation system with guided control of a virtual puppet.
Results
The following animations were not generated with keyframing techniques. A script detailing the state of the environment was input to the system, and a second script assigning temporal constraints to high-level tasks generated the motion.
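As an illustration of how a task script might be handled, here is a small hypothetical parser that splits a numbered script like the ones shown below into an ordered list of task strings. The format and function name are assumptions for illustration, not the system's actual input language.

```python
import re

def parse_task_script(script):
    """Split a numbered task script, e.g.
    '1. Pickup blue object with left hand ; 2. Stack blue object on yellow object',
    into an ordered list of task strings."""
    steps = []
    for part in script.split(";"):
        part = part.strip()
        # Strip the leading step number ("1.", "2.", ...).
        part = re.sub(r"^\d+\.\s*", "", part)
        if part:
            steps.append(part)
    return steps
```

Each resulting task string would then be interpreted against the environment description (object names, hand assignments) before motion generation.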
This movie is generated from the script "1. Pickup blue object with left hand ; 2. Stack blue object on yellow object".
The same script was executed in four different environments. The height and position of the table, and the size and position of the objects, are user-specified parameters. env_mov3.mov (169K, 50K/sec, 20 frames/sec)
This movie is generated from the script "1. Pickup yellow object with left hand ; 2. Stack yellow object on blue object".
The same script was executed in four different environments, where the size and position of the objects were modified in each scenario. env_mov4.mov (178K, 50K/sec, 20 frames/sec)
This movie is an example of concurrent task execution, where the puppet performs two independent tasks simultaneously. The natural-language translation of the motion script is "Move left hand to (x,y,z) and right hand to (a,b,c) at the same time". The script was executed four times with different values for x, y, z, a, b, and c. slides_v2.mov (86K, 50K/sec, 20 frames/sec)
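One simple way to realize concurrent tasks like this is to give each end effector its own time window and interpolate its goal within that window. The sketch below is a hypothetical illustration using smoothstep easing, not the system's actual task scheduler.

```python
def sample_concurrent(tasks, t):
    """Sample concurrent end-effector goals at time t.

    tasks: list of (t_start, t_end, start_pos, goal_pos) tuples,
    one per end effector, with positions as (x, y, z) tuples.
    Returns the interpolated position of each end effector at time t.
    """
    out = []
    for (t0, t1, p0, p1) in tasks:
        # Clamp normalized time to the task's window so tasks with
        # different durations run side by side.
        u = min(1.0, max(0.0, (t - t0) / (t1 - t0)))
        # Smoothstep easing for gentle acceleration and deceleration.
        s = u * u * (3.0 - 2.0 * u)
        out.append(tuple(a + s * (b - a) for a, b in zip(p0, p1)))
    return out
```

A real system would feed each sampled goal to the IK solver per frame; here the two hands simply progress through their own windows independently.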
Two puppets playing a game of poker. ncards_v2.mov (1060K, 50K/sec, 20 frames/sec, audio included)
This movie demonstrates cooperative use of both hands to lift a pot. cauld_v2.mov (87K, 50K/sec, 20 frames/sec)
A striking motion with a sword is simulated by repetitively reaching for several points in space. The hand holding a pitchfork has its position and orientation locked while the other hand performs reaching tasks. sword_v2.mov (528K, 50K/sec, 20 frames/sec)
A puppet cutting through a log with a chainsaw. The motion was generated by simultaneously sliding both hands down while maintaining the saw's orientation. sawing_v3.mov (538K, 50K/sec, 20 frames/sec)
Daniel Taranovsky. Guided Control of Intelligent Virtual Puppets. M.Sc. Thesis, University of Toronto, 2001.
(Also Technical Report CSRG-434, Department of Computer Science, University of Toronto, 2001.)
2001.pdf (7.15 MB)
Daniel Taranovsky, Michiel van de Panne. Guided Control: A System for Directable Characters. Graphics Interface Poster Proceedings, 2001.
giposter2.pdf (59 KB)
Daniel Taranovsky. CPU Scheduling in Multimedia Operating Systems. Unpublished report, 1999.
os_res5.pdf (73 KB)
I completed my undergraduate studies at McGill University in the School of Computer Science. Recently I finished my graduate studies at the University of Toronto as a member of the Dynamic Graphics Project. The DGP lab is affiliated with several departments within the university, including the Department of Computer Science where I was registered as a student. My supervisor was Professor Michiel van de Panne.
dtaranovsky@sympatico.ca