

I am an assistant professor at the University of North Dakota. Previously I worked as a research associate in the MIL group at Cambridge University, investigating augmented reality and its application to industrial environments. Before coming to Cambridge I was at the University of Wisconsin-Madison in the Intelligent Systems Laboratory, conducting research in vision and robotics under the supervision of Nicola Ferrier.


I am currently building a lab at the University of North Dakota to conduct research in robotics, computer vision, and augmented reality. Projects will include improving modern assistive robots by using augmented reality to specify tasks for the robot. I will also be investigating new methods of robot control inspired by human motion. If you are a prospective graduate student interested in attending the University of North Dakota and would like to work with me, please contact me.

Past Work

Image of tracking

Semi-Autonomous Generation of Appearance-based Edge Models from Image Sequences:

Several powerful tracking techniques require 3D models augmented with image information, and such models typically take hours to create. The modeling system presented here produces models with all the information needed to take advantage of these techniques in mere minutes. Using structure-from-motion techniques, the system creates crude appearance-based edge models directly from an image sequence with only a few user annotations; the models rely on keyframes to capture unmodeled geometric structure and pose-related changes in the object's appearance. The paper also outlines the modifications needed to allow an existing tracking method to use the models. Thanks to ABB for their support of this work.
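The keyframe idea can be illustrated with a small sketch (hypothetical names and data, not the paper's implementation): the model stores a viewing direction with each keyframe, and at run time the tracker selects the keyframe captured from the viewpoint nearest the current one, so pose-dependent appearance changes are handled by switching keyframes.

```python
import math

def nearest_keyframe(view_dir, keyframes):
    """keyframes: list of (name, unit viewing direction) pairs.
    Return the name of the keyframe whose stored viewing direction
    makes the smallest angle with the current one."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety
    return min(keyframes, key=lambda kf: angle(view_dir, kf[1]))[0]

# Toy model with three keyframes captured from different viewpoints.
frames = [("front", (0.0, 0.0, 1.0)),
          ("left",  (1.0, 0.0, 0.0)),
          ("top",   (0.0, 1.0, 0.0))]
assert nearest_keyframe((0.1, 0.0, 0.99), frames) == "front"
```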

[2007 ISMAR Paper] [ABB presentation]

Control Panel: [Tracking]
Printer: [Input] [Keyframe Selection] [Tracking]

Tracking Video

Hybrid Tracking for Man-machine Interfaces:

This work, generously sponsored by ABB, developed hybrid tracking and spatial reference technologies that can be combined to deliver new human-machine interfaces. The project focused on creating algorithms for tracking objects so that graphics can be overlaid on the image, allowing a user to interact with it. The object localization system combines image tracking with an inertial rate gyroscope to robustly track objects.
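One common way to fuse a rate gyroscope with slower vision measurements is a complementary filter; the sketch below is a scalar illustration of that general idea (with made-up rates and gains), not the project's actual algorithm. The gyro gives fast updates that drift with its bias, while occasional vision fixes pull the estimate back toward the truth.

```python
def fuse_step(angle_est, gyro_rate, dt, vision_angle=None, blend=0.8):
    """One filter step: integrate the gyro, then blend in a vision fix."""
    angle_est += gyro_rate * dt              # fast but drifts with gyro bias
    if vision_angle is not None:             # vision is slow but drift-free
        angle_est = blend * angle_est + (1 - blend) * vision_angle
    return angle_est

# Simulate an object rotating at 0.5 rad/s, a gyro with a 0.02 rad/s bias,
# and a vision measurement available only every 5th frame.
dt, true_rate, bias = 0.01, 0.5, 0.02
true_angle = fused = pure_gyro = 0.0
for step in range(1, 201):
    true_angle += true_rate * dt
    pure_gyro += (true_rate + bias) * dt     # gyro-only estimate drifts away
    vision = true_angle if step % 5 == 0 else None
    fused = fuse_step(fused, true_rate + bias, dt, vision)

assert abs(fused - true_angle) < abs(pure_gyro - true_angle)
```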


PDAs as Tangible Interfaces:

We created a method that identifies handheld devices (e.g., smart phones and pocket PCs) to facilitate their use as tangible interfaces for desktop augmented reality systems. The proposed system leverages the ability of these devices to programmatically control their backlight intensity to display a binary code. The codes produced are non-intrusive, require no specialized hardware, and can be generated by most handheld devices. The technique is shown to accurately and robustly identify up to 16 different devices in under 500 msec and is easily expandable to 256 or more devices.
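The encoding can be sketched as follows (an illustrative reconstruction, not the published implementation): the device blinks its ID as a fixed-length binary pattern, and the desktop camera thresholds one brightness sample per bit. Four bits distinguish 16 devices; eight bits reach 256.

```python
def encode_id(device_id: int, n_bits: int = 4) -> list[int]:
    """Return the backlight on/off pattern (MSB first) for a device ID."""
    assert 0 <= device_id < 2 ** n_bits
    return [(device_id >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

def decode_id(brightness: list[float], threshold: float = 0.5) -> int:
    """Threshold one observed brightness sample per bit and rebuild the ID."""
    value = 0
    for sample in brightness:
        value = (value << 1) | (1 if sample > threshold else 0)
    return value

pattern = encode_id(11)                            # 11 -> [1, 0, 1, 1]
samples = [0.9 if b else 0.1 for b in pattern]     # idealized camera readings
assert decode_id(samples) == 11
```

Extending from 16 to 256 devices is just a matter of lengthening the code from 4 to 8 bits, at the cost of proportionally longer identification time.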

[2006 ISMAR Paper] [Video]
Our robot system

Visually Modulated Motion:

Current visual servoing systems are intended to provide slow, iterative motions and thus are not capable of performing tasks that require quick and complex movements. Visually modulated motion (VMM) was created to address this limitation. The VMM system maps visual input directly to a set of motor commands that generate a fast, complex motion of short duration. Once the motion is performed, the outcome is analyzed and the mapping between the visual input and the motor commands is updated. This biologically inspired paradigm of learning from previous motions gives the VMM system the ability to achieve the desired outcome with sufficient repetition.
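The learn-from-repetition loop can be illustrated with a deliberately tiny example (a one-dimensional toy, not the published system): an open-loop motion is executed, its outcome is compared with the visual target, and the vision-to-motor mapping is nudged before the next attempt.

```python
def execute_motion(command: float) -> float:
    """Stand-in plant: the outcome the (unmodeled) robot actually produces."""
    return 0.5 * command

def vmm_trial(target: float, gain: float, lr: float = 0.8):
    """Run one fast, open-loop motion, then update the visual-motor mapping."""
    command = gain * target            # map visual input directly to a command
    outcome = execute_motion(command)  # motion is too fast to correct online
    error = target - outcome           # analyze the outcome afterwards
    gain += lr * error / target        # nudge the mapping for the next attempt
    return gain, abs(error)

gain, target = 0.1, 1.0
for _ in range(30):
    gain, err = vmm_trial(target, gain)
assert err < 1e-3   # repetition drives the outcome onto the target
```

The point of the toy is the structure of the loop: no feedback is used during the motion itself; all correction happens between trials, as in the VMM paradigm.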

[2006 ICPR Paper] [Video]

Interface to Visual Servoing System:

I have been actively researching a general-purpose, user-friendly interface that allows easy specification of tasks for traditional visual servoing systems. The interface we developed allows the user to specify a task to the robot with a set of cues generated by mouse clicks. These cues not only specify the object but also aid in the segmentation process. We have demonstrated that this system can benefit people who use assistive or tele-operated robots by greatly reducing the time needed to complete a task.
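To illustrate how a single click can seed segmentation, here is a generic region-growing sketch (a standard flood-fill on intensities, offered as an illustration rather than the interface's actual algorithm): pixels connected to the clicked pixel and within a tolerance of its value are grouped into one object region.

```python
from collections import deque

def segment_from_click(image, seed, tol=10):
    """Grow a region from the clicked pixel over 4-connected neighbors
    whose intensity is within `tol` of the seed pixel's intensity."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A bright 2x2 "object" in a dark image; clicking (1, 1) selects all of it.
img = [[0,   0,   0,  0],
       [0, 200, 205,  0],
       [0, 198, 202,  0],
       [0,   0,   0,  0]]
assert segment_from_click(img, (1, 1)) == {(1, 1), (1, 2), (2, 1), (2, 2)}
```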

[2003 RA Paper]

Active Stereo Reconstruction:

Active stereo systems are composed of two cameras with computer-controlled vergence, pan, and tilt, which resemble the human vision system. The calibration of active stereo vision systems, needed for traditional model-based 3D reconstruction, requires the calibration of two cameras and their kinematics. To avoid the difficult calibration process we explored the use of neural networks to reconstruct the 3D positions of objects from information collected by the stereo head. We were able to demonstrate that our artificial neural network was a good alternative to traditional model-based methods.
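For contrast, the model-based route the network sidesteps can be sketched for an idealized verged stereo pair (simplified planar geometry with invented numbers, not the system's actual kinematics): given calibrated pan angles that fixate a target, the two viewing rays are intersected to triangulate its position.

```python
import math

def triangulate(theta_l, theta_r, baseline):
    """Intersect two fixation rays in the plane. Cameras sit at x = -b/2 and
    x = +b/2; each pan angle is measured from the forward (y) axis,
    positive toward +x."""
    y = baseline / (math.tan(theta_l) - math.tan(theta_r))
    x = y * math.tan(theta_l) - baseline / 2.0
    return x, y

# Fixate a target at (0.2, 1.0) with a 0.3 m baseline and recover it.
b, target = 0.3, (0.2, 1.0)
theta_l = math.atan2(target[0] + b / 2, target[1])   # left camera pan angle
theta_r = math.atan2(target[0] - b / 2, target[1])   # right camera pan angle
x, y = triangulate(theta_l, theta_r, b)
assert abs(x - target[0]) < 1e-9 and abs(y - target[1]) < 1e-9
```

Even in this toy form, the result is only as good as the assumed geometry, which is exactly the sensitivity that motivated the calibration-free neural network approach.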

In order to compare our neural network to traditional model-based reconstruction, a method to calibrate the active stereo system was needed. We found that methods presented in the literature were sensitive to error and produced undesirable results. Building on these reported techniques, we created a new calibration method that is robust to errors in the calibration data.

[2001 ICRA Paper] [2001 ANNIE Paper] [2002 ICRA Paper]

Jeremiah Neubert
Upson II Room 272
243 Centennial Drive Stop 8155
Grand Forks   ND   58202-8155
Tel: 701-777-2107 Fax: 701-777-4838
Email: jeremiah(dot)neubert(@)und(dot)edu