Data Repository for Haptic Algorithm Evaluation
Emanuele Ruffaldi1, Dan Morris2, Timothy Edmunds3, Dinesh Pai3, and Federico Barbagli2
1PERCRO, Scuola Superiore S. Anna
2Computer Science Department, Stanford University
3Computer Science Department, Rutgers University



This page hosts the public data repository described in our 2006 Haptics Symposium paper:
Standardized Evaluation of Haptic Rendering Systems.

The goal of this repository is threefold:

Data Processing Pipeline

Physical models are scanned and used for analysis of haptic rendering techniques according to the following pipeline; we use the term trajectory to refer to a stream of positions and forces.

Comparing our rendered trajectory to our raw trajectory provides a quantitative basis for evaluating a haptic rendering technique. This page will soon include quantitative results comparing various rendering algorithms according to the metrics presented in our paper.
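The comparison between the rendered and raw trajectories comes down to an error measure over time-aligned force samples. As a minimal sketch (a hypothetical helper, not part of the repository's released code), the RMS force-vector error used in the tables below can be computed as:

```python
import math

def rms_force_error(raw_forces, rendered_forces):
    """RMS of the force-vector error between two time-aligned trajectories.

    Each argument is a sequence of (fx, fy, fz) tuples in newtons, one per
    sample. A hypothetical helper, not the repository's released code.
    """
    assert len(raw_forces) == len(rendered_forces)
    total = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(raw_forces, rendered_forces):
        # squared magnitude of the per-sample force-vector error
        total += (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return math.sqrt(total / len(raw_forces))
```

Identical trajectories give an error of zero; the metric penalizes both magnitude and direction mismatches, since it is computed on the full force vectors.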

This pipeline is summarized in the following diagram:

Data Formats

This section describes the data formats in which all files on this page are presented.

Data Sets

This section contains several data sets acquired with our scanner and processed with the pipeline described above. Data is specified in the .traj format defined above. Note that the forces in an "in-trajectory" file are defined to be zero, and the normal forces in an "out-trajectory" file are defined to be one or zero, representing contact or non-contact with the object surface.
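Since a trajectory is a stream of positions and forces, a .traj file can be read with a small parser. The sketch below assumes one sample per line with seven whitespace-separated fields (time, position x/y/z, force x/y/z); this column layout is an assumption for illustration, and the authoritative definition is the format description on this page:

```python
def parse_traj(lines):
    """Parse .traj samples from an iterable of text lines.

    Assumes (hypothetically; the real layout is defined in the format
    section) seven whitespace-separated fields per sample:
    time, px, py, pz, fx, fy, fz.
    Blank lines and '#' comments are skipped.
    """
    samples = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        t, px, py, pz, fx, fy, fz = map(float, line.split())
        samples.append((t, (px, py, pz), (fx, fy, fz)))
    return samples
```

For an in-trajectory file, the parsed forces would all be zero by definition; for an out-trajectory, the normal-force field carries the binary contact flag described above.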

  • (400KB) contains the trajectory files described above, for a simple plane. The following files are included:
    • probe.traj: the raw trajectory
    • out.traj: the out-trajectory
    • in.traj: the in-trajectory, computed using a stiffness of 0.5 N/mm
    • proxy.traj: the rendered trajectory, computed using the proxy algorithm as implemented in CHAI 1.25
    • potential.traj: the rendered trajectory, computed using a potential-field algorithm that applies a penalty force to the nearest surface point at each iteration
    • plane_4.obj: a mesh representing the plane
  • (3MB) contains the trajectory files described above, for a model of a duck. The following files are included:
    • probe.traj: the raw trajectory
    • out_9k.traj: the out-trajectory
    • in_9k.traj: the in-trajectory, computed using a stiffness of 0.5 N/mm
    • proxy_9k.traj: the rendered trajectory, computed using the proxy algorithm as implemented in CHAI 1.25
    • potential_9k.traj: the rendered trajectory, computed using a potential-field algorithm that applies a penalty force to the nearest surface point at each iteration
    • duck001_9k.obj: a mesh representing the duck
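The potential-field trajectories above apply a penalty force toward the nearest surface point at each iteration, and the in-trajectories use the same linear stiffness (0.5 N/mm) to relate force to penetration. A minimal sketch of one such penalty step, with hypothetical names (this is not the CHAI implementation):

```python
def penalty_force(probe_pos, nearest_surface_point, surface_normal, k=0.5):
    """Penalty force (N) for one potential-field rendering step.

    probe_pos and nearest_surface_point are (x, y, z) in mm;
    surface_normal is a unit outward normal at that surface point;
    k is the stiffness in N/mm (0.5 N/mm matches the in-trajectory
    computation above). Returns (0, 0, 0) when the probe is outside
    the surface. A hypothetical sketch, not the CHAI implementation.
    """
    dx = [p - s for p, s in zip(probe_pos, nearest_surface_point)]
    # signed penetration depth along the outward normal (positive inside)
    depth = -(dx[0] * surface_normal[0] +
              dx[1] * surface_normal[1] +
              dx[2] * surface_normal[2])
    if depth <= 0.0:  # probe outside the object: no contact force
        return (0.0, 0.0, 0.0)
    return tuple(k * depth * n for n in surface_normal)
```

Because the force depends only on the instantaneous nearest surface point, this scheme is cheap per iteration but lacks the geometric state that the proxy (god-object) algorithm maintains, which is what the comparison in the Example Analyses section quantifies.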


Code

This section contains utility code used to process our data formats, and will later be updated to include example analyses. Also see
CHAI 3D for the haptic and graphic rendering code used for our analyses.

Example Analyses

This section contains several analyses performed using the above data. The results presented here serve as reference data for comparison to other algorithms/implementations, and the approaches presented here provide examples of potential applications for this repository.

1. Friction Identification

The duck model and trajectories provided above were used to find the optimal (most realistic) coefficient of dynamic friction for haptic rendering of this object. This analysis uses the friction cone algorithm available in CHAI 3D (version 1.31). The in-trajectory derived from the physically-scanned (raw) trajectory is fed to CHAI for rendering, and the resulting forces are compared to the physically-scanned forces. The coefficient of dynamic friction is iteratively adjusted until a minimum error between the physical and rendered forces is achieved.
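The iterative adjustment is a one-dimensional minimization of rendering error over the friction coefficient. A sketch of that outer loop using a golden-section search, where `error_fn` stands in for re-rendering the in-trajectory at a given friction value (e.g. via CHAI's friction-cone algorithm) and returning the RMS error against the scanned forces:

```python
def identify_friction(error_fn, lo=0.0, hi=1.0, iters=40):
    """Golden-section search for the friction value minimizing RMS force
    error, assuming the error is unimodal over [lo, hi].

    error_fn(mu) is a caller-supplied stand-in for re-rendering the
    in-trajectory with dynamic friction mu and scoring it against the
    physically-scanned forces.
    """
    phi = (5 ** 0.5 - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    for _ in range(iters):
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if error_fn(c) < error_fn(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2
```

With a toy quadratic error surface, `identify_friction(lambda mu: (mu - 0.3) ** 2)` converges to approximately 0.3; in practice each `error_fn` evaluation is a full re-rendering pass, so a search that minimizes the number of evaluations matters.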

This analysis demonstrates the use of our repository for improving the realism of a haptic rendering system, and also quantifies the value of using friction in a haptic simulation, in terms of mean-squared-force-error.

Results for the no-friction and optimized-friction cases follow, including the relative computational cost in floating-point operations:

dynamic friction radius (mm) | RMS force error (N) | Floating-point Ops
0.00000 (disabled)           | 0.132030            | 10.36M

Friction identification: summary of results

We observe that the trajectory computed with friction enabled contains significantly lower force-vector-error than the no-friction trajectory, indicating a more realistic rendering, with a negligible increase in floating-point computation.

2. Comparison Between Proxy (god-object) and Voxel-Based Haptic Rendering

The duck model and trajectories provided above were used to compare the relative force errors produced by proxy (god-object) and voxel-based haptic rendering algorithms for a particular trajectory, and to assess the impact of voxel resolution on the accuracy of voxel-based rendering. This analysis does not include any cases in which the proxy provides geometric correctness that the voxel-based rendering could not; i.e. the virtual haptic probe never "pops through" the model.

Voxel-based rendering was performed by creating a fixed voxel grid and computing the nearest triangle to each voxel center. The stored triangle positions and surface normals are used to render forces for each voxel through which the probe passes.
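The precomputation/lookup split described above can be sketched in two small helpers (hypothetical names; `nearest_triangle_fn` stands in for the actual mesh proximity query). The grid build is the expensive init step, while the runtime query is a constant-time index computation and hash lookup, which is why the online floating-point cost is low:

```python
def voxel_index(pos, origin, voxel_size):
    """Map a probe position to integer voxel coordinates in a fixed grid.

    The runtime cost per haptic sample is just this index computation
    plus a hash lookup into the precomputed map.
    """
    return tuple(int((p - o) // voxel_size) for p, o in zip(pos, origin))

def build_voxel_map(voxel_indices, nearest_triangle_fn):
    """Precompute, for each voxel index, the nearest surface point and
    normal used to render forces when the probe enters that voxel.

    nearest_triangle_fn is a hypothetical stand-in for the per-voxel
    nearest-triangle query; this loop over all voxels is the
    initialization cost measured in the table below.
    """
    return {v: nearest_triangle_fn(v) for v in voxel_indices}
```

The memory overhead in the results table corresponds directly to the size of this precomputed map, which grows with voxel resolution.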

Results for the proxy algorithm and for the voxel-based algorithm (at two resolutions) follow, including the computational cost in floating-point operations, the initialization time in seconds (on a 1.5GHz Pentium), and the memory overhead:

Algorithm | Voxel Resolution | RMS Force Error (N) | Floating-point Ops | Init Time (s) | Memory
proxy     | –                | 0.129               | 10.38M             | 0.000         | –

Proxy vs. voxel comparison: summary of results

We observe that the voxel-based approach offers comparable force error and significantly reduced online computation, at the cost of significant preprocessing time and memory overhead, relative to the proxy (god-object) approach. Analysis of this particular trajectory does not capture the fact that the proxy-based approach offers geometric correctness in many cases where the voxel-based approach would break down.

3. Quantification of the Impact of Force Shading on Haptic Rendering

The duck model and trajectories provided above were used to quantify the impact of force shading on the accuracy of haptic rendering of a smooth, curved object. Force shading uses interpolated surface normals to determine the direction of feedback within a surface primitive, and is the haptic equivalent of Gouraud shading. Friction confounds the effects of force shading, so the forces contained in each out-trajectory were projected onto the surface normal of the "true" (maximum-resolution) mesh before performing these evaluations.
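The core of force shading is interpolating per-vertex normals across a triangle and renormalizing, exactly as Gouraud shading does for lighting. A minimal sketch of that interpolation step (hypothetical helper; the barycentric weights would come from the probe's contact point within the triangle):

```python
def shaded_normal(bary, vertex_normals):
    """Interpolate per-vertex unit normals with barycentric weights and
    renormalize -- the haptic analogue of Gouraud shading described above.

    bary is (w0, w1, w2) with the weights summing to 1;
    vertex_normals is a sequence of three unit (x, y, z) normals.
    The force direction is then taken along the returned normal rather
    than the flat face normal. A hypothetical sketch of the
    interpolation step only.
    """
    n = [sum(w * vn[i] for w, vn in zip(bary, vertex_normals))
         for i in range(3)]
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)
```

At a vertex (one weight equal to 1) the shaded normal reduces to that vertex's normal; between vertices it varies smoothly, which is what reduces the force discontinuities at facet edges on coarse meshes.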

Results are presented for several polygon-reduced versions of the duck model, including the average compute time per haptic sample (on a 3GHz P4) and the total computational cost in floating-point operations (for the complete trajectory). We also indicate the percentage reduction in RMS force error due to force shading.

Model Size (kTri) | Shading Enabled | Mean Compute Time (us) | Floating-point Ops | RMS Force Error (N) | Error Reduction (%)

Force shading: summary of results

We observe that force shading results in a significant reduction in RMS force error - as much as 36% - for models of typical size, at some cost (approximately 20%) in floating-point operations. For models at the maximum available resolution, force shading is not expected to contribute significantly to rendering quality, and indeed we see the error reduction fall off as meshes get very large. For models that are reduced to only several hundred polygons, the geometry no longer effectively captures the shape of the original mesh, and we see a falloff in error reduction as the quality of rendering decreases.

User Contributions

We hope to significantly expand the data sets available here over the coming months, and we welcome any of the following from users of this repository:


Please email us with any contributions or questions, or to discuss the data presented here. We welcome any information you have about how you're using this data or how it could be made more useful.

Written and maintained by Dan Morris

Support for this work was provided by NIH grant LM07295, the AO Foundation, and NSF grants IIS-0308157, EIA-0215887, ACI-0205671, and EIA-0321057. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.