The materials used for this research come from public online databases and from images published in scientific papers and essays found through Google searches. The re-use of this material is intended as artistic research. If a researcher or institution believes this work violates their copyright, or if a subject does not wish to be included in this research, please contact me: accept my apologies, and your material will be removed.

The Japanese Female Facial Expression (JAFFE) Database

(as described on the original website) “The database contains 213 images of 7 facial expressions (6 basic facial expressions + 1 neutral) posed by 10 Japanese female models. Each image has been rated on 6 emotion adjectives by 60 Japanese subjects. The database was planned and assembled by Michael Lyons, Miyuki Kamachi, and Jiro Gyoba. We thank Reiko Kubota for her help as a research assistant. The photos were taken at the Psychology Department in Kyushu University.”

The 213 images cover seven facial expressions: happiness, sadness, surprise, anger, disgust, fear, and a neutral face. JAFFE is one of the classic datasets, providing a standard benchmark for training A.I. machines on emotion detection. Ten Japanese women posed in front of a camera at the Psychology Department in Kyushu University in 1998, showing the seven basic facial expressions.
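The semantic-rating scheme described above (60 raters scoring each image on six emotion adjectives) can be sketched with synthetic data. Everything here is illustrative: the random ratings and the 1–5 scale stand in for the real JAFFE rating data, and only the averaging mechanics are shown.

```python
import numpy as np

# The six emotion adjectives mirror the six basic expressions in the dataset.
ADJECTIVES = ["happiness", "sadness", "surprise", "anger", "disgust", "fear"]

rng = np.random.default_rng(0)
# 3 hypothetical images x 60 raters x 6 adjectives, ratings on a 1-5 scale
ratings = rng.integers(1, 6, size=(3, 60, 6))

mean_ratings = ratings.mean(axis=1)      # average over the 60 raters
dominant = mean_ratings.argmax(axis=1)   # strongest adjective per image

for i, idx in enumerate(dominant):
    print(f"image {i}: {ADJECTIVES[idx]} ({mean_ratings[i, idx]:.2f})")
```

In the real database the averaged adjective scores accompany each image, so a posed “anger” face can be checked against how observers actually rated it.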

Olivetti Face Dataset
The Database of Faces, AT&T Laboratories Cambridge

(as described on the original website) “The Database of Faces contains a set of face images taken between April 1992 and April 1994 at the lab. There are ten different images of each of 40 distinct subjects. The images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses).”

“Parameterisation of a stochastic model for human face identification”, Ferdinando Samaria and Andy Harter, Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, Sarasota FL, December 1994.

credits: AT&T Laboratories Cambridge.
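A minimal sketch of the eigenfaces technique that face databases of this kind (40 subjects, 10 images each) are commonly used to benchmark. Random arrays stand in for the real images here (the AT&T faces are 92×112 grayscale), so only the PCA mechanics are illustrated; this is not the method of the cited paper, which uses a stochastic (hidden Markov) model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_per_subject, h, w = 40, 10, 112, 92
# Flattened stand-in images: 400 rows of 10304 pixels each
faces = rng.random((n_subjects * n_per_subject, h * w))

# Center the data on the mean face
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components via SVD; rows of Vt are the "eigenfaces"
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 50
eigenfaces = Vt[:k]                # (50, 10304) basis images
weights = centered @ eigenfaces.T  # (400, 50) low-dimensional face codes

print(weights.shape)
```

Recognition then amounts to comparing these low-dimensional codes instead of raw pixels, which is why controlled variations in lighting, expression, and glasses make the database a useful test.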

The “Yale Face Database”


“There are 11 images per subject, one per different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink.”

The Extended Yale Face Database B

acknowledgement: “the Extended Yale Face Database B”

The Extended Yale Face Database B contains 16128 images of 28 human subjects under 9 poses and 64 illumination conditions (28 × 9 × 64 = 16128).
reference: Kuang-Chih Lee, Jeffrey Ho, and David Kriegman, “Acquiring Linear Subspaces for Face Recognition under Variable Lighting”, PAMI, May 2005.
reference: Athinodoros Georghiades, Peter Belhumeur, and David Kriegman, “From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose”, PAMI, 2001.

Side views of motorbikes, cars and cows

Authors Bastian Leibe and Bernt Schiele
Institute Darmstadt University of Technology
Cow data provided by Derek Magee, University of Leeds.
This work has been part of the CogVis project, funded in part by the Commission of the European Union (IST-2000-29375) and the Swiss Federal Office for Education and Science (BBW 00.0617).

Publications B. Leibe, A. Leonardis and B. Schiele. Combined object categorization and segmentation with an implicit shape model. In Proceedings of the Workshop on Statistical Learning in Computer Vision. Prague, Czech Republic, May 2004.

The Third International Fingerprint Verification Competition

A benchmark for fingerprint-based systems, covering both matching techniques and sensing devices.

The Berkeley Segmentation Dataset

The BSDS300 is a classic dataset used for research on image segmentation and boundary detection. It contains hand-labeled segmentations of natural images produced by 30 human subjects.
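To illustrate the kind of algorithm BSDS is used to evaluate, here is a toy boundary detector: gradient magnitude on a synthetic image, thresholded into a binary boundary map. The image, threshold, and region are made up for the sketch; real benchmarks compare such predicted maps against the dataset’s human-drawn segmentations.

```python
import numpy as np

# Synthetic 64x64 image with one bright square region
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

# Finite-difference gradients along rows and columns
gy, gx = np.gradient(img)
magnitude = np.hypot(gx, gy)

# Crude boundary map: pixels with strong intensity change
boundary = magnitude > 0.25

print(boundary.sum())  # count of pixels flagged as boundary
```

A human annotator would outline the square exactly; scoring a detector against several annotators’ outlines, as BSDS does, rewards boundaries people agree on and penalizes spurious edges.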

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics”, Proc. 8th Int’l Conf. Computer Vision, 2001.