The Street View Text Dataset

From TC11
Revision as of 12:03, 16 October 2012 by Dimos (talk | contribs)


Created: 2012-10-06
Last updated: 2012-10-16

Contact Author

Kai Wang
EBU3B, Room 4148
Department of Comp. Sci. and Engr.
University of California, San Diego
9500 Gilman Drive, Mail Code 0404
La Jolla, CA 92093-0404 
Email: k...@cs.ucsd.edu

Current Version

Example images from the Street View Text dataset.

1.0 (also available from the author's website)

Keywords

OCR, Real Scene, Urban Scene, Scene Text, Word Spotting, Scene Text Recognition, Scene Text Detection, Scene Text Localization

Description

The Street View Text (SVT) dataset was harvested from Google Street View. Image text in this dataset exhibits high variability and often has low resolution. In dealing with outdoor street-level imagery, we note two characteristics: (1) image text often comes from business signage, and (2) business names are easily available through geographic business searches. These factors make the SVT set uniquely suited for word spotting in the wild: given a street view image, the goal is to identify words from nearby businesses. More details about the dataset can be found in our paper, Word Spotting in the Wild [1]. For our up-to-date benchmarks on this data, see our paper, End-to-end Scene Text Recognition [2].
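Lexicon-driven recognition of the kind described above is commonly realized by constraining a raw recognizer output to the nearest lexicon entry, e.g. by edit distance. The sketch below illustrates that idea only; the lexicon words and the noisy input are hypothetical and not taken from the SVT release:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via standard dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def constrain_to_lexicon(raw: str, lexicon: list[str]) -> str:
    """Snap a noisy recognizer output to the closest lexicon word."""
    return min(lexicon, key=lambda w: edit_distance(raw.upper(), w.upper()))

# Hypothetical business-name lexicon and noisy OCR output:
lexicon = ["STARBUCKS", "SUBWAY", "WALGREENS"]
print(constrain_to_lexicon("5TARBUCK5", lexicon))  # -> STARBUCKS
```

Because each image comes with its own small lexicon of nearby business names, this kind of constrained decoding is far more tractable than open-vocabulary recognition.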

This dataset has only word-level annotations (no character bounding boxes) and should be used for

  • cropped lexicon-driven word recognition and
  • full-image lexicon-driven word detection and recognition.
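Since the annotations are word-level boxes plus a per-image lexicon, a minimal loader can be sketched as follows. The XML layout in the sample is an assumption modeled on the annotation files distributed with SVT; verify the tag names against the actual release before relying on it:

```python
import xml.etree.ElementTree as ET

# Illustrative sample; assumed layout, image name, and words -- not real data.
SAMPLE = """
<tagset>
  <image>
    <imageName>img/14_03.jpg</imageName>
    <lex>STARBUCKS,SUBWAY,WALGREENS</lex>
    <taggedRectangles>
      <taggedRectangle x="100" y="210" width="180" height="60">
        <tag>STARBUCKS</tag>
      </taggedRectangle>
    </taggedRectangles>
  </image>
</tagset>
"""

def parse_annotations(xml_text: str):
    """Yield (image name, lexicon list, [(word, (x, y, w, h)), ...]) per image."""
    root = ET.fromstring(xml_text)
    for image in root.iter("image"):
        name = image.findtext("imageName")
        lexicon = image.findtext("lex", "").split(",")
        words = []
        for rect in image.iter("taggedRectangle"):
            box = tuple(int(rect.get(k)) for k in ("x", "y", "width", "height"))
            words.append((rect.findtext("tag"), box))
        yield name, lexicon, words

for name, lexicon, words in parse_annotations(SAMPLE):
    print(name, lexicon, words)
```

Each yielded record pairs an image with its lexicon, which is exactly the input the lexicon-driven tasks above expect.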

If you need character-level training data, look into the Chars74K and ICDAR datasets.



This page is editable only by TC11 Officers.