The Street View Text Dataset
Kai Wang
EBU3B, Room 4148
Department of Comp. Sci. and Engr.
University of California, San Diego
9500 Gilman Drive, Mail Code 0404
La Jolla, CA 92093-0404
Email: k...@cs.ucsd.edu
Version 1.0 (also available from the author's Web site)
Keywords: OCR, Real Scene, Urban Scene, Scene Text, Word Spotting, Scene Text Recognition, Scene Text Detection, Scene Text Localization
The Street View Text (SVT) dataset was harvested from Google Street View. Image text in this data exhibits high variability and often has low resolution. Outdoor street-level imagery has two notable characteristics: (1) image text often comes from business signage, and (2) business names are easily available through geographic business searches. These factors make the SVT set uniquely suited for word spotting in the wild: given a street view image, the goal is to identify words from nearby businesses. More details about the dataset can be found in our paper, "Word Spotting in the Wild". For our up-to-date benchmarks on this data, see our paper, "End-to-end Scene Text Recognition".
This dataset has word-level annotations only (no character bounding boxes) and should be used for:
- cropped lexicon-driven word recognition (sketched below), and
- full-image lexicon-driven word detection and recognition.
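To make the lexicon-driven setting concrete, here is a minimal sketch in which a noisy transcription of a cropped word is snapped to the closest lexicon entry by edit distance. It illustrates the task setup only; it is not the recognition approach from the papers cited below, and the function names and sample lexicon are hypothetical.

```python
# Minimal sketch of lexicon-driven word recognition: snap a noisy
# transcription of a cropped word to the closest lexicon entry.
# Illustrates the task setup only, not the cited recognition models.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def spot_word(raw_prediction: str, lexicon: list[str]) -> str:
    """Return the lexicon word closest to the raw prediction."""
    return min(lexicon, key=lambda w: edit_distance(raw_prediction.upper(), w.upper()))

# Hypothetical lexicon drawn from nearby business names.
lexicon = ["MARKET", "DONUTS", "PIZZA", "BANK"]
print(spot_word("D0NUT5", lexicon))  # -> "DONUTS"
```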
Metadata and Ground Truth Data
We used Amazon Mechanical Turk to harvest and label the images from Google Street View. To build the dataset, we created several Human Intelligence Tasks (HITs) to be completed on Mechanical Turk.
In the image harvesting HIT, workers were assigned a unique city and asked to acquire 20 images containing text from Google Street View. They were instructed to: (1) perform a "Search nearby" on their city, (2) examine the businesses in the search results, and (3) look at the associated Street View imagery for text from the business name. When words were found, they composed the scene to minimize skew, saved a screenshot, and recorded the business name and address.
In the image annotation HIT, workers were presented with an image and a list of candidate words to label with bounding boxes. This contrasts with the ICDAR Robust Reading dataset in that we label only words associated with businesses. We used Alex Sorokin's Annotation Toolkit to support bounding-box image annotation. For each image, we obtained a list of local business names using the "Search nearby" function in Google Maps at the image's address and stored the top 20 business results, typically yielding about 50 unique words. To summarize, the SVT dataset consists of images collected from Google Street View, where each image is annotated with bounding boxes around words from businesses near where the image was taken.
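As a rough illustration of the lexicon construction just described, the sketch below splits business names into a deduplicated, uppercased word list. The function name and sample inputs are hypothetical; the actual pipeline simply stored the Google Maps results as noted above.

```python
# Hypothetical sketch: turn "Search nearby" business results into a
# per-image lexicon of unique uppercase words.

def build_lexicon(business_names: list[str]) -> list[str]:
    """Split business names into unique, uppercased, alphanumeric words."""
    words = set()
    for name in business_names:
        for token in name.upper().split():
            token = "".join(ch for ch in token if ch.isalnum())
            if token:
                words.add(token)
    return sorted(words)

print(build_lexicon(["Joe's Pizza", "First National Bank"]))
# -> ['BANK', 'FIRST', 'JOES', 'NATIONAL', 'PIZZA']
```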
The annotations are in XML using tags similar to those from the ICDAR 2003 Robust Reading Competition.
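Since the schema is not reproduced here, the loader below assumes ICDAR 2003-style tags (a tagset of image elements, each with an imageName and a set of taggedRectangle boxes carrying a tag word); the tag names and the file name are assumptions to verify against the distributed XML.

```python
# Sketch of loading the word-level annotations, assuming ICDAR 2003-style
# tags; check the actual tag names in the distributed XML files.
import xml.etree.ElementTree as ET

def load_annotations(path: str):
    """Yield (image name, [(word, x, y, width, height), ...]) pairs."""
    root = ET.parse(path).getroot()
    for image in root.iter("image"):
        name = image.findtext("imageName")
        boxes = []
        for rect in image.iter("taggedRectangle"):
            word = rect.findtext("tag")
            x, y = float(rect.get("x")), float(rect.get("y"))
            w, h = float(rect.get("width")), float(rect.get("height"))
            boxes.append((word, x, y, w, h))
        yield name, boxes

for name, boxes in load_annotations("train.xml"):  # hypothetical file name
    print(name, len(boxes), "words")
```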
References
- Kai Wang, Boris Babenko, and Serge Belongie, "End-to-end Scene Text Recognition", ICCV 2011, Barcelona, Spain.
- Kai Wang and Serge Belongie, "Word Spotting in the Wild", ECCV 2010, Heraklion, Crete, Greece.