============================================================================
IAPR TC-11 Newsletter                 July 2016     http://www.iapr-tc11.org
========== Contents ========================================================
* Message from the Editor
* Dates 'n' Deadlines
  - MANPU 2016, Cancun, Mexico                                     August 10
    (http://manpu2016.imlab.jp)
* Call for Participation:
  - Competition on Detection and Recognition of Arabic Text in Videos
    (AcTiVComp16), in conjunction with ICPR, Cancun, Mexico, December 2016
* Call for Papers:
  - 1st Int. Workshop on coMics ANalysis, Processing and           (repost)
    Understanding (MANPU2016), December 4, 2016,
    Cancun Center, Cancun, Q.Roo, Mexico
    http://manpu2016.imlab.jp
* Job offers:
  - 2x Postdoctoral positions, L3i laboratory,                     (repost)
    University of La Rochelle, France
  - Tenure-track Associate Professor position at                   (repost)
    Tokyo University of Agriculture and Technology
* New and Recently Published Datasets
* Call for Dataset Submissions
* Call for Contributions
============================================================================

========== Message from the Editor =========================================

Welcome to the July edition of our newsletter.

This edition brings you the Call for Participation for the Competition on
Detection and Recognition of Arabic Text in Videos (AcTiVComp16), which will
be organized in conjunction with ICPR 2016.

With best wishes for a relaxing holiday season,

Gernot A. Fink, IAPR TC-11 Newsletter Editor / Education Officer
Gernot.Fink@udo.edu

============================================================================

========== Dates 'n' Deadlines =============================================

Event/Location/Web:                      Event Date:           Deadline (paper submission):
----------------------------------------------------------------------------
* MANPU 2016, Cancun, Mexico             December 4            August 10
  (http://manpu2016.imlab.jp)
* ICDAR 2017, Kyoto, Japan               November 10-15, 2017  March 15, 2017
  (http://www.iapr.org/icdar2017)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* IWRR 2016, Amsterdam, The Netherlands  October 9             - passed -
  (http://www.cvc.uab.es/IWRR2016)
* ICPR 2016, Cancun, Mexico              December 4-8, 2016    - passed -
  (http://www.icpr2016.org/)
* CVPR 2016, Las Vegas, United States    June 26-July 1, 2016  - passed -
  (http://www.pamitc.org/cvpr16/)
* BMVC 2016, York, UK                    September 19-22       - passed -
  (http://bmvc2016.cs.york.ac.uk/)
* ICFHR 2016, Shenzhen, China            October 23-26         - passed -
  (http://www.nlpr.ia.ac.cn/icfhr2016)
* MedPRAI 2016, Tebessa, Algeria         November 22-23        - passed -
  (http://medprai2016.sciencesconf.org/)

============================================================================

========== New and Recently Published: TC-11 Datasets ======================

For a list of all datasets available visit: http://tc11.cvc.uab.es

============================================================================

========== Call for Participation: AcTiVComp16 (in conj. with ICPR 2016) ===

Call for Participation

Competition on Detection and Recognition of Arabic Text in Videos
[AcTiVComp16]

In conjunction with the 23rd International Conference on Pattern Recognition
(ICPR'16) in Cancun, Mexico, December 2016.

Recognizing text in video sequences, often called Video Optical Character
Recognition (Video-OCR), is an essential task in many applications such as
content-based multimedia retrieval, automatic broadcast annotation, and the
management of large archives. The field has therefore gained increasing
attention from researchers in recent decades.
In this context, we are organizing a contest on Arabic text detection and
recognition in news video frames at ICPR 2016, in order to highlight recent
advances in the field of Video-OCR. The competition includes two main tasks,
a detection task and a recognition task; for each one we provide a standard
annotated dataset and a set of evaluation protocols.

For more details about the contest, please visit the website:
http://diuf.unifr.ch/diva/AcTiVComp

We strongly encourage all researchers in the field of Video-OCR to register
and participate in AcTiVComp16, including those working on scene text and/or
non-Arabic video text.

If you have any specific questions, do not hesitate to contact the contact
person: Oussama Zayene

============================================================================

========== Call for Papers: MANPU 2016 (in conj. with ICPR 2016) == repost =

CALL FOR PAPERS

The First International Workshop on coMics ANalysis, Processing and
Understanding (MANPU2016)

December 4, 2016
Cancun Center, Cancun, Q.Roo, Mexico
http://manpu2016.imlab.jp

To be held in conjunction with the 23rd International Conference on Pattern
Recognition (ICPR2016)

Comics is a medium consisting of images combined with text and other visual
information in order to narrate a story. Nowadays, comic books are a
widespread cultural expression all over the world. The market for comics
continues to grow; in Japan, for example, it amounted to about 4.25 billion
USD in 2015. Moreover, from a research point of view, comics images are
attractive targets because the structure of a comics page includes various
elements (such as panels, speech balloons, captions, and leading characters)
whose drawing depends on the style of the author and presents a large
variability. Therefore, comics image analysis is not a trivial problem and
is still immature compared with other kinds of image analysis.

Scope and Topics

The scope of this workshop includes, but is not limited to:
- Comics Image Processing
- Comics Analysis and Understanding
- Comics Recognition
- Comics Retrieval and Spotting
- Comics Enrichment
- Reading Behavior Analysis
- Comics Generation
- Copy Protection and Fraud Detection
- Physical/Digital Comics Interfaces

Important Dates

10 August 2016     Paper submission deadline
 1 September 2016  Notification of acceptance
 4 September 2016  Last day of early-bird registration
 4 December 2016   Workshop

Datasets

To evaluate the proposed works, participants may use the following publicly
available datasets. Researchers can request to download them at each website.

eBDtheque consists of 100 images with ground truth for panels, speech
balloons, tails, text lines, and leading characters.
http://ebdtheque.univ-lr.fr/

Manga109 consists of 109 manga volumes comprising 21,142 images.
http://www.manga109.org/index_en.php

Paper Submissions

MANPU2016 follows a single-blind review process. Authors are required to
include their names and affiliations in their papers. The manuscript length
is 6 pages maximum. Dual or concurrent submission to the MANPU2016 workshop
and another conference or journal is not allowed. The template files for
submission will be announced as soon as possible.
Submission site:
https://easychair.org/conferences/?conf=manpu2016

General Co-Chairs
Jean-Marc Ogier, University of La Rochelle, France
Kiyoharu Aizawa, The University of Tokyo, Japan
Koichi Kise, Osaka Prefecture University, Japan

Program Co-Chairs
Jean-Christophe Burie, University of La Rochelle, France
Toshihiko Yamasaki, The University of Tokyo, Japan
Motoi Iwata, Osaka Prefecture University, Japan

Program Committee
Ying Cao, City University of Hong Kong, Hong Kong
Wei-Ta Chu, National Chung Cheng University, Taiwan
Mathieu Delalandre, Tours University, France
Clément Guérin, University of La Rochelle, France
Seiji Hotta, Tokyo University of Agriculture and Technology, Japan
Dimosthenis Karatzas, Computer Vision Centre, Spain
Johannes Kopf, Facebook, USA
Rynson W. H. Lau, City University of Hong Kong, Hong Kong
Muhammad Muzzamil Luqman, University of La Rochelle, France
Yusuke Matsui, National Institute of Informatics, Japan
Mitsunori Matsushita, Kansai University, Japan
Naoki Mori, Osaka Prefecture University, Japan
Satoshi Nakamura, Meiji University, Japan
Srikanta Pal, University of La Rochelle, France
Christophe Rigaud, University of La Rochelle, France
Akihiko Shirai, Kanagawa Institute of Technology, Japan
Takaaki Shiratori, Oculus Research, USA
Wong Tien-Tsin, Chinese University of Hong Kong, Hong Kong

Workshop Secretariat
manpu2016-inquiry@m.cs.osakafu-u.ac.jp

============================================================================

========== Job Offer: PostDoc Univ. La Rochelle (1) =============== repost =

POSTDOCTORAL POSITION
L3i laboratory, University of La Rochelle, France

TOPIC
Document analysis and description for hybrid document authentication

SUMMARY OF THE PROPOSED WORK
The recruited person will join the SHADES project within the L3i lab
(University of La Rochelle, France). The objective of this project is to
provide a new tool for authenticating the entire content of a document
through an advanced compact signature, in order to fight against fraud and
falsification. This signature is based on the document's content (text and
graphics) and structure (spatial relationships), which is why we call it a
semantic signature. Thanks to the hashing of the document's information
during the signature computation, no information about the original document
can be deduced from its signature alone. The signature can then be inserted
in the document or used in company content management software in order to
check the authenticity of the document without compromising its
confidentiality. Work on this subject is already ongoing. The objective of
this postdoctoral position is to build on this work and to propose stable
algorithms for document content analysis and description, as well as for
hashing.

KEY WORDS
Image/document processing, stability of image/document analysis algorithms,
hashing, document authentication

CONTEXT
The L3i is a research lab in La Rochelle. La Rochelle is a city in the
south-west of France on the Atlantic coast and is one of the most attractive
and dynamic cities in France. The L3i has been working for several years on
fraud detection in documents and document security and has become a
worldwide reference in this domain. The SHADES (Semantic Hash for Advanced
Document Electronic Signature) project is an interdisciplinary project on
document authentication, financed by the French National Research Agency
(ANR).
It involves the company ITESOFT, the FNTC (a professional federation
representing trusted third parties at the national and international level),
two research labs in computer science (the L3i of the University of La
Rochelle and the LIPADE of Paris Descartes University), and one in law (the
CEJEP of the University of La Rochelle). The recruited person will work in
close collaboration with the LIPADE and ITESOFT, but will also interact with
the other partners.

DESCRIPTION OF THE SUBJECT
Many documents need to be secured, ideally by means of an electronic
signature. Typically, the electronic signature is obtained by computing a
hash code on the document's pixel values. If two documents have the same
signature, they are authentic copies of each other; if their signatures
differ, one of the two documents is fraudulent or at least different from
the other. This concept works well for born-digital documents. However,
nowadays a document, the so-called hybrid document, is often used in either
electronic or paper form according to the need. A hybrid document therefore
undergoes a life cycle of printing and scanning, so different degraded
versions of the document exist, as the printing and scanning process
introduces specific degradations such as print-and-scan noise. Thus, the
classical concept of an electronic signature cannot be applied. For this
reason, our work intends to develop an advanced electronic signature for
securing hybrid documents, the so-called hybrid security. Our idea is to
extract the layout, the text and the images from the document, to describe
the page in a stable manner, and to compute a hash that will be the same for
all authentic copies of the document. Consequently, this requires document
analysis techniques with extreme stability, especially with regard to
print-and-scan noise.

Many document analysis algorithms have been evaluated with respect to
accuracy. However, the concept of accuracy does not apply to our security
context, and the two concepts of accuracy and stability should not be
confused. Accuracy requires a ground truth to evaluate how close a result is
to this ground truth; it can be evaluated with a single result as long as a
ground truth is available. Stability does not require a ground truth.
Instead, it requires at least two results obtained from similar inputs, in
order to see how close these results are to each other compared to how close
the inputs were. In our case, similar inputs are two photocopies of the same
document. A consequence is that an algorithm can be very stable and yet not
accurate: for instance, an algorithm that always makes the same mistakes,
or, in the case of segmentation, an algorithm that always produces a single
region covering the whole image. Such an algorithm would have absolute
stability and zero accuracy. The converse is not true: an algorithm with
absolute accuracy will always produce results that are identical to the
ground truth and hence identical to each other.

Furthermore, stability should not be confused with robustness. A robust
algorithm is capable of providing a relevant output even when a certain
amount of noise is contained in its input; the variation of this output,
however, is not constrained as it is for stability.
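To make the idea of a content-based signature and the stability requirement
more concrete, here is a minimal, purely illustrative Python sketch. It is
not the SHADES algorithm; the OCR/layout values and the coarse quantization
step are hypothetical and serve only to contrast a hash of raw pixels, which
breaks under print-and-scan noise, with a hash of a normalized content
description, which stays identical as long as the extracted content does.

    import hashlib

    def pixel_hash(pixels: bytes) -> str:
        # Fragile: a single bit flipped by print-and-scan noise changes the digest.
        return hashlib.sha256(pixels).hexdigest()

    def content_hash(words, boxes):
        # More stable: hash a normalized description of the extracted content
        # (recognized words plus coarsely quantized layout boxes), so small
        # pixel perturbations that leave the extracted content unchanged
        # produce the same digest.
        quantized = sorted((x // 50, y // 50, w // 50, h // 50) for (x, y, w, h) in boxes)
        description = " ".join(words) + "|" + repr(quantized)
        return hashlib.sha256(description.encode("utf-8")).hexdigest()

    # Two scans of the same page: the pixel buffers differ slightly ...
    scan_a = bytes([0, 16, 32, 255, 254])
    scan_b = bytes([0, 17, 32, 255, 254])
    print(pixel_hash(scan_a) == pixel_hash(scan_b))    # False: not stable

    # ... but the (hypothetical) OCR and layout output agrees once quantized.
    words = ["Invoice", "No.", "1234"]
    boxes_a = [(12, 8, 180, 20), (210, 9, 41, 20)]
    boxes_b = [(11, 9, 181, 19), (212, 8, 40, 21)]
    print(content_hash(words, boxes_a) == content_hash(words, boxes_b))  # True: stable

Note that such a scheme is only as stable as the underlying extraction: if
the OCR or layout analysis flips between two readings on two scans of the
same page, the digests diverge, which is exactly why this position focuses
on stable analysis algorithms.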
Our recent work has shown that traditional document analysis algorithms,
such as optical character recognition (OCR) and segmentation algorithms, are
unstable [1,4] because they contain thresholds and parameters [3]. Our first
approach, a layout descriptor developed without thresholds and parameters,
shows that stable algorithms can be achieved. The objective of this
postdoctoral position is to continue this ongoing work and to develop stable
document analysis algorithms and description methods.

PROFILE
The applicant should have a completed PhD in computer science, signal
processing or applied mathematics. The ideal candidate will have a strong
background in image or document analysis. Good programming skills are
required. The recruited person will be involved in the management of the
project, participate in consortium meetings and contribute to deliverables.
Therefore, good communication skills and autonomy are mandatory.

HOW TO APPLY
The application should include a brief description of research interests and
past experience, a CV, degrees and grades, a motivation letter, relevant
publications, letter(s) of recommendation and contact information of
reference persons.

DETAILS
Starting date: preferably between September and December 2016
Duration: 18 months
Salary: approximately 2300 EUR net per month

CONTACTS
Petra Gomez-Krämer: petra.gomez@univ-lr.fr
Jean-Marc Ogier: jean-marc.ogier@univ-lr.fr

REFERENCES
[1] S. Eskenazi, P. Gomez-Krämer, and J.-M. Ogier. Evaluation of the
    stability of four document segmentation algorithms. In International
    Workshop on Document Analysis Systems (DAS), 2016.
[2] S. Eskenazi, P. Gomez-Krämer, and J.-M. Ogier. The Delaunay document
    layout descriptor. In ACM International Symposium on Document
    Engineering (DocEng), 2015.
[3] S. Eskenazi, P. Gomez-Krämer, and J.-M. Ogier. Let's be done with
    thresholds. In International Conference on Document Analysis and
    Recognition (ICDAR), 2015.
[4] S. Eskenazi, P. Gomez-Krämer, and J.-M. Ogier. When document security
    brings new challenges to document analysis. In International Workshop on
    Computational Forensics (IWCF), Lecture Notes in Computer Science (LNCS
    8915), pages 104-116. Springer, 2015.

============================================================================

========== Job Offer: PostDoc Univ. La Rochelle (2) =============== repost =

Postdoctoral researcher position
Robust biometrics (face authentication) under spoofing attacks on mobile
platforms

The L3i lab of the University of La Rochelle (France) is looking for a
talented young researcher to join a team working on mobile authentication
using identity document capture and video interaction. You will work closely
with approximately 8 people (professors, associate professors, software
engineers, postdoctoral researchers and PhD students) in 2 labs and 2
companies to design, prototype, evaluate and disseminate a new solution for
the robust remote authentication of individuals using identity document
scans and a live video session.

CONTEXT
As part of the MOBIDEM project ("Mobile Identity for the Masses"), which
aims at lowering the barrier for the acquisition of digital certificates for
online signatures, the L3i is in charge of conceiving and developing an
efficient and effective solution for authenticating individuals during the
creation of their digital identity. This project involves two industrial
partners and another academic lab, and will lead to new solutions for
building online trust. The MOBIDEM project is funded by the French
government.
WORKING ENVIRONMENT
The L3i lab ("Image Interaction Informatics") is the ICT research component
of the University of La Rochelle, France. Located on the seaside in one of
the most touristic cities in France, the L3i lab has built solid experience
in collaborative research projects with many academic labs and industrial
partners. The L3i lab has a large team of researchers (postdocs, software
engineers, professors) who have been working on R&D projects in the domain
of document image analysis (DIA). The lab has several state-of-the-art
document scanners and sophisticated equipment for conducting high-quality
research. Composed of a hundred staff members, we value scientific
excellence, initiative, and cooperation.

PRACTICAL INFORMATION
Duration: 1 year (renewable for 1 extra year)
Salary and benefits: starting at EUR 2000 per month after tax, regular
  French health care, 25+ days annual leave
Starting date: before December 2016
Position open until: September 2016

EXPECTED QUALIFICATIONS AND SKILLS
- PhD in Computer Science, Mathematics, or another relevant field, with a
  sound publication record
- Experience in biometrics, ideally in face recognition / authentication
- Proficiency in C/C++/Matlab/OpenCV and/or Scientific Python
- Strong mathematical knowledge
- Experience in collaborative academic / industrial projects is a plus
- Experience in machine learning and computer vision in general
- Very good communication skills in English, both spoken and written
- Speaking French is a plus; otherwise, willingness to learn is expected
- Experience in video processing is a plus
- Experience with one or more deep learning frameworks would be greatly
  appreciated
- Willingness to prototype with mobile devices
- Highly motivated and enthusiastic
- Autonomous problem-solver
- Team player

For more information about the position, please contact:
muriel.visani(at)univ-lr.fr and joseph.chazalon(at)univ-lr.fr

TO APPLY, PLEASE:
1. Review the current state of the position at
   http://l3i.univ-larochelle.fr/recrutementMOBIDEM
2. Send an application to muriel.visani(at)univ-lr.fr and
   joseph.chazalon(at)univ-lr.fr with:
   - Your contact information (name, email, address with country)
   - A resume with relevant experience (1 or 2 pages)
   - A cover letter presenting key skills and motivations
   - 2 references (letters if possible, names and contacts at least)
   - [optional, if available] reviews of the PhD thesis by recognized
     scholars
   - [optional but well received] a short summary of your research project

============================================================================

========== Job Offer: Associate Professor, Tokyo ================ (repost) =

A tenure-track associate professor position is available at Tokyo University
of Agriculture and Technology. The area is Cognitive Interactive Technology,
related to (1) Data Engineering, (2) Media Information Processing,
(3) Machine Learning, and (4) Pattern Recognition.

https://jrecin.jst.go.jp/seek/SeekJorDetail?D01A9990=30A3079FDDAFD866D3418515F39CFA88&fn=4&dt=1&id=D116030987&ln_jor=1

If you are interested in the position, please e-mail Prof. Nakagawa at
nakagawa@cc.tuat.ac.jp

============================================================================

========== Call for Contributions ==========================================

This newsletter needs your support in order to provide useful information to
the TC11 community. Therefore, please contribute relevant news by sending a
short notice to the newsletter editor Gernot A. Fink.
Such news could include announcements of conferences and workshops, job
opportunities, reports on past conferences, book reviews, or anything else
that might be of interest to a wider audience involved in the construction
of reading systems.

============================================================================

========== Subscription Information ========================================

This newsletter is sent to subscribers of the IAPR TC11 mailing list. To
manage your subscription, please visit the mailing list homepage at:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=IAPR-TC11

The homepage for IAPR TC11 is http://www.iapr-tc11.org

============================================================================