Pill Image Recognition Challenge: Submission Instructions

Introduction

On January 19, 2016, the National Library of Medicine (NLM) published a Federal Register Notice announcing the “Pill Image Recognition Challenge”. Through this Challenge, NLM seeks your input in developing high-quality algorithms and software that rank known prescription pill images in the NLM RxIMAGE database by similarity to images of unknown prescription pills. NLM plans to use your work to develop a future software system and API (Application Programming Interface) that searches the RxIMAGE database for the best candidate matches to the image of an unknown prescription pill taken with a mobile device, and displays the results.

These PIR Challenge “Submission Instructions” include:

  • Submission Instructions, which include:
    • Links to the data to be used in preparing Challenge submissions, including file specifications
    • Submission details, including software specifications, examples, details about a submission entry, VM (Virtual Machine) specifications, the evaluation system, and instructions for mailing in submissions
    • Evaluation and Winner Selection
  • Register for the Challenge
  • A submission checklist
  • MAP formula and examples - Mean Average Precision (MAP) will be used in judging the Challenge
  • Source code of the software that will be used to evaluate Challenge submissions
  • Notifications and FAQs about the Challenge

Questions and Comments

Questions and comments about the NLM Pill Image Recognition initiative are welcome at PIR@nlm.nih.gov.

Challenge announced: January 19, 2016
Submission period: April 4, 2016 to May 31, 2016
Winners announced: August 1, 2016
Prizes: $25,000 First Prize, plus $15,000 (Second Place), $5,000 (Third Place), and two $2,500 prizes (Honorable Mention)

Please read the Federal Register Notice (FRN) for a description of the Challenge, including the rules for participation. These Submission Instructions supplement the FRN.

Please use the PIR Challenge Checklist at http://pir.nlm.nih.gov/challenge/checklist to make sure your entry is complete. For any questions or comments related to the Challenge, please see the Notifications and FAQs at http://pir.nlm.nih.gov/challenge/notifications-and-FAQs or contact us at PIR@nlm.nih.gov.

Download the Data

For use in designing your submission to this Challenge, you are provided with 7000 images of pills taken as part of the NLM Computational Photography Project for Pill Identification (C3PI).

Consumer-quality Images (9 GB)
Reference Images (800 MB)

File Specifications

Reference images: This directory, named DR, contains 2000 high-quality JPEG reference images (one of the front and one of the back of each of 1000 pills). These images are taken from the NLM RxIMAGE database of pill images and are a subset of the images that will be retrieved by the NLM system. Below are some examples of the reference images in this directory:

Consumer-quality images: This directory, named DC, contains 5000 JPEG consumer-quality images of the same 1000 pills that were photographed for DR. These images were taken with digital cameras such as those in mobile devices and are intended to be similar in quality to the photos that future users of the NLM system may take with their smartphones. Below are some examples of the consumer-quality images in this directory:

For every pill that has at least one consumer-quality image in DC, DR has two reference images of that pill: one of the front of the pill and one of the back. Conversely, for every pill that has two reference images in DR, DC has five consumer-quality images of that pill, taken under different conditions.

Ground truth table: This is a two-column table with column headers ref_images and cons_images. In each row of the table, the first column names a reference image, and the second column names a consumer-quality image corresponding to the reference image. There is a separate row in the table for each (reference image, consumer-quality image) pair, even when multiple reference and consumer-quality images are all photos of the same pill.

For example, if reference1.jpg and reference2.jpg are reference images and consumer1.jpg, consumer2.jpg, and consumer3.jpg are consumer-quality images, all of the same pill, the corresponding entries in the ground truth table would be:

ref_images      cons_images
reference1.jpg  consumer1.jpg
reference1.jpg  consumer2.jpg
reference1.jpg  consumer3.jpg
reference2.jpg  consumer1.jpg
reference2.jpg  consumer2.jpg
reference2.jpg  consumer3.jpg
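As an aid for checking your own rankings, a table like this can be loaded into a mapping from each consumer-quality image to the set of reference images of the same pill. The short Python sketch below assumes the table is delivered as a comma-separated file with the column headers ref_images and cons_images; the file name ground_truth.csv is only a placeholder:

import csv
from collections import defaultdict

def load_ground_truth(path):
    # Map each consumer-quality image name to the set of reference
    # image names that show the same pill.
    matches = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # columns: ref_images, cons_images
            matches[row["cons_images"]].add(row["ref_images"])
    return matches

# Hypothetical usage:
# truth = load_ground_truth("ground_truth.csv")
# truth["consumer1.jpg"] would then be {"reference1.jpg", "reference2.jpg"}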


Submission Details

Software Specifications

Your software must be able to take as inputs directories containing arbitrary numbers of consumer and reference images, and produce an output matrix in which all of the reference images are ranked in order of how similar they are to each consumer image, as shown below.

Each row of the output matrix, referred to here as MR, should rank the reference images by similarity to that row's consumer image. In that row, the reference image most similar to the consumer image should have a rank of 1, and the reference image least similar to the consumer image should have a rank of N (the number of reference images), as shown below. Within a row, each rank from 1 to N must appear exactly once; no two reference images may receive the same rank for a given consumer image.
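For instance, if your algorithm produces a real-valued similarity score for each reference image, the scores for one consumer image can be turned into a valid set of ranks (1 = most similar, N = least similar, no repeated ranks) as in the following Python sketch; the scoring itself is whatever your algorithm computes and is only assumed here:

import numpy as np

def scores_to_ranks(scores):
    # scores: one similarity score per reference image (higher = more similar).
    # Returns ranks 1..N with no ties; equal scores are broken by position.
    order = np.argsort(-np.asarray(scores), kind="stable")  # most similar first
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

# Example: scores_to_ranks([0.2, 0.9, 0.9, 0.1]) returns array([3, 1, 2, 4])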

You may use any software development environment to implement your algorithm; however, your software must be command-line executable.

Your software should produce an output file, [yourentryname]_MR.csv (where [yourentryname] has five or more characters), in the same directory as the executable. The file should contain an ASCII comma-separated-value matrix in which:

  • The row labels of the matrix are the filenames of the consumer-quality images in the directory DC in alphabetical order
  • The column labels of the matrix are the filenames of the reference images in the directory DR in alphabetical order
  • Each entry in the matrix is a rank value of how similar each reference image is to the consumer-quality image in that row

Examples

Example 1: Screen capture of part of a file for [yourentryname]_MR.csv:


Example 2: Running our software

Let directory DC contain the following two files of consumer-quality images:

  • 10.jpg
  • 2.jpg

Note: In alphabetical order, file 10.jpg comes before 2.jpg.

Let directory DR contain the following two files of reference images:

  • 00093-7155-98_NLMIMAGE10_4A21A50D.jpg
  • 68180-0481-01_NLMIMAGE10_FA217D1B.jpg

Let the executable be in a file named [yourentryname]_executable.exe. One invokes the executable, specifying the output file [yourentryname]_MR.csv, with one of the commands shown at the end of this example.

After execution, [yourentryname]_MR.csv contains:

,00093-7155-98_NLMIMAGE10_4A21A50D.jpg,68180-0481-01_NLMIMAGE10_FA217D1B.jpg
10.jpg,2,1
2.jpg,1,2

This output is interpreted as the following rank matrix:

          00093-7155-98_NLMIMAGE10_4A21A50D.jpg    68180-0481-01_NLMIMAGE10_FA217D1B.jpg
10.jpg    2                                         1
2.jpg     1                                         2

Example of an Apple Mac OS X command:
[yourentryname]_executable DC DR [yourentryname]_MR.csv

Example of a Windows command:
C:\> [yourentryname]_executable.exe DC DR [yourentryname]_MR.csv
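
As an illustration of how an entry could produce this output, the Python sketch below lists the two input directories, scores every (consumer image, reference image) pair with a placeholder similarity function where your matching algorithm would go, converts each row of scores into ranks, and writes the matrix in the CSV layout shown above. It is a minimal sketch of the required command-line interface and output format, not the required implementation:

import csv, os, sys
import numpy as np

def similarity(consumer_path, reference_path):
    # Placeholder: your image-matching algorithm goes here.
    # Return a higher value for a more similar pair of images.
    return 0.0

def main(dc_dir, dr_dir, out_csv):
    consumers = sorted(f for f in os.listdir(dc_dir) if f.lower().endswith(".jpg"))
    references = sorted(f for f in os.listdir(dr_dir) if f.lower().endswith(".jpg"))
    with open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow([""] + references)  # header row: reference image file names
        for c in consumers:
            scores = np.array([similarity(os.path.join(dc_dir, c), os.path.join(dr_dir, r))
                               for r in references])
            order = np.argsort(-scores, kind="stable")        # most similar first
            ranks = np.empty(len(references), dtype=int)
            ranks[order] = np.arange(1, len(references) + 1)  # ranks 1..N, no ties
            writer.writerow([c] + ranks.tolist())

if __name__ == "__main__":
    # Usage: python [yourentryname]_executable.py DC DR [yourentryname]_MR.csv
    main(sys.argv[1], sys.argv[2], sys.argv[3])

Note that Python's sorted() orders file names lexicographically, so 10.jpg comes before 2.jpg, matching the alphabetical ordering used in the example above.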

Submission Entry

Your complete submission must be mailed to NLM (see the Mailing Instructions below) on a USB drive containing a virtual machine (VM) named [yourentryname]_VM that includes the following items:

  1. Executable software for ranking how well consumer images of pills taken by digital cameras match reference images. The software shall be a batch-mode program or a script whose input consists of a directory of consumer images and a directory of reference images. The output shall be a comma-separated-value (csv) M-by-N matrix MR of ranks that for i = 1,…,M compares consumer image i with reference images j = 1,…,N. For each consumer image, no rank can appear more than once. The software does not need to identify pills by name.
  2. Source code for the executable that is both human- and machine-readable. The source code can be written in any programming language(s).
  3. A .csv file containing the matrix MRC (C for Challenge) of ranks that is the output from executing the executable using DC and DR as input. In this case MRC is a matrix with 5000 rows and 2000 columns. For each row i, MRC(i,j) ranks how well reference image j matches consumer-quality image i, for j = 1,…,2000. If reference image J best matches consumer-quality image I, then MRC(I,J) = 1, and if reference image K is the worst match to consumer-quality image I, then MRC(I,K) = 2000. (A sanity-check sketch follows this list.)
  4. A text file written in English and containing the algorithm in pseudo-code that the source code implements, and a description of how it works and any tools or packages that it uses. The pseudo-code is to have the complete pipeline from the input directories to a matrix of ranks, and also include any code that implements features or does offline training.
  5. A one-page text file written in English that contains the following:
    1. Title of entry
    2. Names and email addresses of the team captain and all team members
    3. A five or more character identifier for the entry that is used as a prefix in the names of all of the team’s submitted files
    4. A brief description of the submission
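
Before mailing the USB drive, it may help to sanity-check the MRC file mentioned in item 3. The Python sketch below is an optional aid, not a required deliverable; the file name is a placeholder, and the expected dimensions follow the 5000 consumer-quality and 2000 reference images described above. It checks the matrix shape and that every row uses each rank from 1 to N exactly once:

import csv
import numpy as np

def check_rank_matrix(path, expected_rows=5000, expected_cols=2000):
    # Read the comma-separated rank matrix, including its header row.
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    assert len(header) - 1 == expected_cols, "unexpected number of reference columns"
    assert len(body) == expected_rows, "unexpected number of consumer rows"
    for row in body:
        ranks = np.array(row[1:], dtype=int)
        # Each row must be a permutation of 1..N: no repeated or missing ranks.
        assert np.array_equal(np.sort(ranks), np.arange(1, expected_cols + 1)), row[0]
    print("Rank matrix looks well-formed.")

# Hypothetical usage:
# check_rank_matrix("[yourentryname]_MRC.csv")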

VM Specifications

The VM can have any of the following 64-bit operating systems:

  • CentOS 6.x
  • CentOS 7.0
  • Fedora 20.x
  • Mac OS X 10.10.x
  • Mac OS X 10.8.x
  • Mac OS X 10.9.x
  • Red Hat Enterprise Linux 6.x
  • Red Hat Enterprise Linux 7.0
  • Ubuntu 14.x
  • Windows 7 SP1
  • Windows 8.x
  • Windows 10.0

Evaluation System

The NLM PIR Challenge evaluation system will be run with the following configuration:

  • 3.7GHz Intel Xeon E5 quad-core CPU, 10MB of L3 cache
  • 512GB PCIe-based flash storage
  • 64GB (4 x 16GB) of 1866MHz DDR3 ECC RAM
  • Apple Mac Pro
  • Mac OS X 10.10.x
  • VMware Fusion 7.0

Submissions must be in a format that is compatible with VMware Fusion 7.0, which can import virtual machines created with the following:

  • Microsoft Virtual PC for Mac Version 7
  • Open Virtualization Format (OVF and OVA) 1.0

VMware and other third-party products have tools to convert between virtual machine file formats.

Mailing Instructions

Please use the PIR Challenge Checklist, http://pir.nlm.nih.gov/challenge/checklist, to make sure your entry is complete, and then mail your submission to:

Pill Image Recognition Challenge
Computational Photography Project for Pill Identification
National Library of Medicine
Building 38A, Room B1-N30
8600 Rockville Pike
Bethesda, MD 20894


Evaluation and Winner Selection

Please read the "Basis Upon Which Winners Will be Selected" in the Federal Register Notice.

To evaluate your results, NLM will run your software on a database of consumer-quality and reference images different from the ones provided to you. The score you receive will be computed as the Mean Average Precision (MAP)(1) of the ranking matrix, a real number between 0 and 1, with 1 being the maximum (perfect) score. The Challenge entries will be ranked by their MAP scores, and the entries with the highest scores will be selected as winners.

For more information about MAP, please see the MAP example at http://pir.nlm.nih.gov/challenge/MAP_example and the NLM PIR Challenge evaluation code at http://pir.nlm.nih.gov/challenge/MAP_evaluation.docx.
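
For self-evaluation on the provided data, the Average Precision for one consumer image can be computed from its row of ranks and the set of reference images that truly show the same pill, and MAP is the mean of these values over all consumer images. The Python sketch below follows the standard definition of Average Precision from the textbook cited in footnote (1); it is offered as an illustration, not as the official evaluation code linked above:

import numpy as np

def average_precision(ranks, relevant):
    # ranks: dict mapping reference image name -> rank (1 = most similar).
    # relevant: set of reference image names that show the same pill.
    positions = sorted(ranks[r] for r in relevant)  # ranks of the correct images, best first
    precisions = [(i + 1) / pos for i, pos in enumerate(positions)]
    return float(np.mean(precisions))

def mean_average_precision(rank_rows, truth):
    # rank_rows: dict consumer image -> dict of reference image ranks.
    # truth: dict consumer image -> set of matching reference images.
    return float(np.mean([average_precision(rank_rows[c], truth[c]) for c in rank_rows]))

# Example: if the two correct reference images receive ranks 1 and 3,
# AP = (1/1 + 2/3) / 2 ≈ 0.833.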



(1) Manning CD, Raghavan P, Schütze H. Introduction to Information Retrieval. New York: Cambridge University Press; 2008. 506 p.