Data and Image Analysis Special Interest Group Meeting,

SBS 11th Conference Geneva, Switzerland, September 15, 2005

Approximately 50 people participated in the meeting.

This year we continued the theme of the 2004 meeting: the quality of data generated in cell imaging-based assays. Specifically, the panel of presenters was asked to address the following subjects:

Quality of data generated by cell image analysis:

            -        How to measure it?

            -        What factors affect it?

            -        How much is it worth?

 The following presentations were made at the meeting:

 1. John T. Elliott, Research Scientist, Cell and Tissue Measurements Group, NIST.

    Identifying Statistically Relevant Differences between Distributions of Cell Response

2. Ralph J. Garippa, Research Leader, Cell-based HTS and Robotics Group, Roche Discovery Technologies.

    Use of the AcuityXpress Informatics Platform for Expanding the Potential of High-Content Screening

3. Ilya Ravkin, CTO, Vitra Bioscience.

    Comparison of Several Classes of Algorithms for Cytoplasm to Nucleus Translocation

4. D. Lansing Taylor, President and CEO, Cellumen.

    Improving the Quality of HCS Assays with Multiplexing, Statistics and Informatics

5. Eugeni Vaisberg, Associate Director, Research Informatics, Cytokinetics.

    Cytometrix(TM) - platform for high content, high throughput analysis of cell-based assays

6. Ahmad Yekta, Staff Scientist, GE Healthcare Biosciences.

    Quantifying Image Quality and its Influence on HCS Data

The talks fell into two groups: more quantitative, low-level analyses (1, 3, 6) and more qualitative, system-level presentations (2, 4, 5).

Discussion centered on two topics:

1. Statistical significance vs. biological significance based on the presentation by John Elliott.

2. Best of class in each category (imagers, image analysis algorithms, data analysis methods) vs. vertically integrated turn-key HCS systems.

The group felt that more time should be allocated next year for the SIG, and specifically for discussion.

In conjunction with the SIG meeting, a comparison of image analysis algorithms was organized. The rationale for this comparison is that while we derive quantitative data from images, our assessment of how well we do this is qualitative or nonexistent. The goal of the undertaking is to put different software offerings on a quantitative scale for a direct comparison of results and quality. Of the many important features by which image analysis algorithms and systems can be compared (speed, flexibility, user interaction, and price), we attempted to assess only one: the ability of the algorithm to extract from images numerical data with the greatest dynamic range and the smallest variability.
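As a sketch of how "greatest dynamic range and smallest variability" can be combined into a single score, one widely used measure in HTS assay validation is the Z'-factor (Zhang, Chung, and Oldenburg, 1999), which penalizes variability in the positive and negative controls relative to the separation between them. The report does not specify the comparison's scoring rule, so the metric choice and the per-well numbers below are purely illustrative:

```python
# Illustrative sketch: scoring an algorithm's output by the Z'-factor,
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
# Values near 1 indicate a wide, well-separated assay window.
from statistics import mean, stdev

def z_prime(positive, negative):
    """Compute the Z'-factor from positive- and negative-control readouts."""
    mu_p, mu_n = mean(positive), mean(negative)
    sd_p, sd_n = stdev(positive), stdev(negative)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical per-well readouts from two algorithms run on the same plate;
# algorithm A yields the same separation with less well-to-well variability.
algo_a = {"pos": [9.8, 10.1, 10.3, 9.9], "neg": [0.1, -0.2, 0.0, 0.2]}
algo_b = {"pos": [9.0, 11.5, 8.7, 11.0], "neg": [0.5, -1.0, 1.2, -0.6]}

for name, wells in [("A", algo_a), ("B", algo_b)]:
    print(f"algorithm {name}: Z' = {z_prime(wells['pos'], wells['neg']):.2f}")
```

On these made-up numbers, algorithm A scores markedly higher than B, illustrating how such a metric ranks algorithms on one quantitative scale even when both produce a similar mean separation.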

Ilya Ravkin reported on the progress of this undertaking.

The idea to compare algorithms started to take shape about a year ago. It took about 10 months to collect reasonable sets of images and establish the rules. The first announcement of the comparison and the availability of images was sent a month and a half before the meeting. By the time of the meeting there had been six requests for images (2 academic, 2 HW/SW vendors, 2 users). Two sets of results were received: one for cytoplasm-to-nucleus translocation (CNT) and one for Transfluor. Clearly, there was not enough time to complete the comparison, and we will continue the process until everyone who wants to participate has had a chance to do so.

The immediate tasks are:

- put images and results on the SBS ftp site;

- collect more image sets, especially of plates acquired on different instruments;

- reach more people who may want to participate;

- refine the comparison rules;

- process the results;

- start working on a publication.

Although the algorithm comparison has only just begun, it has already produced several important outcomes:

1. A publicly accessible image library for two important assays.

2. A methodology for quantitative algorithm evaluation.

3. Users are now able to ask vendors for quantitative evaluations of their algorithms and for comparison against this publicly available benchmark.

Ilya Ravkin, SIG Co-chairman