Protein crystals are an essential part of X-ray crystallography, the method CSIRO uses to determine the molecular structure of proteins. CSIRO is interested in protein structures because they are essential to our biotechnology research.
To find a protein crystal, structural biologists must set up thousands of experiments. Fewer than 5% of these experiments successfully grow crystals, and to find them we need to look at every experiment, many times, over the course of weeks. We developed Cinder to help us look for crystals.
Cinder is easy to use: just swipe right when you see crystals. Your classification of each experiment is pushed back to CSIRO so we can use it to develop search algorithms in our scientific software.
What is it? Cinder is a crystal-image classification tool developed for mobile platforms. It is used to score crystallisation experiment images into one of four categories: Clear, Precipitate, Other, and Crystal.
Why develop it? A few reasons, all linked to becoming more efficient at protein crystallisation. One reason is to provide a tool to help new scientists learn what protein crystallisation experiments look like. Another is to help C3 users look at their results. The ultimate goal is to develop a completely reliable AI package that will find crystals in images. In C3 we are using MARCO, a deep-learning tool trained on hundreds of thousands of scored images, to automatically classify all of our crystallisation images. MARCO is pretty good – it probably gets 90% of the classifications right. But we would like to refine MARCO to make it even better, and we plan to do this by using crowd-sourcing to resolve ambiguous cases. When we find an image that has both a MARCO classification and a human classification, and the two conflict, we put that image up on the Cinder app so that many different pairs of eyes can look at it and help us work out the most appropriate classification.
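The crowd-sourcing step above boils down to a simple rule: an image is queued for Cinder review when a human score exists and disagrees with MARCO. A minimal sketch of that rule in Python (the function and names here are illustrative assumptions, not CSIRO's actual code):

```python
from typing import Optional

# The four Cinder scoring categories, as described in the text.
CATEGORIES = {"Clear", "Precipitate", "Other", "Crystal"}

def needs_crowd_review(marco_label: str, human_label: Optional[str]) -> bool:
    """An image goes to the Cinder review queue when a human score
    exists and conflicts with MARCO's automatic classification."""
    if human_label is None:
        return False  # no human score yet, so nothing to compare
    if marco_label not in CATEGORIES or human_label not in CATEGORIES:
        raise ValueError("unknown classification")
    return marco_label != human_label

# Example: MARCO says Precipitate, a scientist says Crystal -> review it.
print(needs_crowd_review("Precipitate", "Crystal"))  # True
print(needs_crowd_review("Clear", "Clear"))          # False
```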
How does it work? Conceptually it has some similarity to the mobile dating application Tinder; however, we needed more granularity than “keep” (a crystal hit) or “discard” (everything else), so we included vertical swipes on top of the horizontal ones. Why? Precipitation and Other events can be helpful in guiding an optimisation strategy.
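The four swipe directions map onto the four scoring categories. A small sketch of such a mapping follows; the text only states that swiping right means a crystal, so the other three assignments are assumptions made for illustration:

```python
# Four-way swipe-to-category mapping. "right" -> Crystal is stated in
# the text; the other three directions are illustrative assumptions.
SWIPE_TO_CATEGORY = {
    "right": "Crystal",      # a hit, as described in the text
    "left": "Clear",         # assumed
    "up": "Precipitate",     # assumed
    "down": "Other",         # assumed
}

def classify(swipe: str) -> str:
    """Translate a swipe gesture into a scoring category."""
    try:
        return SWIPE_TO_CATEGORY[swipe]
    except KeyError:
        raise ValueError(f"unrecognised swipe direction: {swipe!r}")

print(classify("right"))  # Crystal
```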
How do we get it? From the links above, or download the Android apk directly below and follow the instructions here.
Cinder has three options: Solo, Community and Kinder.
Solo is used by scientists who have set up experiments in C3. After opening the app, tap ‘Solo’ and enter your C3 user name and password. The most recent images from your experiments will be presented in random order. Swipe (or tap) to score the image, and a new image will appear. No information about the crystallisation condition or protein is given (just to keep the app fast); however, if you want to know more, the ‘Details’ tab will give the barcode and well number of the image. All the Cinder Solo scores are uploaded to the central C3 database and will be visible through the See3 viewing software.
Community is a list of images which we would like to be able to classify more robustly – each week a new batch of difficult images (ones where the MARCO and human classifications disagree) is uploaded. Every image you score for us will be used to help improve our AI for machine recognition of crystal outcomes.
Kinder is a tool to help guide you through (some of) the possible outcomes of a crystallisation experiment. When you open this part of the app, a set of pre-classified, annotated images is loaded. You are presented with an image and get to select a classification. If you get it right, you get to see the annotation and can move on to the next example.
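The Kinder flow described above (guess, then see the annotation only when correct) can be sketched in a few lines. The `KinderItem` structure and field names are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KinderItem:
    """One pre-classified, annotated training image (illustrative)."""
    image_id: str
    category: str      # the expert classification
    annotation: str    # explanation revealed after a correct guess

def attempt(item: KinderItem, guess: str) -> Optional[str]:
    """Return the annotation when the guess matches the expert
    classification; otherwise return None so the user can try again."""
    if guess == item.category:
        return item.annotation
    return None

item = KinderItem("well_A1", "Crystal", "Needle clusters radiating from a point.")
print(attempt(item, "Crystal"))  # reveals the annotation
print(attempt(item, "Clear"))    # None -> try again
```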
We are always looking for more images for the Kinder training set! If you would like to contribute, please use the form below or email us.
Left: A Kinder example, waiting for your classification. Centre: Kinder view after getting the classification correct. Right: Zoomed in to see the detail of an experiment (needle clusters).