Learning classifiers for many visual concepts is important for image categorization and retrieval. As a classifier tends to misclassify negative examples that are visually similar to positive ones, the inclusion of such misclassified, and thus relevant, negatives should be stressed during learning. User-tagged images are abundant online, but which of these images are relevant negatives remains unclear. Sampling negatives at random is the de facto standard in the literature. In this paper, we go beyond random sampling and study which widely available user-tagged images are relevant negatives for learning visual concept classifiers. To that end, we propose Negative Bootstrap. Given a specific concept and a few positive examples, the new algorithm combines random sampling and adaptive selection to iteratively find relevant negatives. Per iteration, we learn from a small proportion of many user-tagged images, yielding an ensemble of meta classifiers.
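To make the iteration concrete, below is a minimal sketch of Negative Bootstrap in Python with scikit-learn. It is an illustration under stated assumptions, not the exact implementation evaluated in this paper: the function names, the sample sizes, the cold-start rule in the first iteration, and the uniform averaging of meta-classifier scores are all illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

def hik(X, Y):
    """Histogram intersection kernel: K(x, z) = sum_d min(x_d, z_d)."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

def train_meta_classifier(positives, negatives):
    """Train a single HIK-SVM meta classifier; return its scoring function."""
    X = np.vstack([positives, negatives])
    y = np.hstack([np.ones(len(positives)), -np.ones(len(negatives))])
    clf = SVC(kernel=hik).fit(X, y)
    return clf.decision_function

def negative_bootstrap(positives, pool, iterations=10,
                       sample_size=1000, negatives_per_iter=100, seed=0):
    """Iteratively select relevant negatives from a pool of pseudo-negative
    (user-tagged) examples and grow an ensemble of meta classifiers."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(iterations):
        # Random sampling: draw a small candidate subset of the large pool.
        idx = rng.choice(len(pool), size=sample_size, replace=False)
        candidates = pool[idx]
        if ensemble:
            # Adaptive selection: candidates the current ensemble scores
            # highest are the most misclassified, and thus the most
            # relevant, negatives.
            scores = np.mean([score(candidates) for score in ensemble], axis=0)
            top = np.argsort(-scores)[:negatives_per_iter]
            selected = candidates[top]
        else:
            # Cold start: no ensemble yet, so keep a random subset.
            selected = candidates[:negatives_per_iter]
        ensemble.append(train_meta_classifier(positives, selected))
    return ensemble
```

Note that each iteration touches only a small random sample of the pool, while the adaptive selection concentrates the training negatives on examples the current ensemble confuses with positives.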
To address the inefficiency of applying such an ensemble of classifiers, we introduce Model Compression to compress an ensemble of histogram intersection kernel SVMs; consequently, the prediction time is independent of the size of the ensemble (see the sketch at the end of this section). To justify our proposals, we exploit 610K user-tagged images as pseudo-negative examples and conduct visual concept search experiments on two popular benchmark sets and a third test set of one million Flickr images.

Labeled examples are crucial for learning visual concept classifiers for image categorization and retrieval. To be more precise, we need positive and negative examples with respect to a specific concept. When the number of concepts is large, obtaining labeled examples in an efficient way is essential. Traditionally, labeled examples are annotated by expert annotators. However, expert labeling is labor intensive and time consuming, making well-labeled examples expensive to obtain and consequently limited in availability. Much research has been conducted towards inexpensive solutions for acquiring positive examples, e.g., from web image search results or socially tagged data, or by online collaborative annotation. For instance, one may train a visual classifier on the web image search results of a given concept and re-rank the search results by the classifier. Though such automated approaches are not comparable to dedicated manual annotation, their output provides a good starting point for manual labeling.

To automatically create a negative training set for a given concept, the mainstream methods randomly sample a relatively small subset from a large pool of (user-tagged) examples. The pool may consist of web images with free text or consumer photos with user-provided tags. Apart from the obvious fact that random sampling is simple and easy to use, we attribute its popularity to two reasons.
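Returning to the Model Compression step noted above: because the intersection kernel is additive over feature dimensions, the decision value of an entire ensemble of HIK SVMs is itself a sum of one-dimensional functions, one per dimension, which can be tabulated once. A prediction then costs a fixed number of table lookups regardless of how many meta classifiers were trained. The sketch below illustrates this well-known additive-kernel tabulation idea; the dict-based model layout, the equal ensemble weights, the [0, 1] feature range, and the plain grid lookup are assumptions for illustration, not necessarily the paper's exact construction.

```python
import numpy as np

def compress_hik_ensemble(models, bins=100, value_range=(0.0, 1.0)):
    """Fold an ensemble of HIK SVMs into per-dimension lookup tables.
    Each model is assumed to be a dict holding its support vectors
    'sv' (n_i x d), dual coefficients 'coef' (alpha_i * y_i, length n_i),
    and bias 'b'. Because min() decomposes over dimensions, the averaged
    ensemble decision value is a sum of 1-D functions
    h_d(s) = sum_i beta_i * min(s, z_id), tabulated here on a grid."""
    d = models[0]['sv'].shape[1]
    grid = np.linspace(*value_range, bins)
    table = np.zeros((d, bins))
    bias = 0.0
    weight = 1.0 / len(models)  # equal ensemble weights (assumed)
    for m in models:
        bias += weight * m['b']
        beta = weight * m['coef']
        for dim in range(d):
            z = m['sv'][:, dim]
            # h_d evaluated on every grid point at once.
            table[dim] += (beta * np.minimum(grid[:, None], z)).sum(axis=1)
    return grid, table, bias

def predict_compressed(x, grid, table, bias):
    """Score one feature vector by d table lookups; the cost does not
    grow with the number of meta classifiers in the ensemble."""
    idx = np.clip(np.searchsorted(grid, x), 0, len(grid) - 1)
    return table[np.arange(len(x)), idx].sum() + bias
```

In practice one would interpolate between neighboring grid points for accuracy; the point here is only that prediction touches grid cells and dimensions, not the support vectors of every ensemble member.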