Resources

Learner Response Corpus

In my research on recognizing children's understanding of science concepts, I led the development of an annotated corpus of elementary students' responses to assessment questions.

We acquired grade 3-6 responses to 287 questions from the Assessing Science Knowledge (ASK) project (Lawrence Hall of Science, 2006). The responses, which range in length from moderately short verb phrases to several sentences, cover sixteen diverse teaching and learning modules spanning life science, physical science, earth and space science, scientific reasoning, and technology. We generated the corpus by transcribing a random sample of approximately 15,400 of the students' handwritten responses.

The ASK assessments included a reference answer for each constructed response question. We decomposed these reference answers into fine-grained facets and annotated each facet according to the student's apparent understanding of it. Please see my Publications page for more detail regarding the corpus; in particular, see:

Rodney D. Nielsen, Wayne Ward, James H. Martin and Martha Palmer. (2008). Annotating Students' Understanding of Science Concepts. In Proceedings of the Sixth International Language Resources and Evaluation Conference (LREC'08), Marrakech, Morocco, May 28-30, 2008. European Language Resources Association (ELRA), Paris, France.
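
To give a flavor of what facet-based annotation looks like, here is a hypothetical, simplified sketch. The actual facet representation and label set are described in the paper above; the question, facets, and label names below are invented purely for illustration:

```python
# Hypothetical, simplified illustration of facet-level annotation.
# The real facet representation and label set are described in
# Nielsen et al. (2008); the question, facets, and labels below
# are invented for illustration only.

reference_answer = "The string vibrates, and the vibration produces sound."

# Each facet captures one fine-grained piece of the reference
# answer's meaning; student responses are annotated per facet.
facets = [
    {"id": "Q12.F1", "facet": "(string, vibrates)"},
    {"id": "Q12.F2", "facet": "(vibration, produces, sound)"},
]

student_response = "The string moves back and forth really fast."

# One label per facet, reflecting the student's apparent
# understanding of that facet.
annotation = {
    "Q12.F1": "understood",   # the response expresses this facet
    "Q12.F2": "unaddressed",  # the response never mentions sound
}
```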

Training Data

Download the annotated learner answer corpus.

Download the reference answer markup.


Sarcasm Dataset

In our research on identifying domain-general features for sarcasm detection, we developed a dataset of sarcastic and non-sarcastic tweets. To do so, during February and March 2016 we downloaded tweets ending in one of the following hashtags: "#sarcasm," "#happiness," "#sadness," "#anger," "#fear," "#disgust," and "#surprise." We labeled the #sarcasm tweets as sarcastic, and the tweets containing the other six hashtags (corresponding to Paul Ekman's six basic emotions) as non-sarcastic. We chose these non-sarcastic hashtags because their associated tweets were still expected to express opinions, as sarcastic tweets do, but in a non-sarcastic way.

Note that this almost certainly increases the difficulty of discriminating between sarcastic and non-sarcastic tweets, since both are emotionally charged (see González-Ibáñez, Muresan, and Wacholder (2011) and Ghosh, Guo, and Muresan (2015) for related research). However, because distinguishing between literal and sarcastic sentiment is useful in real-world applications of sarcasm detection, we consider the presence of sentiment in our dataset a worthwhile challenge.
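
As a concrete sketch of the labeling rule, the following is a minimal illustration (not our actual collection code); the hashtag sets come directly from the description above, while the function itself is hypothetical:

```python
# Minimal sketch of the hashtag-based labeling rule described above.
# Illustrative only; not the collection code we actually used.
from typing import Optional

SARCASTIC_TAGS = {"#sarcasm"}
NON_SARCASTIC_TAGS = {
    "#happiness", "#sadness", "#anger",
    "#fear", "#disgust", "#surprise",
}

def label_tweet(text: str) -> Optional[str]:
    """Label a tweet by its trailing hashtag; returns None if it
    carries none of the hashtags the dataset was collected on."""
    tokens = text.strip().split()
    if not tokens:
        return None
    last = tokens[-1].lower()
    if last in SARCASTIC_TAGS:
        return "sarcastic"
    if last in NON_SARCASTIC_TAGS:
        return "non-sarcastic"
    return None

print(label_tweet("Oh sure, Mondays are the best #sarcasm"))  # sarcastic
print(label_tweet("Finally finished my thesis! #happiness"))  # non-sarcastic
```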

Below we provide the IDs of the tweets from the dataset that were still publicly available at the time of posting, divided into the training and test sets used in our work (reference below). For some of the original tweets that were no longer available, we were able to find identical publicly available retweets, so we include the IDs for those retweets as well. For your convenience, we also provide a script for downloading the tweets here.

Download the training set.

Download the test set.
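
If the provided script does not fit your setup, the IDs can also be hydrated directly. Below is a minimal, illustrative sketch (not the script linked above) that fetches tweet text through the Twitter API v2 tweet-lookup endpoint; the bearer token and the input filename are placeholders to replace with your own:

```python
# Illustrative sketch of hydrating tweet IDs via the Twitter API v2
# tweet-lookup endpoint (not the download script provided above).
# Assumes a valid bearer token; deleted or private tweets are skipped.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"          # placeholder credential
LOOKUP_URL = "https://api.twitter.com/2/tweets"

def hydrate(tweet_ids):
    """Fetch tweet text for a list of ID strings, 100 per request
    (the endpoint's batch limit)."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    tweets = {}
    for i in range(0, len(tweet_ids), 100):
        batch = tweet_ids[i:i + 100]
        resp = requests.get(LOOKUP_URL, headers=headers,
                            params={"ids": ",".join(batch)})
        resp.raise_for_status()
        for tweet in resp.json().get("data", []):
            tweets[tweet["id"]] = tweet["text"]
    return tweets

# "train_ids.txt" is a hypothetical filename; substitute the ID file
# you downloaded above.
with open("train_ids.txt") as f:
    ids = [line.strip() for line in f if line.strip()]
print(f"Retrieved {len(hydrate(ids))} tweets")
```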

For much more information about our work on sarcasm detection, refer to the paper below. Please also cite this paper if you use the tweets from this dataset in your own research.

Natalie Parde and Rodney D. Nielsen. (2017). #SarcasmDetection is soooo general! Towards a Domain-Independent Approach for Detecting Sarcasm. In Proceedings of the 30th International FLAIRS Conference, Marco Island, Florida, May 22-24, 2017.