Authorship attribution is an important problem in information retrieval and computational linguistics, but also in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may enable, for example, law enforcement to save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given a sample of reference documents from a restricted and finite set of candidate authors, the task is to determine the most likely author of a previously unseen document of unknown authorship. This task may be quite challenging in cross-domain conditions, when documents of known and unknown authorship come from different domains (e.g., thematic area, genre). In addition, it is often more realistic to assume that the true author of a disputed document is not necessarily included in the list of candidates.
Fanfiction refers to fictional forms of literature which are nowadays produced by admirers ('fans') of a certain author (e.g. J.K. Rowling), novel ('Pride and Prejudice'), TV series (Sherlock Holmes), etc. The fans borrow heavily from the original work's theme, atmosphere, style, characters, story world, etc. to produce new fictional literature, the so-called fanfics. This is why fanfiction is also known as transformative literature; it has generated a number of controversies in recent years related to the intellectual property rights of the original authors (cf. plagiarism). Fanfiction, however, is typically produced by fans without any explicit commercial goals. The publication of fanfics typically happens online, on informal community platforms that are dedicated to making such literature accessible to a wider audience (e.g. fanfiction.net). The original work of art or genre is typically referred to as a fandom.
This edition of PAN focuses on cross-domain attribution in fanfiction, a task that can be more accurately described as cross-fandom attribution in fanfiction. In more detail, all documents of unknown authorship are fanfics of the same fandom (the target fandom), while the documents of known authorship by the candidate authors are fanfics of several fandoms (other than the target fandom). In contrast to the PAN-2018 edition of this task, we focus on open-set attribution conditions, namely that the true author of a text in the target fandom is not necessarily included in the list of candidate authors.
Each problem consists of a set of known fanfics by each candidate author and a set of unknown fanfics, located in separate folders. The file problem-info.json, found in the main folder of each problem, gives the name of the folder of unknown documents and the list of candidate-author folder names.
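Since the exact schema is not reproduced here, the following is only an illustrative sketch of what problem-info.json might contain (folder and field names are assumptions, not the official specification):

{
  "unknown-folder": "unknown",
  "candidate-authors": [
    { "author-name": "candidate00001" },
    { "author-name": "candidate00002" }
  ]
}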
The fanfics of known authorship belong to several fandoms (excluding the target fandom). The file fandom-info.json, also found in the main folder of each problem, provides the fandom of each fanfic of known authorship.
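A plausible layout for fandom-info.json (the key names are assumptions for illustration only) would map each known document to its fandom:

[
  { "fandom": "fandom name 1", "known-document": "known00001.txt" },
  { "fandom": "fandom name 2", "known-document": "known00002.txt" }
]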
The true author of each unknown document is given in the file ground-truth.json, also found in the main folder of each problem. Note that all unknown documents that are not written by any of the candidate authors belong to the <UNK> class.
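As an illustration only (the key names below are assumptions, not the official schema), ground-truth.json could pair each unknown document with its true author, using <UNK> for documents written by none of the candidates:

{
  "ground_truth": [
    { "unknown-text": "unknown00001.txt", "true-author": "candidate00002" },
    { "unknown-text": "unknown00002.txt", "true-author": "<UNK>" }
  ]
}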
In addition, to handle a collection of such problems, the file collection-info.json includes all relevant information. In more detail, for each problem it lists its main folder, the language (either "en", "fr", "it", or "sp"), and the encoding (always UTF-8) of the documents.
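A minimal sketch of collection-info.json under these assumptions (field names are illustrative, not the official schema) could be:

[
  { "problem-name": "problem00001", "language": "en", "encoding": "UTF-8" },
  { "problem-name": "problem00002", "language": "fr", "encoding": "UTF-8" }
]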
Many approaches have been proposed recently to identify the author of a given document. One fact, however, is often silently assumed: that the given document is written by a single author. For a realistic author identification system, it is therefore crucial to first determine whether a document is single- or multi-authored.
To this end, previous PAN editions aimed to analyze multi-authored documents. As it has proven hard to reliably identify individual authors and their contributions within a single document (Author Diarization, 2016; Style Breach Detection, 2017), last year's task substantially relaxed the problem by asking only for a binary decision (single- or multi-authored). Considering the promising results achieved by the submitted approaches (see the overview paper for details), we continue last year's task and additionally ask participants to predict the number of involved authors.
Given a document, participants thus should apply intrinsic style analyses to hierarchically answer the following questions:
Is the document written by one or more authors, i.e., do style changes exist or not?
If it is multi-authored, how many authors have collaborated?
All documents are provided in English and may contain zero up to arbitrarily many style changes, resulting from arbitrarily many authors.
The training set contains 50% of the whole dataset and includes the solutions. Use this set to train your models.
Like last year, the whole data set is based on user posts from various sites of the StackExchange network, covering different topics and containing approximately 300 to 2000 tokens per document.
For each problem instance X, two files are provided:
problem-X.txt contains the actual text
problem-X.truth contains the ground truth, i.e., the correct solution in JSON format:
{ "authors": number_of_authors, "structure": [author_segment_1, ..., author_segment_3], "switches": [ character_pos_switch_segment_1, ..., character_pos_switch_segment_n, ] }
An example for a multi-author document could look as follows:
{ "authors": 4, "structure": ["A1", "A2", "A4", "A2", "A4", "A2", "A3", "A2", "A4"], "switches": [805, 1552, 2827, 3584, 4340, 5489, 7564, 8714] }
whereas a single-author document would have exactly the following form:
{ "authors": 1, "structure": ["A1"], "switches": [] }
Note that authors within the structure correspond only to the respective document, i.e., they are not the same over the whole dataset. For example, author A1 in document 1 is most likely not the same author as A1 in document 2 (it could be, but as there are hundreds of authors the chances are very small that this is the case). Further, please consider that the structure and the switches are provided only as additional resources for the development of your algorithms, i.e., they are not expected to be predicted.
To tackle the problem, you can develop novel approaches, extend existing algorithms from last year's task or adapt approaches from related problems such as intrinsic plagiarism detection or text segmentation. You are also free to additionally evaluate your approaches on last year's training/validation/test dataset (for the number of authors use the corresponding meta data).